Column schema (name, dtype, and per-column value statistics):

| column | dtype | stats |
|---|---|---|
| id | int64 | 599M – 2.47B |
| url | string | lengths 58–61 |
| repository_url | string | 1 class |
| events_url | string | lengths 65–68 |
| labels | list | lengths 0–4 |
| active_lock_reason | null | |
| updated_at | string | length 20 |
| assignees | list | lengths 0–4 |
| html_url | string | lengths 46–51 |
| author_association | string | 4 classes |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| milestone | dict | |
| comments | sequence | lengths 0–30 |
| title | string | lengths 1–290 |
| reactions | dict | |
| node_id | string | lengths 18–32 |
| pull_request | dict | |
| created_at | string | length 20 |
| comments_url | string | lengths 67–70 |
| body | string | lengths 0–228k |
| user | dict | |
| labels_url | string | lengths 72–75 |
| timeline_url | string | lengths 67–70 |
| state | string | 2 classes |
| locked | bool | 1 class |
| number | int64 | 1 – 7.11k |
| performed_via_github_app | null | |
| closed_at | string | length 20 |
| assignee | dict | |
| is_pull_request | bool | 2 classes |
id: 2,158,152,341
url: https://api.github.com/repos/huggingface/datasets/issues/6699
repository_url: https://api.github.com/repos/huggingface/datasets
events_url: https://api.github.com/repos/huggingface/datasets/issues/6699/events
labels: []
active_lock_reason: null
updated_at: 2024-02-28T19:14:36Z
assignees: []
html_url: https://github.com/huggingface/datasets/issues/6699
author_association: NONE
state_reason: null
draft: null
milestone: null
comments:
[ "If `test.jsonl` contains more lines like:\r\n```\r\n{\"id\": 0, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 1, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 2, \"indexs\": {\"-2\": [0, 10]}}\r\n...\r\n{\"id\": n, \"indexs\": {\"-9999\": [0, 10]}}\r\n```\r\n\r\n`Dataset.from_json` will just raise an error:\r\n```\r\nAn error occurred while generating the dataset\r\nTypeError: Couldn't cast array of type\r\nstruct<-5942: list<item: int64>, -5943: list<item: int64>, -5944: list<item: int64>, -5945: list<item: int64>, -5946: list<item: int64>, -5947: list<item: int64>, -5948: list<item: int64>, -5949: list<item: int64>: ...\r\nto\r\n{... '-5312': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), '-5313': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/runpy.py\", line 198, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/runpy.py\", line 88, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py\", line 39, in <module>\r\n cli.main()\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 430, in main\r\n run()\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py\", line 284, in run_file\r\n runpy.run_path(target, run_name=\"__main__\")\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 321, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 135, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/home/scruel/.vscode-server/extensions/ms-python.debugpy-2024.0.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py\", line 124, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/scruel/Code/Python/Working/llm-memory/data_reader.py\", line 120, in <module>\r\n reader = SnippetReader(jsonl_path, npy_path)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/Code/Python/Working/llm-memory/data_reader.py\", line 85, in __init__\r\n self._dataset = Dataset.from_json(jsonl_path, features=)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/arrow_dataset.py\", line 1130, in from_json\r\n ).read()\r\n ^^^^^^\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/io/json.py\", line 59, in read\r\n self.builder.download_and_prepare(\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File 
\"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1100, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 1860, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/home/scruel/mambaforge/envs/vae/lib/python3.11/site-packages/datasets/builder.py\", line 2016, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```", "Hi! Our JSON parser expects all examples/rows to share the same set of columns (applies to nested columns, too), hence the error. \r\n\r\nTo read the `index` column, we would have to manually cast the input to PyArrow's `pa.map_` type, but this requires a more thorough investigation, as `pa.map_` has limited support in PyArrow." ]
title: `Dataset` unexpected changed dict data and may cause error
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6699/reactions" }
node_id: I_kwDODunzps6AosqV
pull_request: null
created_at: 2024-02-28T05:30:10Z
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6699/comments
body:
### Describe the bug

Will unexpectedly get keys with `None` values in the parsed JSON dict.

### Steps to reproduce the bug

`test.jsonl`:
```jsonl
{"id": 0, "indexs": {"-1": [0, 10]}}
{"id": 1, "indexs": {"-1": [0, 10]}}
```

```python
dataset = Dataset.from_json('.test.jsonl')
print(dataset[0])
```

Result:
```
{'id': 0, 'indexs': {'-1': [...], '-2': None, '-3': None, '-4': None, '-5': None, '-6': None, '-7': None, '-8': None, '-9': None, ...}}
```

Those keys with `None` values unexpectedly appear in the dict.

### Expected behavior

Result should be
```
{'id': 0, 'indexs': {'-1': [0, 10]}}
```

### Environment info

- `datasets` version: 2.16.1
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4", "events_url": "https://api.github.com/users/scruel/events{/privacy}", "followers_url": "https://api.github.com/users/scruel/followers", "following_url": "https://api.github.com/users/scruel/following{/other_user}", "gists_url": "https://api.github.com/users/scruel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/scruel", "id": 16933298, "login": "scruel", "node_id": "MDQ6VXNlcjE2OTMzMjk4", "organizations_url": "https://api.github.com/users/scruel/orgs", "received_events_url": "https://api.github.com/users/scruel/received_events", "repos_url": "https://api.github.com/users/scruel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scruel/subscriptions", "type": "User", "url": "https://api.github.com/users/scruel" }
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6699/labels{/name}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6699/timeline
state: open
locked: false
number: 6,699
performed_via_github_app: null
closed_at: null
assignee: null
is_pull_request: false

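A minimal PyArrow sketch of the `pa.map_` idea the maintainer mentions in the comments above, assuming the ragged `indexs` dicts from the reproduction; this is an illustration of the type, not how `datasets` handles the column today:

```python
import pyarrow as pa

# Rows whose "indexs" dicts have disjoint key sets, as in the report above.
rows = [
    {"id": 0, "indexs": {"-1": [0, 10]}},
    {"id": 1, "indexs": {"-2": [0, 10]}},
]

# A map type stores per-row key/value pairs instead of forcing one struct
# schema across all rows, so no None-filled keys are materialized.
index_type = pa.map_(pa.string(), pa.list_(pa.int64()))
indexs = pa.array([list(r["indexs"].items()) for r in rows], type=index_type)
table = pa.table({"id": [r["id"] for r in rows], "indexs": indexs})

print(table.column("indexs").to_pylist())
# [[('-1', [0, 10])], [('-2', [0, 10])]]
```
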
id: 2,157,752,392
url: https://api.github.com/repos/huggingface/datasets/issues/6698
repository_url: https://api.github.com/repos/huggingface/datasets
events_url: https://api.github.com/repos/huggingface/datasets/issues/6698/events
labels: []
active_lock_reason: null
updated_at: 2024-02-27T23:44:49Z
assignees: []
html_url: https://github.com/huggingface/datasets/pull/6698
author_association: COLLABORATOR
state_reason: null
draft: false
milestone: null
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6698). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "CI failure is unrelated to the changes.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005499 / 0.011353 (-0.005854) | 0.003824 / 0.011008 (-0.007184) | 0.064230 / 0.038508 (0.025722) | 0.028962 / 0.023109 (0.005853) | 0.283540 / 0.275898 (0.007642) | 0.300774 / 0.323480 (-0.022706) | 0.003405 / 0.007986 (-0.004581) | 0.002796 / 0.004328 (-0.001532) | 0.049834 / 0.004250 (0.045584) | 0.045924 / 0.037052 (0.008872) | 0.274818 / 0.258489 (0.016328) | 0.306189 / 0.293841 (0.012348) | 0.028304 / 0.128546 (-0.100242) | 0.011496 / 0.075646 (-0.064150) | 0.208236 / 0.419271 (-0.211036) | 0.035720 / 0.043533 (-0.007813) | 0.261190 / 0.255139 (0.006051) | 0.281545 / 0.283200 (-0.001655) | 0.019388 / 0.141683 (-0.122295) | 1.134999 / 1.452155 (-0.317156) | 1.203053 / 1.492716 (-0.289663) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096007 / 0.018006 (0.078000) | 0.316958 / 0.000490 (0.316469) | 0.000210 / 0.000200 (0.000010) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018330 / 0.037411 (-0.019081) | 0.063299 / 0.014526 (0.048773) | 0.073833 / 0.176557 (-0.102723) | 0.122285 / 0.737135 (-0.614850) | 0.077352 / 0.296338 (-0.218987) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 
5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304487 / 0.215209 (0.089278) | 3.017666 / 2.077655 (0.940012) | 1.664292 / 1.504120 (0.160172) | 1.448446 / 1.541195 (-0.092748) | 1.435612 / 1.468490 (-0.032878) | 0.569704 / 4.584777 (-4.015073) | 2.362015 / 3.745712 (-1.383698) | 2.910380 / 5.269862 (-2.359481) | 1.814560 / 4.565676 (-2.751116) | 0.063986 / 0.424275 (-0.360289) | 0.005022 / 0.007607 (-0.002585) | 0.363528 / 0.226044 (0.137483) | 3.641940 / 2.268929 (1.373011) | 1.961589 / 55.444624 (-53.483035) | 1.603683 / 6.876477 (-5.272793) | 1.663144 / 2.142072 (-0.478928) | 0.645628 / 4.805227 (-4.159599) | 0.118759 / 6.500664 (-6.381905) | 0.042631 / 0.075469 (-0.032838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985648 / 1.841788 (-0.856140) | 13.082558 / 8.074308 (5.008250) | 9.909811 / 10.191392 (-0.281581) | 0.131340 / 0.680424 (-0.549083) | 0.013983 / 0.534201 (-0.520218) | 0.289869 / 0.579283 (-0.289414) | 0.271775 / 0.434364 (-0.162589) | 0.334853 / 0.540337 (-0.205485) | 0.457017 / 1.386936 (-0.929919) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005580 / 0.011353 (-0.005773) | 0.003788 / 0.011008 (-0.007221) | 0.049401 / 0.038508 (0.010893) | 0.030372 / 0.023109 (0.007263) | 0.278554 / 0.275898 (0.002655) | 0.302462 / 0.323480 (-0.021018) | 0.004412 / 0.007986 (-0.003573) | 0.002825 / 0.004328 (-0.001504) | 0.047826 / 0.004250 (0.043576) | 0.047903 / 0.037052 (0.010851) | 0.293098 / 0.258489 (0.034609) | 0.322777 / 0.293841 (0.028936) | 0.030010 / 0.128546 (-0.098536) | 0.011187 / 0.075646 (-0.064459) | 0.057639 / 0.419271 (-0.361632) | 0.059693 / 0.043533 (0.016160) | 0.280288 / 0.255139 (0.025149) | 0.294022 / 0.283200 (0.010823) | 0.019635 / 0.141683 (-0.122048) | 1.154733 / 1.452155 (-0.297422) | 1.200808 / 1.492716 (-0.291908) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row 
|\n|--------|---|---|---|---|\n| new / old (diff) | 0.099682 / 0.018006 (0.081676) | 0.319521 / 0.000490 (0.319031) | 0.000224 / 0.000200 (0.000024) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022042 / 0.037411 (-0.015370) | 0.078842 / 0.014526 (0.064317) | 0.088715 / 0.176557 (-0.087841) | 0.126832 / 0.737135 (-0.610303) | 0.089217 / 0.296338 (-0.207122) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300099 / 0.215209 (0.084890) | 2.907746 / 2.077655 (0.830092) | 1.619418 / 1.504120 (0.115298) | 1.495693 / 1.541195 (-0.045501) | 1.544956 / 1.468490 (0.076466) | 0.556652 / 4.584777 (-4.028124) | 2.414408 / 3.745712 (-1.331304) | 2.737227 / 5.269862 (-2.532635) | 1.763187 / 4.565676 (-2.802490) | 0.062207 / 0.424275 (-0.362069) | 0.005076 / 0.007607 (-0.002531) | 0.349880 / 0.226044 (0.123836) | 3.425355 / 2.268929 (1.156427) | 1.972094 / 55.444624 (-53.472531) | 1.710650 / 6.876477 (-5.165827) | 1.902218 / 2.142072 (-0.239855) | 0.640699 / 4.805227 (-4.164529) | 0.117879 / 6.500664 (-6.382785) | 0.042412 / 0.075469 (-0.033057) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.030131 / 1.841788 (-0.811656) | 12.750637 / 8.074308 (4.676329) | 10.352636 / 10.191392 (0.161244) | 0.141139 / 0.680424 (-0.539285) | 0.015343 / 0.534201 (-0.518858) | 0.294931 / 0.579283 (-0.284352) | 0.275237 / 0.434364 (-0.159127) | 0.336669 / 0.540337 (-0.203668) | 0.429945 / 1.386936 (-0.956991) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9c424fa517a1b8517c89545f979e0c8c7d90c3e3 \"CML watermark\")\n" ]
title: Faster `xlistdir`
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6698/reactions" }
node_id: PR_kwDODunzps5oG6Xt
pull_request:
{ "diff_url": "https://github.com/huggingface/datasets/pull/6698.diff", "html_url": "https://github.com/huggingface/datasets/pull/6698", "merged_at": "2024-02-27T23:38:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/6698.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6698" }
created_at: 2024-02-27T22:55:08Z
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6698/comments
body: Pass `detail=False` to the `fsspec` `listdir` to avoid unnecessarily fetching expensive metadata about the paths.
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6698/labels{/name}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6698/timeline
state: closed
locked: false
number: 6,698
performed_via_github_app: null
closed_at: 2024-02-27T23:38:14Z
assignee: null
is_pull_request: true

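The optimization in this PR is easy to see directly against the `fsspec` API; a small sketch on a local filesystem (any `fsspec` implementation behaves the same way):

```python
import fsspec

fs = fsspec.filesystem("file")

# detail=True (the default) returns a metadata dict per entry, which can be
# expensive to assemble on remote filesystems; detail=False returns bare paths.
entries = fs.ls("/tmp", detail=True)   # [{'name': ..., 'size': ..., 'type': ...}, ...]
paths = fs.ls("/tmp", detail=False)    # ['/tmp/...', ...]
```
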
id: 2,157,322,224
url: https://api.github.com/repos/huggingface/datasets/issues/6697
repository_url: https://api.github.com/repos/huggingface/datasets
events_url: https://api.github.com/repos/huggingface/datasets/issues/6697/events
labels: []
active_lock_reason: null
updated_at: 2024-02-29T17:32:42Z
assignees: []
html_url: https://github.com/huggingface/datasets/issues/6697
author_association: NONE
state_reason: completed
draft: null
milestone: null
comments:
[ "FWIW, I run `load_dataset(\"llm-blender/mix-instruct\")` and it ran successfully.\r\nCan you clear your cache and try again?\r\n\r\n\r\n### Environment Info\r\n\r\n- `datasets` version: 2.17.0\r\n- Platform: Linux-6.2.6-76060206-generic-x86_64-with-glibc2.35\r\n- Python version: 3.9.13\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 1.5.3\r\n- `fsspec` version: 2023.10.0", "It is working on the Kaggle GPU instance but gives this same error when running on the CPU instance. Still to run it on Kaggle you require to install the latest versions of datasets and transformers.", "This error means that `fsspec>=2023.12.0` is installed, which is incompatible with the current releases (the next `datasets` release will be the first to support it). In the meantime, downgrading `fsspec` (`pip install fsspec<=2023.12.0`) should fix the issue.", "@mariosasko Thanks I got it to work with installing that version of fsspec." ]
title: Unable to Load Dataset in Kaggle
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6697/reactions" }
node_id: I_kwDODunzps6Alh_w
pull_request: null
created_at: 2024-02-27T18:19:34Z
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6697/comments
body:
### Describe the bug

Having installed the latest versions of transformers==4.38.1 and datasets==2.17.1, I am unable to load the dataset in a Kaggle notebook. I get this error:

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[8], line 3
      1 from datasets import load_dataset
----> 3 dataset = load_dataset("llm-blender/mix-instruct")

File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1664, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
   1661 ignore_verifications = ignore_verifications or save_infos
   1663 # Create a dataset builder
-> 1664 builder_instance = load_dataset_builder(
   1665     path=path,
   1666     name=name,
   1667     data_dir=data_dir,
   1668     data_files=data_files,
   1669     cache_dir=cache_dir,
   1670     features=features,
   1671     download_config=download_config,
   1672     download_mode=download_mode,
   1673     revision=revision,
   1674     use_auth_token=use_auth_token,
   1675     **config_kwargs,
   1676 )
   1678 # Return iterable dataset in case of streaming
   1679 if streaming:

File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1490, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
   1488 download_config = download_config.copy() if download_config else DownloadConfig()
   1489 download_config.use_auth_token = use_auth_token
-> 1490 dataset_module = dataset_module_factory(
   1491     path,
   1492     revision=revision,
   1493     download_config=download_config,
   1494     download_mode=download_mode,
   1495     data_dir=data_dir,
   1496     data_files=data_files,
   1497 )
   1499 # Get dataset builder class from the processing script
   1500 builder_cls = import_main_class(dataset_module.module_path)

File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1242, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
   1237 if isinstance(e1, FileNotFoundError):
   1238     raise FileNotFoundError(
   1239         f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
   1240         f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
   1241     ) from None
-> 1242 raise e1 from None
   1243 else:
   1244     raise FileNotFoundError(
   1245         f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory."
   1246     )

File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1230, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
   1215 return HubDatasetModuleFactoryWithScript(
   1216     path,
   1217     revision=revision,
(...)
   1220     dynamic_modules_path=dynamic_modules_path,
   1221 ).get_module()
   1222 else:
   1223     return HubDatasetModuleFactoryWithoutScript(
   1224         path,
   1225         revision=revision,
   1226         data_dir=data_dir,
   1227         data_files=data_files,
   1228         download_config=download_config,
   1229         download_mode=download_mode,
-> 1230     ).get_module()
   1231 except Exception as e1:  # noqa: all the attempts failed, before raising the error we should check if the module is already cached.
   1232     try:

File /opt/conda/lib/python3.10/site-packages/datasets/load.py:846, in HubDatasetModuleFactoryWithoutScript.get_module(self)
    836 token = self.download_config.use_auth_token
    837 hfh_dataset_info = HfApi(config.HF_ENDPOINT).dataset_info(
    838     self.name,
    839     revision=self.revision,
    840     token=token,
    841     timeout=100.0,
    842 )
    843 patterns = (
    844     sanitize_patterns(self.data_files)
    845     if self.data_files is not None
--> 846     else get_patterns_in_dataset_repository(hfh_dataset_info)
    847 )
    848 data_files = DataFilesDict.from_hf_repo(
    849     patterns,
    850     dataset_info=hfh_dataset_info,
    851     allowed_extensions=ALL_ALLOWED_EXTENSIONS,
    852 )
    853 infered_module_names = {
    854     key: infer_module_for_data_files(data_files_list, use_auth_token=self.download_config.use_auth_token)
    855     for key, data_files_list in data_files.items()
    856 }

File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:471, in get_patterns_in_dataset_repository(dataset_info)
    469 resolver = partial(_resolve_single_pattern_in_dataset_repository, dataset_info)
    470 try:
--> 471     return _get_data_files_patterns(resolver)
    472 except FileNotFoundError:
    473     raise FileNotFoundError(
    474         f"The dataset repository at '{dataset_info.id}' doesn't contain any data file."
    475     ) from None

File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:99, in _get_data_files_patterns(pattern_resolver)
     97 try:
     98     for pattern in patterns:
---> 99         data_files = pattern_resolver(pattern)
    100         if len(data_files) > 0:
    101             non_empty_splits.append(split)

File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:303, in _resolve_single_pattern_in_dataset_repository(dataset_info, pattern, allowed_extensions)
    301 data_files_ignore = FILES_TO_IGNORE
    302 fs = HfFileSystem(repo_info=dataset_info)
--> 303 glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)]
    304 matched_paths = [
    305     filepath
    306     for filepath in glob_iter
    307     if filepath.name not in data_files_ignore and not filepath.name.startswith(".")
    308 ]
    309 if allowed_extensions is not None:

File /opt/conda/lib/python3.10/site-packages/fsspec/spec.py:606, in AbstractFileSystem.glob(self, path, maxdepth, **kwargs)
    602 depth = None
    604 allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
--> 606 pattern = glob_translate(path + ("/" if ends_with_sep else ""))
    607 pattern = re.compile(pattern)
    609 out = {
    610     p: info
    611     for p, info in sorted(allpaths.items())
(...)
    618     )
    619 }

File /opt/conda/lib/python3.10/site-packages/fsspec/utils.py:734, in glob_translate(pat)
    732     continue
    733 elif "**" in part:
--> 734     raise ValueError(
    735         "Invalid pattern: '**' can only be an entire path component"
    736     )
    737 if part:
    738     results.extend(_translate(part, f"{not_sep}*", not_sep))

ValueError: Invalid pattern: '**' can only be an entire path component
```

After loading this dataset

### Steps to reproduce the bug

```
from datasets import load_dataset

dataset = load_dataset("llm-blender/mix-instruct")
```

### Expected behavior

The dataset should load with the desired split.

### Environment info

- `datasets` version: 2.17.1
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/97465624?v=4", "events_url": "https://api.github.com/users/vrunm/events{/privacy}", "followers_url": "https://api.github.com/users/vrunm/followers", "following_url": "https://api.github.com/users/vrunm/following{/other_user}", "gists_url": "https://api.github.com/users/vrunm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vrunm", "id": 97465624, "login": "vrunm", "node_id": "U_kgDOBc81GA", "organizations_url": "https://api.github.com/users/vrunm/orgs", "received_events_url": "https://api.github.com/users/vrunm/received_events", "repos_url": "https://api.github.com/users/vrunm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vrunm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrunm/subscriptions", "type": "User", "url": "https://api.github.com/users/vrunm" }
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6697/labels{/name}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6697/timeline
state: closed
locked: false
number: 6,697
performed_via_github_app: null
closed_at: 2024-02-29T17:32:41Z
assignee: null
is_pull_request: false

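A sketch of the workaround from the comments above, for a notebook where an affected `datasets` release is already installed; the pin string is copied verbatim from the maintainer's comment, and the retry assumes a fresh kernel where `fsspec` has not yet been imported:

```python
import subprocess
import sys

# Downgrade fsspec to a release the installed datasets version can handle,
# then retry the load (pin taken from the comment thread above).
subprocess.check_call([sys.executable, "-m", "pip", "install", "fsspec<=2023.12.0"])

from datasets import load_dataset

dataset = load_dataset("llm-blender/mix-instruct")
```
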
id: 2,154,161,357
url: https://api.github.com/repos/huggingface/datasets/issues/6696
repository_url: https://api.github.com/repos/huggingface/datasets
events_url: https://api.github.com/repos/huggingface/datasets/issues/6696/events
labels: []
active_lock_reason: null
updated_at: 2024-02-28T06:45:23Z
assignees: []
html_url: https://github.com/huggingface/datasets/pull/6696
author_association: MEMBER
state_reason: null
draft: false
milestone: null
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6696). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005057 / 0.011353 (-0.006296) | 0.003665 / 0.011008 (-0.007343) | 0.063217 / 0.038508 (0.024709) | 0.028789 / 0.023109 (0.005679) | 0.233597 / 0.275898 (-0.042301) | 0.254792 / 0.323480 (-0.068687) | 0.003065 / 0.007986 (-0.004921) | 0.002686 / 0.004328 (-0.001642) | 0.050182 / 0.004250 (0.045932) | 0.042204 / 0.037052 (0.005151) | 0.254262 / 0.258489 (-0.004227) | 0.277099 / 0.293841 (-0.016742) | 0.027564 / 0.128546 (-0.100982) | 0.010768 / 0.075646 (-0.064878) | 0.207302 / 0.419271 (-0.211969) | 0.035737 / 0.043533 (-0.007796) | 0.242388 / 0.255139 (-0.012751) | 0.259833 / 0.283200 (-0.023367) | 0.019833 / 0.141683 (-0.121850) | 1.135928 / 1.452155 (-0.316227) | 1.162851 / 1.492716 (-0.329865) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089209 / 0.018006 (0.071202) | 0.300493 / 0.000490 (0.300003) | 0.000216 / 0.000200 (0.000016) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017968 / 0.037411 (-0.019444) | 0.061773 / 0.014526 (0.047247) | 0.073835 / 0.176557 (-0.102722) | 0.118592 / 0.737135 (-0.618544) | 0.073606 / 0.296338 (-0.222732) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287858 / 0.215209 (0.072649) | 2.822917 / 2.077655 (0.745262) | 1.485259 / 1.504120 (-0.018861) | 1.355922 / 1.541195 (-0.185273) | 1.364008 / 1.468490 (-0.104482) | 0.557713 / 4.584777 (-4.027064) | 2.378972 / 3.745712 (-1.366741) | 2.737218 / 5.269862 (-2.532643) | 1.718317 / 4.565676 (-2.847359) | 0.062362 / 0.424275 (-0.361913) | 0.004992 / 0.007607 (-0.002615) | 0.350765 / 0.226044 (0.124721) | 3.387579 / 2.268929 (1.118650) | 1.860408 / 55.444624 (-53.584216) | 1.569355 / 6.876477 (-5.307122) | 1.593013 / 2.142072 (-0.549059) | 0.639325 / 4.805227 (-4.165902) | 0.121769 / 6.500664 (-6.378895) | 0.042148 / 0.075469 (-0.033322) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.990594 / 1.841788 (-0.851194) | 11.460904 / 8.074308 (3.386596) | 9.438691 / 10.191392 (-0.752701) | 0.141884 / 0.680424 (-0.538540) | 0.013725 / 0.534201 (-0.520476) | 0.288847 / 0.579283 (-0.290436) | 0.278815 / 0.434364 (-0.155549) | 0.337108 / 0.540337 (-0.203229) | 0.441659 / 1.386936 (-0.945277) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005265 / 0.011353 (-0.006088) | 0.003734 / 0.011008 (-0.007274) | 0.049365 / 0.038508 (0.010857) | 0.030483 / 0.023109 (0.007373) | 0.275085 / 0.275898 (-0.000813) | 0.296004 / 0.323480 (-0.027475) | 0.004964 / 0.007986 (-0.003022) | 0.002542 / 0.004328 (-0.001787) | 0.048734 / 0.004250 (0.044483) | 0.044098 / 0.037052 (0.007046) | 0.292517 / 0.258489 (0.034028) | 0.319992 / 0.293841 (0.026151) | 0.029552 / 0.128546 (-0.098994) | 0.010669 / 0.075646 (-0.064977) | 0.058887 / 0.419271 (-0.360385) | 0.051163 / 0.043533 (0.007630) | 0.277266 / 0.255139 (0.022127) | 0.295347 / 0.283200 (0.012147) | 0.018403 / 0.141683 (-0.123280) | 1.151979 / 1.452155 (-0.300176) | 1.204583 / 1.492716 (-0.288134) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091157 / 0.018006 (0.073151) | 0.300109 / 0.000490 (0.299619) | 0.000211 / 0.000200 (0.000011) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021521 / 0.037411 (-0.015890) | 0.074954 / 0.014526 (0.060428) | 0.087010 / 0.176557 (-0.089546) | 0.125853 / 0.737135 (-0.611282) | 0.087877 / 0.296338 (-0.208461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297890 / 0.215209 (0.082681) | 2.912159 / 2.077655 (0.834504) | 1.619311 / 1.504120 (0.115192) | 1.501726 / 1.541195 (-0.039468) | 1.494143 / 1.468490 (0.025652) | 0.566744 / 4.584777 (-4.018033) | 2.497594 / 3.745712 (-1.248118) | 2.631403 / 5.269862 (-2.638459) | 1.727896 / 4.565676 (-2.837780) | 0.065937 / 0.424275 (-0.358339) | 0.005023 / 0.007607 (-0.002585) | 0.345747 / 0.226044 (0.119702) | 3.417615 / 2.268929 (1.148686) | 1.949970 / 55.444624 (-53.494654) | 1.680019 / 6.876477 (-5.196457) | 1.789879 / 2.142072 (-0.352193) | 0.648053 / 4.805227 (-4.157174) | 0.117408 / 6.500664 (-6.383256) | 0.040681 / 0.075469 (-0.034788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.012535 / 1.841788 (-0.829252) | 11.935819 / 8.074308 (3.861511) | 10.241452 / 10.191392 (0.050060) | 0.130956 / 0.680424 (-0.549468) | 0.015396 / 0.534201 (-0.518805) | 0.289166 / 0.579283 (-0.290117) | 0.274149 / 0.434364 (-0.160215) | 0.325844 / 0.540337 (-0.214493) | 0.424919 / 1.386936 (-0.962017) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cb834d9c63ab8cb14725ae8e4fc2da8672892a6d \"CML watermark\")\n" ]
title: Make JSON builder support an array of strings
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6696/reactions" }
node_id: PR_kwDODunzps5n6ipH
pull_request:
{ "diff_url": "https://github.com/huggingface/datasets/pull/6696.diff", "html_url": "https://github.com/huggingface/datasets/pull/6696", "merged_at": "2024-02-28T06:39:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/6696.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6696" }
created_at: 2024-02-26T13:18:31Z
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6696/comments
body: Support JSON file with an array of strings. Fix #6695.
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6696/labels{/name}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6696/timeline
state: closed
locked: false
number: 6,696
performed_via_github_app: null
closed_at: 2024-02-28T06:39:12Z
assignee: null
is_pull_request: true

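For context, the input shape this PR adds support for is a top-level JSON array of strings. A small sketch of loading such a file once the change is in (the file name is made up for the example; the exact resulting column name is the builder's choice, not asserted here):

```python
import json

from datasets import load_dataset

# A top-level JSON array of strings -- the shape the PR adds support for.
with open("texts.json", "w") as f:
    json.dump(["first example", "second example"], f)

ds = load_dataset("json", data_files="texts.json", split="train")
print(ds.column_names, ds.num_rows)  # expect a single string column and 2 rows
```
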
id: 2,154,075,509
url: https://api.github.com/repos/huggingface/datasets/issues/6695
repository_url: https://api.github.com/repos/huggingface/datasets
events_url: https://api.github.com/repos/huggingface/datasets/issues/6695/events
labels:
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
active_lock_reason: null
updated_at: 2024-03-08T14:16:25Z
assignees:
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
html_url: https://github.com/huggingface/datasets/issues/6695
author_association: MEMBER
state_reason: completed
draft: null
milestone: null
comments:
[ "https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 has been fixed, but how can we check if there are other datasets with the same error, in datasets-server's database? I don't know how to get the list of erroneous cache entries, since we only copied `Error code: JobManagerCrashedError`, but not the traceback in `details`... Do you remember the error message, or the underlying exception, we had?" ]
title: Support JSON file with an array of strings
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6695/reactions" }
node_id: I_kwDODunzps6AZJV1
pull_request: null
created_at: 2024-02-26T12:35:11Z
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6695/comments
body: Support loading a dataset from a JSON file with an array of strings. See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6695/labels{/name}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6695/timeline
state: closed
locked: false
number: 6,695
performed_via_github_app: null
closed_at: 2024-02-28T06:39:13Z
assignee:
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
is_pull_request: false

id: 2,153,086,984
url: https://api.github.com/repos/huggingface/datasets/issues/6694
repository_url: https://api.github.com/repos/huggingface/datasets
events_url: https://api.github.com/repos/huggingface/datasets/issues/6694/events
labels: []
active_lock_reason: null
updated_at: 2024-02-29T16:52:58Z
assignees: []
html_url: https://github.com/huggingface/datasets/pull/6694
author_association: NONE
state_reason: null
draft: false
milestone: null
comments:
[ "Hi! You can find a reason why we are against this feature in https://github.com/huggingface/datasets/issues/3449. \r\n\r\n> It's too cumbersome to write this command every time we perform a dataset merging operation\r\n\r\nExplicit is better than implicit, so this isn't a good enough reason. \r\n\r\nThanks for the effort nonetheless :)!" ]
title: __add__ for Dataset, IterableDataset
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6694/reactions" }
node_id: PR_kwDODunzps5n23Jz
pull_request:
{ "diff_url": "https://github.com/huggingface/datasets/pull/6694.diff", "html_url": "https://github.com/huggingface/datasets/pull/6694", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6694.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6694" }
created_at: 2024-02-26T01:46:55Z
comments_url: https://api.github.com/repos/huggingface/datasets/issues/6694/comments
body:
It's too cumbersome to write this command every time we perform a dataset merging operation.

```python
from datasets import concatenate_datasets
```

We have added a simple `__add__` magic method to each class using `concatenate_datasets`.

```python
from datasets import load_dataset

bookcorpus = load_dataset("bookcorpus", split="train")
wiki = load_dataset("wikimedia/wikipedia", "20231101.ab", split="train")
wiki = wiki.remove_columns([col for col in wiki.column_names if col != "text"])  # only keep the 'text' column

bookcorpus + wiki

#Dataset({
#    features: ['text'],
#    num_rows: 74004228
#})
#Dataset({
#    features: ['text'],
#    num_rows: 6152
#})
#Dataset({
#    features: ['text'],
#    num_rows: 74010380
#})
```
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/79557937?v=4", "events_url": "https://api.github.com/users/oh-gnues-iohc/events{/privacy}", "followers_url": "https://api.github.com/users/oh-gnues-iohc/followers", "following_url": "https://api.github.com/users/oh-gnues-iohc/following{/other_user}", "gists_url": "https://api.github.com/users/oh-gnues-iohc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/oh-gnues-iohc", "id": 79557937, "login": "oh-gnues-iohc", "node_id": "MDQ6VXNlcjc5NTU3OTM3", "organizations_url": "https://api.github.com/users/oh-gnues-iohc/orgs", "received_events_url": "https://api.github.com/users/oh-gnues-iohc/received_events", "repos_url": "https://api.github.com/users/oh-gnues-iohc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/oh-gnues-iohc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oh-gnues-iohc/subscriptions", "type": "User", "url": "https://api.github.com/users/oh-gnues-iohc" }
labels_url: https://api.github.com/repos/huggingface/datasets/issues/6694/labels{/name}
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/6694/timeline
state: open
locked: false
number: 6,694
performed_via_github_app: null
closed_at: null
assignee: null
is_pull_request: true

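The explicit form the maintainers prefer over an overloaded `+` (see the comment above and issue #3449) is `concatenate_datasets`; a tiny self-contained sketch with toy data standing in for the corpora from the PR description:

```python
from datasets import Dataset, concatenate_datasets

# Toy stand-ins for the two corpora being merged.
a = Dataset.from_dict({"text": ["one", "two"]})
b = Dataset.from_dict({"text": ["three"]})

merged = concatenate_datasets([a, b])
print(merged.num_rows)  # 3
```
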
id: 2,152,887,712
url: https://api.github.com/repos/huggingface/datasets/issues/6693
repository_url: https://api.github.com/repos/huggingface/datasets
events_url: https://api.github.com/repos/huggingface/datasets/issues/6693/events
labels: []
active_lock_reason: null
updated_at: 2024-02-25T19:57:12Z
assignees: []
html_url: https://github.com/huggingface/datasets/pull/6693
author_association: CONTRIBUTOR
state_reason: null
draft: false
milestone: null
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6693). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005069 / 0.011353 (-0.006284) | 0.003682 / 0.011008 (-0.007326) | 0.063733 / 0.038508 (0.025225) | 0.030377 / 0.023109 (0.007268) | 0.242962 / 0.275898 (-0.032936) | 0.262865 / 0.323480 (-0.060615) | 0.004760 / 0.007986 (-0.003225) | 0.002772 / 0.004328 (-0.001557) | 0.049094 / 0.004250 (0.044843) | 0.041093 / 0.037052 (0.004041) | 0.260423 / 0.258489 (0.001934) | 0.283908 / 0.293841 (-0.009933) | 0.027409 / 0.128546 (-0.101138) | 0.010548 / 0.075646 (-0.065098) | 0.208637 / 0.419271 (-0.210634) | 0.035386 / 0.043533 (-0.008147) | 0.242352 / 0.255139 (-0.012787) | 0.264201 / 0.283200 (-0.018999) | 0.017822 / 0.141683 (-0.123860) | 1.140792 / 1.452155 (-0.311363) | 1.166782 / 1.492716 (-0.325934) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094727 / 0.018006 (0.076720) | 0.308548 / 0.000490 (0.308059) | 0.000213 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018106 / 0.037411 (-0.019305) | 0.062057 / 0.014526 (0.047531) | 0.073821 / 0.176557 (-0.102735) | 0.121269 / 0.737135 (-0.615867) | 0.074062 / 0.296338 (-0.222277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282978 / 0.215209 (0.067768) | 2.788626 / 2.077655 (0.710971) | 1.479756 / 1.504120 (-0.024364) | 1.360620 / 1.541195 (-0.180575) | 1.363996 / 1.468490 (-0.104494) | 0.571646 / 4.584777 (-4.013131) | 2.430630 / 3.745712 (-1.315083) | 2.783909 / 5.269862 (-2.485953) | 1.744617 / 4.565676 (-2.821060) | 0.062771 / 0.424275 (-0.361504) | 0.004978 / 0.007607 (-0.002629) | 0.347929 / 0.226044 (0.121884) | 3.368837 / 2.268929 (1.099908) | 1.855635 / 55.444624 (-53.588990) | 1.581555 / 6.876477 (-5.294922) | 1.589888 / 2.142072 (-0.552184) | 0.655821 / 4.805227 (-4.149406) | 0.118990 / 6.500664 (-6.381674) | 0.042191 / 0.075469 (-0.033278) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.991099 / 1.841788 (-0.850688) | 11.627919 / 8.074308 (3.553611) | 9.554180 / 10.191392 (-0.637212) | 0.140541 / 0.680424 (-0.539882) | 0.014264 / 0.534201 (-0.519937) | 0.288465 / 0.579283 (-0.290818) | 0.266400 / 0.434364 (-0.167964) | 0.324400 / 0.540337 (-0.215938) | 0.423158 / 1.386936 (-0.963778) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005588 / 0.011353 (-0.005765) | 0.003784 / 0.011008 (-0.007224) | 0.049961 / 0.038508 (0.011453) | 0.031215 / 0.023109 (0.008105) | 0.280859 / 0.275898 (0.004961) | 0.306416 / 0.323480 (-0.017063) | 0.004310 / 0.007986 (-0.003676) | 0.002884 / 0.004328 (-0.001445) | 0.049662 / 0.004250 (0.045412) | 0.046611 / 0.037052 (0.009559) | 0.293353 / 0.258489 (0.034864) | 0.327839 / 0.293841 (0.033998) | 0.050784 / 0.128546 (-0.077763) | 0.010890 / 0.075646 (-0.064757) | 0.059612 / 0.419271 (-0.359659) | 0.033175 / 0.043533 (-0.010358) | 0.281085 / 0.255139 (0.025946) | 0.302746 / 0.283200 (0.019547) | 0.019201 / 0.141683 (-0.122481) | 1.126722 / 1.452155 (-0.325433) | 1.225678 / 1.492716 (-0.267038) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.094335 / 0.018006 (0.076329) | 0.304774 / 0.000490 (0.304285) | 0.000207 / 0.000200 (0.000007) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021648 / 0.037411 (-0.015763) | 0.077920 / 0.014526 (0.063394) | 0.087125 / 0.176557 (-0.089432) | 0.125481 / 0.737135 (-0.611654) | 0.089415 / 0.296338 (-0.206924) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304955 / 0.215209 (0.089746) | 2.992587 / 2.077655 (0.914932) | 1.654609 / 1.504120 (0.150490) | 1.509114 / 1.541195 (-0.032081) | 1.530906 / 1.468490 (0.062416) | 0.572092 / 4.584777 (-4.012685) | 2.477902 / 3.745712 (-1.267810) | 2.731363 / 5.269862 (-2.538498) | 1.750000 / 4.565676 (-2.815677) | 0.063662 / 0.424275 (-0.360613) | 0.005008 / 0.007607 (-0.002600) | 0.353066 / 0.226044 (0.127022) | 3.528309 / 2.268929 (1.259380) | 2.009238 / 55.444624 (-53.435387) | 1.717792 / 6.876477 (-5.158685) | 1.861699 / 2.142072 (-0.280373) | 0.667392 / 4.805227 (-4.137835) | 0.119197 / 6.500664 (-6.381467) | 0.041131 / 0.075469 (-0.034338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032182 / 1.841788 (-0.809605) | 12.042613 / 8.074308 (3.968305) | 10.256293 / 10.191392 (0.064901) | 0.141180 / 0.680424 (-0.539244) | 0.015005 / 0.534201 (-0.519196) | 0.290081 / 0.579283 (-0.289202) | 0.281081 / 0.434364 (-0.153283) | 0.331425 / 0.540337 (-0.208912) | 0.418674 / 1.386936 (-0.968262) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ad5b221c01a183a66cbf52a6d708f94e0cff0b53 \"CML watermark\")\n" ]
Update the print message for chunked_dataset in process.mdx
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6693/reactions" }
PR_kwDODunzps5n2ObO
{ "diff_url": "https://github.com/huggingface/datasets/pull/6693.diff", "html_url": "https://github.com/huggingface/datasets/pull/6693", "merged_at": "2024-02-25T19:51:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/6693.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6693" }
2024-02-25T18:37:07Z
https://api.github.com/repos/huggingface/datasets/issues/6693/comments
Update documentation to align with `Dataset.__repr__` change after #423
{ "avatar_url": "https://avatars.githubusercontent.com/u/142939562?v=4", "events_url": "https://api.github.com/users/gzbfgjf2/events{/privacy}", "followers_url": "https://api.github.com/users/gzbfgjf2/followers", "following_url": "https://api.github.com/users/gzbfgjf2/following{/other_user}", "gists_url": "https://api.github.com/users/gzbfgjf2/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gzbfgjf2", "id": 142939562, "login": "gzbfgjf2", "node_id": "U_kgDOCIUVqg", "organizations_url": "https://api.github.com/users/gzbfgjf2/orgs", "received_events_url": "https://api.github.com/users/gzbfgjf2/received_events", "repos_url": "https://api.github.com/users/gzbfgjf2/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gzbfgjf2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gzbfgjf2/subscriptions", "type": "User", "url": "https://api.github.com/users/gzbfgjf2" }
https://api.github.com/repos/huggingface/datasets/issues/6693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6693/timeline
closed
false
6,693
null
2024-02-25T19:51:02Z
null
true
2,152,270,987
https://api.github.com/repos/huggingface/datasets/issues/6692
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6692/events
[]
null
2024-02-26T15:33:50Z
[]
https://github.com/huggingface/datasets/pull/6692
NONE
null
false
null
[ "Hi @harsh1504660,\r\n\r\nThanks for your work, but this functionality already exists. See my comment in the corresponding issue: https://github.com/huggingface/datasets/issues/6691#issuecomment-1963449923\r\n\r\nNext time you would like to contribute, I would suggest you take on an issue that is previously validated by one of the maintainers. Thanks anyway." ]
Enhancement: Enable loading TSV files in load_dataset()
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6692/reactions" }
PR_kwDODunzps5n0XN1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6692.diff", "html_url": "https://github.com/huggingface/datasets/pull/6692", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6692.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6692" }
2024-02-24T11:38:59Z
https://api.github.com/repos/huggingface/datasets/issues/6692/comments
Fix #6691
{ "avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4", "events_url": "https://api.github.com/users/harsh1504660/events{/privacy}", "followers_url": "https://api.github.com/users/harsh1504660/followers", "following_url": "https://api.github.com/users/harsh1504660/following{/other_user}", "gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/harsh1504660", "id": 77767961, "login": "harsh1504660", "node_id": "MDQ6VXNlcjc3NzY3OTYx", "organizations_url": "https://api.github.com/users/harsh1504660/orgs", "received_events_url": "https://api.github.com/users/harsh1504660/received_events", "repos_url": "https://api.github.com/users/harsh1504660/repos", "site_admin": false, "starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions", "type": "User", "url": "https://api.github.com/users/harsh1504660" }
https://api.github.com/repos/huggingface/datasets/issues/6692/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6692/timeline
closed
false
6,692
null
2024-02-26T07:14:03Z
null
true
2,152,134,041
https://api.github.com/repos/huggingface/datasets/issues/6691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6691/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-02-26T07:15:07Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4", "events_url": "https://api.github.com/users/harsh1504660/events{/privacy}", "followers_url": "https://api.github.com/users/harsh1504660/followers", "following_url": "https://api.github.com/users/harsh1504660/following{/other_user}", "gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/harsh1504660", "id": 77767961, "login": "harsh1504660", "node_id": "MDQ6VXNlcjc3NzY3OTYx", "organizations_url": "https://api.github.com/users/harsh1504660/orgs", "received_events_url": "https://api.github.com/users/harsh1504660/received_events", "repos_url": "https://api.github.com/users/harsh1504660/repos", "site_admin": false, "starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions", "type": "User", "url": "https://api.github.com/users/harsh1504660" } ]
https://github.com/huggingface/datasets/issues/6691
NONE
completed
null
null
[ "#self-assign", "Hi @dipsivenkatesh,\r\n\r\nPlease note that this functionality is already implemented. Our CSV builder uses `pandas.read_csv` under the hood, and you can pass the parameter `delimiter=\"\\t\"` to read TSV files.\r\n\r\nSee the list of CSV config parameters in our docs: https://huggingface.co/docs/datasets/package_reference/loading_methods#datasets.packaged_modules.csv.CsvConfig" ]
load_dataset() does not support tsv
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6691/reactions" }
I_kwDODunzps6ARvWZ
null
2024-02-24T05:56:04Z
https://api.github.com/repos/huggingface/datasets/issues/6691/comments
### Feature request load_dataset() for local files supports file types like csv, json, etc., but not tsv (tab-separated values). ### Motivation Files of type tsv can't easily be loaded; they have to be converted to another type like csv and then loaded. ### Your contribution Can try by raising a PR with a little help; I went through the code but didn't fully understand it.
{ "avatar_url": "https://avatars.githubusercontent.com/u/26873178?v=4", "events_url": "https://api.github.com/users/dipsivenkatesh/events{/privacy}", "followers_url": "https://api.github.com/users/dipsivenkatesh/followers", "following_url": "https://api.github.com/users/dipsivenkatesh/following{/other_user}", "gists_url": "https://api.github.com/users/dipsivenkatesh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dipsivenkatesh", "id": 26873178, "login": "dipsivenkatesh", "node_id": "MDQ6VXNlcjI2ODczMTc4", "organizations_url": "https://api.github.com/users/dipsivenkatesh/orgs", "received_events_url": "https://api.github.com/users/dipsivenkatesh/received_events", "repos_url": "https://api.github.com/users/dipsivenkatesh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dipsivenkatesh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dipsivenkatesh/subscriptions", "type": "User", "url": "https://api.github.com/users/dipsivenkatesh" }
https://api.github.com/repos/huggingface/datasets/issues/6691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6691/timeline
closed
false
6,691
null
2024-02-26T07:09:35Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4", "events_url": "https://api.github.com/users/harsh1504660/events{/privacy}", "followers_url": "https://api.github.com/users/harsh1504660/followers", "following_url": "https://api.github.com/users/harsh1504660/following{/other_user}", "gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/harsh1504660", "id": 77767961, "login": "harsh1504660", "node_id": "MDQ6VXNlcjc3NzY3OTYx", "organizations_url": "https://api.github.com/users/harsh1504660/orgs", "received_events_url": "https://api.github.com/users/harsh1504660/received_events", "repos_url": "https://api.github.com/users/harsh1504660/repos", "site_admin": false, "starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions", "type": "User", "url": "https://api.github.com/users/harsh1504660" }
false
2,150,800,065
https://api.github.com/repos/huggingface/datasets/issues/6690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6690/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-04-12T15:27:05Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6690
MEMBER
completed
null
null
[]
Add function to convert a script-dataset to Parquet
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6690/reactions" }
I_kwDODunzps6AMprB
null
2024-02-23T10:28:20Z
https://api.github.com/repos/huggingface/datasets/issues/6690/comments
Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet"
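For illustration, a hedged sketch of what such a conversion could look like with the existing API: materialize the script-based dataset locally, then re-upload it with `push_to_hub`, which writes the splits as Parquet shards. This assumes a `datasets` version that supports `trust_remote_code`; both repository ids below are hypothetical placeholders:

```python
from datasets import load_dataset

# Run the dataset script once to materialize the data locally...
ds = load_dataset("user/script-based-dataset", trust_remote_code=True)  # hypothetical repo

# ...then re-upload it; push_to_hub stores each split as Parquet on the Hub.
ds.push_to_hub("user/script-based-dataset-parquet")  # hypothetical target repo
```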
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6690/timeline
closed
false
6,690
null
2024-04-12T15:27:05Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,149,581,147
https://api.github.com/repos/huggingface/datasets/issues/6689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6689/events
[]
null
2024-03-07T14:54:16Z
[]
https://github.com/huggingface/datasets/issues/6689
NONE
completed
null
null
[ "The dataset is made of JSON files compressed using zstandard, as you can see here: https://huggingface.co/datasets/cerebras/SlimPajama-627B/tree/main/test/chunk1\r\n\r\nThat's why it asks for zstandard to be installed.\r\n\r\nThough I'm intrigued that you manage to load the dataset without zstandard installed. Maybe `pyarrow` that we use to load JSON data under the hood got support for zstandard at one point.", "> The dataset is made of JSON files compressed using zstandard, as you can see here: https://huggingface.co/datasets/cerebras/SlimPajama-627B/tree/main/test/chunk1\r\n> \r\n> That's why it asks for zstandard to be installed.\r\n> \r\n> Though I'm intrigued that you manage to load the dataset without zstandard installed. Maybe `pyarrow` that we use to load JSON data under the hood got support for zstandard at one point.\r\n\r\nQuestion, then.\r\n\r\nWhen I loaded this dataset back in October, it downloaded all the files, and then loaded into memory just fine.\r\n\r\nNOW, it has to sit there and unpack all these zstd files (3.6TB worth). Further, when they're in my harddrive, they're regular json files. It's only when looking at the LFS, or when the loading script runs, that I get asked to install zstd.\r\n\r\nMy question is, **is this normal?** As far as I can tell, there's no reason the dataset or the loading methods should have changed between then and now. Was my old behavior flawed, and the new behavior correct?\r\n\r\nI mean, I got it working eventually, but it was pulling teeth, and it still doesn't load right, as I had to unpack each chunk separately, so there's no clean mapping between the chunks and the broader dataset.", "The `ZstdExtractor` has been added 3 years ago and we haven't touched it since then. Same for the JSON loader.\r\n\r\n`zstandard` is required as soon as you try to load a file with the `.zstd` extension or if a file starts with the Zstandard magic number `b\"\\x28\\xb5\\x2f\\xfd\"` (used to recognize Zstandard files).\r\n\r\nNote that the extraction only has to happen once - if you reload the dataset it will be reloaded from your cache directly.\r\n\r\nNot sure what happened between October and now unfortunately", "Understood, thank you for clarifying that for me.\r\n\r\nI'll look into how best to collate my stack of batches w/o creating duplicate arrow tables for each one." ]
.load_dataset() method defaults to zstandard
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6689/reactions" }
I_kwDODunzps6AIAFb
null
2024-02-22T17:39:27Z
https://api.github.com/repos/huggingface/datasets/issues/6689/comments
### Describe the bug Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets. This is poor behavior, because not only is zstandard not a dependency in the huggingface package (and therefore, your dataset loading will be interrupted while it asks you to install the package), but it happens on datasets that are uploaded in json format too, meaning the dataset loader will attempt to convert the data to a zstandard-compatible format, and THEN try to unpack it. My 4 TB drive runs out of room when using zstandard on slimpajama. It loads fine in 1.5 TB when using json; however, I lack an understanding of the "magic numbers" system used to select the unpacking algorithm, so I can't push a change myself. Commenting out this line in "/datasets/utils/extract.py" fixes the issue and causes SlimPajama to properly extract using reasonable amounts of storage; however, it completely disables zstandard, which is probably undesirable behavior. Someone with an understanding of the "magic numbers" system should probably take a pass over this issue. ``` class Extractor: # Put zip file to the last, b/c it is possible wrongly detected as zip (I guess it means: as tar or gzip) extractors: Dict[str, Type[BaseExtractor]] = { "tar": TarExtractor, "gzip": GzipExtractor, "zip": ZipExtractor, "xz": XzExtractor, #"zstd": ZstdExtractor, # This line needs to go, in order for datasets to work w/o non-dependent packages "rar": RarExtractor, "bz2": Bzip2Extractor, "7z": SevenZipExtractor, # <Added version="2.4.0"/> "lz4": Lz4Extractor, # <Added version="2.4.0"/> } ``` ### Steps to reproduce the bug ``` from datasets import load_dataset load_dataset(path="cerebras/SlimPajama-627B") ``` This alone should trigger the error on any system that does not have zstandard pip-installed. ### Expected behavior This repository (which is encoded in json format, not zstandard) should check whether zstandard is installed before defaulting to it. Additionally, using zstandard should not use more than 3x the required space that other extraction mechanisms use. ### Environment info - `datasets` version: 2.17.1 - Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35 - Python version: 3.12.0 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
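One way to implement the availability check the report asks for — a hedged runtime sketch that unregisters the zstd extractor when the optional dependency is missing, rather than a patch that `datasets` actually ships. It relies on `Extractor.extractors` being the class-level dict quoted above:

```python
import importlib.util

from datasets.utils.extract import Extractor

# Illustrative workaround: drop the zstd entry at runtime when the optional
# `zstandard` package is not installed, instead of editing extract.py by hand.
if importlib.util.find_spec("zstandard") is None:
    Extractor.extractors.pop("zstd", None)
```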
{ "avatar_url": "https://avatars.githubusercontent.com/u/87243032?v=4", "events_url": "https://api.github.com/users/ElleLeonne/events{/privacy}", "followers_url": "https://api.github.com/users/ElleLeonne/followers", "following_url": "https://api.github.com/users/ElleLeonne/following{/other_user}", "gists_url": "https://api.github.com/users/ElleLeonne/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ElleLeonne", "id": 87243032, "login": "ElleLeonne", "node_id": "MDQ6VXNlcjg3MjQzMDMy", "organizations_url": "https://api.github.com/users/ElleLeonne/orgs", "received_events_url": "https://api.github.com/users/ElleLeonne/received_events", "repos_url": "https://api.github.com/users/ElleLeonne/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ElleLeonne/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ElleLeonne/subscriptions", "type": "User", "url": "https://api.github.com/users/ElleLeonne" }
https://api.github.com/repos/huggingface/datasets/issues/6689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6689/timeline
closed
false
6,689
null
2024-03-07T14:54:15Z
null
false
2,148,609,859
https://api.github.com/repos/huggingface/datasets/issues/6688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6688/events
[]
null
2024-02-22T15:56:21Z
[]
https://github.com/huggingface/datasets/issues/6688
NONE
null
null
null
[ "Hi, this is expected behavior since all the tensors are converted to Arrow data (the storage type behind a Dataset).\r\n\r\nTo get pytorch tensors back, you can set the dataset format to \"torch\":\r\n\r\n```python\r\nds = ds.with_format(\"torch\")\r\n```", "Thanks. Just one additional question. During the pipeline `<framework> -> arrow -> <framework>`, does `.with_format` zero-copies the tensors or is it a deep copy? And is this behavior framework-dependent?\r\n\r\nThanks again.", "We do zero-copy Arrow <-> NumPy <-> PyTorch when the output dtype matches the original dtype, but for other frameworks it depends. For example JAX doesn't allow zero-copy NumPy -> JAX at all IIRC.\r\n\r\nCurrently tokenized data are formatted using a copy though, since tokens are stored as int32 and returned as int64 torch tensors." ]
Tensor type (e.g. from `return_tensors`) ignored in map
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6688/reactions" }
I_kwDODunzps6AES9D
null
2024-02-22T09:27:57Z
https://api.github.com/repos/huggingface/datasets/issues/6688/comments
### Describe the bug I don't know if it is a bug or an expected behavior, but the tensor type seems to be ignored after applying map. For example, mapping a transformers tokenizer over text always returns lists and ignores the `return_tensors` argument. If this is an expected behaviour (e.g., for caching/Arrow compatibility/etc.) it should be clearly documented. For example, the current documentation (see [here](https://huggingface.co/docs/datasets/v2.17.1/en/nlp_process#map)) clearly states to "set `return_tensors="np"` when you tokenize your text" to have Numpy arrays. ### Steps to reproduce the bug ```py # %% import datasets import numpy as np import tensorflow as tf import torch from transformers import AutoTokenizer # %% ds = datasets.load_dataset("cnn_dailymail", "1.0.0", split="train[:1%]") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") # %% for return_tensors in [None, "np", "pt", "tf", "jax"]: print(f"********** no map, return_tensors={return_tensors} **********") _ds = tokenizer(ds["article"], return_tensors=return_tensors, truncation=True, padding=True) print('Type <input_ids>:', type(_ds["input_ids"])) # %% for return_tensors in [None, "np", "pt", "tf", "jax"]: print(f"********** map, return_tensors={return_tensors} **********") _ds = ds.map( lambda examples: tokenizer(examples["article"], return_tensors=return_tensors, truncation=True, padding=True), batched=True, remove_columns=["article"], ) print('Type <input_ids>:', type(_ds[0]["input_ids"])) ``` ### Expected behavior The output from the script above. I would expect the second half to be the same. ``` ********** no map, return_tensors=None ********** Type <input_ids>: <class 'list'> ********** no map, return_tensors=np ********** Type <input_ids>: <class 'numpy.ndarray'> ********** no map, return_tensors=pt ********** Type <input_ids>: <class 'torch.Tensor'> ********** no map, return_tensors=tf ********** Type <input_ids>: <class 'tensorflow.python.framework.ops.EagerTensor'> ********** no map, return_tensors=jax ********** Type <input_ids>: <class 'jaxlib.xla_extension.ArrayImpl'> ********** map, return_tensors=None ********** Type <input_ids>: <class 'list'> ********** map, return_tensors=np ********** Type <input_ids>: <class 'list'> ********** map, return_tensors=pt ********** Type <input_ids>: <class 'list'> ********** map, return_tensors=tf ********** Type <input_ids>: <class 'list'> ********** map, return_tensors=jax ********** Type <input_ids>: <class 'list'> ``` ### Environment info - `datasets` version: 2.17.1 - Platform: Redacted (linux) - Python version: 3.10.12 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.1.3 - `fsspec` version: 2023.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/11166137?v=4", "events_url": "https://api.github.com/users/srossi93/events{/privacy}", "followers_url": "https://api.github.com/users/srossi93/followers", "following_url": "https://api.github.com/users/srossi93/following{/other_user}", "gists_url": "https://api.github.com/users/srossi93/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/srossi93", "id": 11166137, "login": "srossi93", "node_id": "MDQ6VXNlcjExMTY2MTM3", "organizations_url": "https://api.github.com/users/srossi93/orgs", "received_events_url": "https://api.github.com/users/srossi93/received_events", "repos_url": "https://api.github.com/users/srossi93/repos", "site_admin": false, "starred_url": "https://api.github.com/users/srossi93/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srossi93/subscriptions", "type": "User", "url": "https://api.github.com/users/srossi93" }
https://api.github.com/repos/huggingface/datasets/issues/6688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6688/timeline
open
false
6,688
null
null
null
false
2,148,554,178
https://api.github.com/repos/huggingface/datasets/issues/6687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6687/events
[]
null
2024-03-04T12:59:42Z
[]
https://github.com/huggingface/datasets/pull/6687
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6687). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Looking into the CI failure, this PR is incompatible with `huggingface-hub>=0.20.0`. It looks like there were several changes made to HfFileSystem in 0.20.0, @lhoestq any ideas on what the issue might be in particular?\r\n\r\na bisect indicates that it's related to https://github.com/huggingface/huggingface_hub/pull/1815", "It looks like huggingface-hub's `HfFileSystem.glob` is broken for exact string matches (that don't contain glob wildcards) when combining `huggingface-hub>=0.20.0` and `fsspec>=2023.12.0`.\r\n\r\nI did a quick test with huggingface-hub `main`, and adding this test case to `tests/test_hf_filesystem::HfFileSystemTests::test_glob` (https://github.com/huggingface/huggingface_hub/blob/main/tests/test_hf_file_system.py) passes with `fsspec==2023.10.0` and fails with `fsspec==2023.12.0`\r\n```python\r\n self.assertEqual(\r\n sorted(self.hffs.glob(self.hf_path + \"/.gitattributes\")),\r\n sorted([self.hf_path + \"/.gitattributes\"]),\r\n )\r\n\r\n```\r\n\r\nthe `hffs.glob()` call with a pattern that does not contain any wildcards returns an empty list:\r\n```\r\nE AssertionError: Lists differ: [] != ['datasets/__DUMMY_TRANSFORMERS_USER__/rep[35 chars]tes']\r\nE\r\nE Second list contains 1 additional elements.\r\nE First extra element 0:\r\nE 'datasets/__DUMMY_TRANSFORMERS_USER__/repo-7d0ae9-17091013467064/.gitattributes'\r\nE\r\nE - []\r\nE + ['datasets/__DUMMY_TRANSFORMERS_USER__/repo-7d0ae9-17091013467064/.gitattributes']\r\n```\r\n(and with the compatible/passing older fsspec versions the glob call returns the single exact file match as expected)\r\n\r\nSo it looks like the CI failure here isn't directly related to this PR. 
The failing patterns that don't contain any `*` wildcards are generated by `datasets` with or without this PR, but now that this PR installs the incompatible fsspec version, the underlying `HfFileSystem.glob()` call ends up failing.", "I just opened https://github.com/huggingface/huggingface_hub/pull/2056 to fix this.\r\n\r\nDo you mind if I continue this PR to run the CI against `huggingface_hub@main` until the fix is released ?\r\n\r\nEDIT: the fix has been released in `huggingface_hub` 0.21.2 - I removed my commits that were using `huggingface_hub@main`", "I just added two additional patterns to cover cases like `test-data/xxx.csv` and `data-test/xxx.csv`", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005461 / 0.011353 (-0.005892) | 0.003861 / 0.011008 (-0.007148) | 0.063252 / 0.038508 (0.024744) | 0.031474 / 0.023109 (0.008364) | 0.250321 / 0.275898 (-0.025577) | 0.275198 / 0.323480 (-0.048282) | 0.003275 / 0.007986 (-0.004710) | 0.002874 / 0.004328 (-0.001454) | 0.049499 / 0.004250 (0.045248) | 0.045334 / 0.037052 (0.008282) | 0.266347 / 0.258489 (0.007858) | 0.308974 / 0.293841 (0.015133) | 0.027742 / 0.128546 (-0.100804) | 0.010274 / 0.075646 (-0.065373) | 0.207516 / 0.419271 (-0.211755) | 0.036538 / 0.043533 (-0.006995) | 0.247949 / 0.255139 (-0.007190) | 0.268986 / 0.283200 (-0.014214) | 0.019842 / 0.141683 (-0.121841) | 1.117547 / 1.452155 (-0.334607) | 1.175813 / 1.492716 (-0.316903) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103661 / 0.018006 (0.085655) | 0.331023 / 0.000490 (0.330534) | 0.000240 / 0.000200 (0.000040) | 0.000041 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019767 / 0.037411 (-0.017645) | 0.061500 / 0.014526 (0.046974) | 0.075899 / 0.176557 (-0.100658) | 0.122240 / 0.737135 (-0.614895) | 0.074621 / 0.296338 (-0.221717) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 
5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287501 / 0.215209 (0.072292) | 2.794737 / 2.077655 (0.717082) | 1.505362 / 1.504120 (0.001242) | 1.379481 / 1.541195 (-0.161713) | 1.394836 / 1.468490 (-0.073654) | 0.545803 / 4.584777 (-4.038974) | 2.364167 / 3.745712 (-1.381545) | 2.800923 / 5.269862 (-2.468939) | 1.723910 / 4.565676 (-2.841766) | 0.061270 / 0.424275 (-0.363005) | 0.005006 / 0.007607 (-0.002601) | 0.334952 / 0.226044 (0.108908) | 3.367122 / 2.268929 (1.098194) | 1.839822 / 55.444624 (-53.604803) | 1.553774 / 6.876477 (-5.322703) | 1.583585 / 2.142072 (-0.558488) | 0.624680 / 4.805227 (-4.180547) | 0.116364 / 6.500664 (-6.384300) | 0.042412 / 0.075469 (-0.033057) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975207 / 1.841788 (-0.866580) | 11.843126 / 8.074308 (3.768818) | 9.418537 / 10.191392 (-0.772855) | 0.130648 / 0.680424 (-0.549775) | 0.013747 / 0.534201 (-0.520454) | 0.288195 / 0.579283 (-0.291088) | 0.269861 / 0.434364 (-0.164503) | 0.326732 / 0.540337 (-0.213606) | 0.441256 / 1.386936 (-0.945680) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005185 / 0.011353 (-0.006168) | 0.003836 / 0.011008 (-0.007172) | 0.050057 / 0.038508 (0.011549) | 0.030929 / 0.023109 (0.007820) | 0.263558 / 0.275898 (-0.012340) | 0.284553 / 0.323480 (-0.038927) | 0.004331 / 0.007986 (-0.003655) | 0.002815 / 0.004328 (-0.001513) | 0.050187 / 0.004250 (0.045936) | 0.048431 / 0.037052 (0.011379) | 0.271005 / 0.258489 (0.012515) | 0.304749 / 0.293841 (0.010908) | 0.029286 / 0.128546 (-0.099260) | 0.010598 / 0.075646 (-0.065048) | 0.058111 / 
0.419271 (-0.361160) | 0.053665 / 0.043533 (0.010132) | 0.257574 / 0.255139 (0.002436) | 0.285802 / 0.283200 (0.002602) | 0.018917 / 0.141683 (-0.122766) | 1.206517 / 1.452155 (-0.245638) | 1.220572 / 1.492716 (-0.272144) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.122466 / 0.018006 (0.104460) | 0.567887 / 0.000490 (0.567397) | 0.000321 / 0.000200 (0.000121) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022120 / 0.037411 (-0.015292) | 0.075456 / 0.014526 (0.060931) | 0.086385 / 0.176557 (-0.090171) | 0.126045 / 0.737135 (-0.611091) | 0.087502 / 0.296338 (-0.208837) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304847 / 0.215209 (0.089638) | 3.008095 / 2.077655 (0.930441) | 1.726178 / 1.504120 (0.222058) | 1.592332 / 1.541195 (0.051138) | 1.603714 / 1.468490 (0.135224) | 0.576875 / 4.584777 (-4.007902) | 2.450884 / 3.745712 (-1.294828) | 2.719073 / 5.269862 (-2.550789) | 1.775261 / 4.565676 (-2.790415) | 0.063144 / 0.424275 (-0.361131) | 0.005122 / 0.007607 (-0.002485) | 0.350004 / 0.226044 (0.123960) | 3.467146 / 2.268929 (1.198218) | 2.062907 / 55.444624 (-53.381717) | 1.798793 / 6.876477 (-5.077684) | 1.921204 / 2.142072 (-0.220868) | 0.651832 / 4.805227 (-4.153396) | 0.122326 / 6.500664 (-6.378338) | 0.041396 / 0.075469 (-0.034073) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.024859 / 1.841788 (-0.816928) | 12.569744 / 8.074308 (4.495436) | 10.448487 / 10.191392 (0.257095) | 0.131529 / 0.680424 (-0.548895) | 0.014853 / 0.534201 (-0.519348) | 0.287683 / 0.579283 (-0.291600) | 0.289814 / 0.434364 (-0.144550) | 0.323935 / 0.540337 (-0.216403) | 0.425208 / 1.386936 (-0.961728) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ba71e92c59c9bd9d1ee6168691977f0c4728ed6e \"CML watermark\")\n", "> EDIT: the fix has been released in `huggingface_hub` 0.21.2 - I removed my commits that were using `huggingface_hub@main`\r\n\r\nPlease note that people using `huggingface_hub` < 0.21.2 and latest `fsspec` will have issues when using `datasets`:\r\n- https://github.com/huggingface/lighteval/actions/runs/8139147047/job/22241658122?pr=86\r\n- https://github.com/huggingface/lighteval/pull/84\r\n\r\nCC: @clefourrier \r\n" ]
fsspec: support fsspec>=2023.12.0 glob changes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 5, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/6687/reactions" }
PR_kwDODunzps5nnqBB
{ "diff_url": "https://github.com/huggingface/datasets/pull/6687.diff", "html_url": "https://github.com/huggingface/datasets/pull/6687", "merged_at": "2024-02-29T15:12:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/6687.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6687" }
2024-02-22T08:59:32Z
https://api.github.com/repos/huggingface/datasets/issues/6687/comments
- adds support for the `fs.glob` changes introduced in `fsspec==2023.12.0` and unpins the current upper bound Should close #6644 Should close #6645 The `test_data_files` glob/pattern tests pass for me in: - `fsspec==2023.10.0` (the pinned max version in datasets `main`) - `fsspec==2023.12.0` (#6644) - `fsspec==2024.2.0` (#6645)
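For reference, a small example of the two `HfFileSystem.glob` cases discussed in this PR — a wildcard pattern and an exact path with no wildcards, the case that regressed with `huggingface_hub` < 0.21.2 and `fsspec` >= 2023.12.0 (the repository id below is a hypothetical placeholder):

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# Wildcard pattern, resolved against the repo's file tree:
print(fs.glob("datasets/username/repo/data/*.csv"))  # hypothetical repo

# Exact path with no wildcards — previously returned [] on the broken combination:
print(fs.glob("datasets/username/repo/.gitattributes"))
```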
{ "avatar_url": "https://avatars.githubusercontent.com/u/651988?v=4", "events_url": "https://api.github.com/users/pmrowla/events{/privacy}", "followers_url": "https://api.github.com/users/pmrowla/followers", "following_url": "https://api.github.com/users/pmrowla/following{/other_user}", "gists_url": "https://api.github.com/users/pmrowla/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pmrowla", "id": 651988, "login": "pmrowla", "node_id": "MDQ6VXNlcjY1MTk4OA==", "organizations_url": "https://api.github.com/users/pmrowla/orgs", "received_events_url": "https://api.github.com/users/pmrowla/received_events", "repos_url": "https://api.github.com/users/pmrowla/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pmrowla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pmrowla/subscriptions", "type": "User", "url": "https://api.github.com/users/pmrowla" }
https://api.github.com/repos/huggingface/datasets/issues/6687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6687/timeline
closed
false
6,687
null
2024-02-29T15:12:17Z
null
true
2,147,795,103
https://api.github.com/repos/huggingface/datasets/issues/6686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6686/events
[]
null
2024-05-02T03:44:59Z
[]
https://github.com/huggingface/datasets/issues/6686
NONE
null
null
null
[ "```\r\nimport pandas as pd\r\nfrom datasets import Dataset, Image\r\n\r\n# Read the CSV file\r\ndata = pd.read_csv(\"XXXX.csv\")\r\n\r\n# Create a Hugging Face Dataset\r\ndataset = Dataset.from_pandas(data)\r\ndataset = dataset.cast_column(\"file_name\", Image())\r\n\r\n# Upload to Hugging Face Hub (make sure authentication is set up)\r\ndataset.push_to_hub(\"XXXXX\"\")\r\n```\r\n\r\nstuck in \"Casting the dataset\r\n![截屏2024-05-02 11 44 50](https://github.com/huggingface/datasets/assets/48406770/dc012dc5-16f6-4fd5-9e02-1b705c552c5b)\r\n\"\r\n" ]
Question: Is there any way to upload a large image dataset?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6686/reactions" }
I_kwDODunzps6ABMCf
null
2024-02-21T22:07:21Z
https://api.github.com/repos/huggingface/datasets/issues/6686/comments
I am uploading an image dataset like this: ``` dataset = load_dataset( "json", data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"}, ) dataset = dataset.cast_column("images", Sequence(Image())) dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_size="1GB") ``` where it takes a long time in the `Map` process. Do you think I can use multiprocessing to map all the image data into memory first? For the `map()` function, I can set `num_proc`. But for `push_to_hub` and `cast_column`, I cannot find it. Thanks in advance! Best,
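A hedged sketch of one way to parallelize the slow casting step: `Dataset.cast` (unlike `cast_column`) accepts `num_proc`, so the cast can be applied per split with multiple workers before pushing — assuming a recent `datasets` release; the worker count below is a placeholder:

```python
from datasets import Image, Sequence

# Re-declare the features with the image type, then cast each split in parallel.
for split in dataset:
    features = dataset[split].features.copy()
    features["images"] = Sequence(Image())
    dataset[split] = dataset[split].cast(features, num_proc=8)

dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_size="1GB")
```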
{ "avatar_url": "https://avatars.githubusercontent.com/u/37367987?v=4", "events_url": "https://api.github.com/users/zhjohnchan/events{/privacy}", "followers_url": "https://api.github.com/users/zhjohnchan/followers", "following_url": "https://api.github.com/users/zhjohnchan/following{/other_user}", "gists_url": "https://api.github.com/users/zhjohnchan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zhjohnchan", "id": 37367987, "login": "zhjohnchan", "node_id": "MDQ6VXNlcjM3MzY3OTg3", "organizations_url": "https://api.github.com/users/zhjohnchan/orgs", "received_events_url": "https://api.github.com/users/zhjohnchan/received_events", "repos_url": "https://api.github.com/users/zhjohnchan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zhjohnchan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhjohnchan/subscriptions", "type": "User", "url": "https://api.github.com/users/zhjohnchan" }
https://api.github.com/repos/huggingface/datasets/issues/6686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6686/timeline
open
false
6,686
null
null
null
false
2,145,570,006
https://api.github.com/repos/huggingface/datasets/issues/6685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6685/events
[]
null
2024-03-12T21:31:04Z
[]
https://github.com/huggingface/datasets/pull/6685
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6685). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005386 / 0.011353 (-0.005967) | 0.003707 / 0.011008 (-0.007301) | 0.062661 / 0.038508 (0.024153) | 0.029058 / 0.023109 (0.005949) | 0.249669 / 0.275898 (-0.026230) | 0.280996 / 0.323480 (-0.042484) | 0.004041 / 0.007986 (-0.003945) | 0.002713 / 0.004328 (-0.001616) | 0.047914 / 0.004250 (0.043664) | 0.042014 / 0.037052 (0.004961) | 0.265209 / 0.258489 (0.006720) | 0.297320 / 0.293841 (0.003479) | 0.028323 / 0.128546 (-0.100223) | 0.010844 / 0.075646 (-0.064802) | 0.205895 / 0.419271 (-0.213377) | 0.035997 / 0.043533 (-0.007536) | 0.245069 / 0.255139 (-0.010070) | 0.266159 / 0.283200 (-0.017040) | 0.017590 / 0.141683 (-0.124093) | 1.132046 / 1.452155 (-0.320109) | 1.177496 / 1.492716 (-0.315220) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.105441 / 0.018006 (0.087435) | 0.301321 / 0.000490 (0.300831) | 0.000211 / 0.000200 (0.000011) | 0.000064 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018687 / 0.037411 (-0.018724) | 0.061221 / 0.014526 (0.046695) | 0.072556 / 0.176557 (-0.104001) | 0.119641 / 0.737135 (-0.617495) | 0.073781 / 0.296338 (-0.222557) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284564 / 0.215209 (0.069354) | 2.795786 / 2.077655 (0.718131) | 1.437059 / 1.504120 (-0.067061) | 1.309319 / 1.541195 (-0.231876) | 1.315849 / 1.468490 (-0.152641) | 0.578571 / 4.584777 (-4.006206) | 2.350754 / 3.745712 (-1.394958) | 2.758499 / 5.269862 (-2.511362) | 1.705545 / 4.565676 (-2.860131) | 0.063660 / 0.424275 (-0.360615) | 0.005506 / 0.007607 (-0.002101) | 0.334915 / 0.226044 (0.108871) | 3.295922 / 2.268929 (1.026994) | 1.796513 / 55.444624 (-53.648111) | 1.488113 / 6.876477 (-5.388364) | 1.523042 / 2.142072 (-0.619031) | 0.648169 / 4.805227 (-4.157058) | 0.119321 / 6.500664 (-6.381343) | 0.041932 / 0.075469 (-0.033537) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982432 / 1.841788 (-0.859356) | 11.344780 / 8.074308 (3.270472) | 9.627219 / 10.191392 (-0.564173) | 0.142590 / 0.680424 (-0.537834) | 0.013899 / 0.534201 (-0.520302) | 0.286335 / 0.579283 (-0.292948) | 0.266552 / 0.434364 (-0.167812) | 0.320361 / 0.540337 (-0.219977) | 0.420303 / 1.386936 (-0.966633) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005251 / 0.011353 (-0.006102) | 0.003515 / 0.011008 (-0.007494) | 0.049344 / 0.038508 (0.010836) | 0.032055 / 0.023109 (0.008945) | 0.280653 / 0.275898 (0.004755) | 0.303989 / 0.323480 (-0.019491) | 0.004402 / 0.007986 (-0.003584) | 0.002758 / 0.004328 (-0.001570) | 0.050947 / 0.004250 (0.046697) | 0.044405 / 0.037052 (0.007353) | 0.292856 / 0.258489 (0.034367) | 0.325307 / 0.293841 (0.031466) | 0.047720 / 0.128546 (-0.080827) | 0.010589 / 0.075646 (-0.065057) | 0.057728 / 0.419271 (-0.361543) | 0.033842 / 0.043533 (-0.009691) | 0.285443 / 0.255139 (0.030304) | 0.300013 / 0.283200 (0.016814) | 0.017444 / 0.141683 (-0.124238) | 1.152880 / 1.452155 (-0.299275) | 1.200670 / 1.492716 (-0.292046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092355 / 0.018006 (0.074349) | 0.307907 / 0.000490 (0.307418) | 0.000226 / 0.000200 (0.000026) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021624 / 0.037411 (-0.015787) | 0.075855 / 0.014526 (0.061329) | 0.087109 / 0.176557 (-0.089447) | 0.124859 / 0.737135 (-0.612276) | 0.088933 / 0.296338 (-0.207406) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294213 / 0.215209 (0.079004) | 2.893146 / 2.077655 (0.815491) | 1.595061 / 1.504120 (0.090942) | 1.480959 / 1.541195 (-0.060236) | 1.528277 / 1.468490 (0.059787) | 0.570273 / 4.584777 (-4.014504) | 2.412948 / 3.745712 (-1.332764) | 2.675009 / 5.269862 (-2.594852) | 1.724005 / 4.565676 (-2.841671) | 0.063359 / 0.424275 (-0.360916) | 0.005008 / 0.007607 (-0.002599) | 0.346570 / 0.226044 (0.120526) | 3.456566 / 2.268929 (1.187637) | 1.973109 / 55.444624 (-53.471515) | 1.657562 / 6.876477 (-5.218915) | 1.790086 / 2.142072 (-0.351986) | 0.655277 / 4.805227 (-4.149950) | 0.117985 / 6.500664 (-6.382679) | 0.041128 / 0.075469 (-0.034342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001428 / 1.841788 (-0.840360) | 11.953458 / 8.074308 (3.879150) | 10.188439 / 10.191392 (-0.002953) | 0.140863 / 0.680424 (-0.539561) | 0.015278 / 0.534201 (-0.518923) | 0.288193 / 0.579283 (-0.291090) | 0.281732 / 0.434364 (-0.152632) | 0.328034 / 0.540337 (-0.212304) | 0.414571 / 1.386936 (-0.972365) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#531f35e688f81ec6b4c9044856a89a6b48142bd8 \"CML watermark\")\n" ]
Updated Quickstart Notebook link
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6685/reactions" }
PR_kwDODunzps5ndZQa
{ "diff_url": "https://github.com/huggingface/datasets/pull/6685.diff", "html_url": "https://github.com/huggingface/datasets/pull/6685", "merged_at": "2024-02-25T18:48:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/6685.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6685" }
2024-02-21T01:04:18Z
https://api.github.com/repos/huggingface/datasets/issues/6685/comments
Fixed Quickstart Notebook Link in the [Overview notebook](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb)
{ "avatar_url": "https://avatars.githubusercontent.com/u/55932554?v=4", "events_url": "https://api.github.com/users/Codeblockz/events{/privacy}", "followers_url": "https://api.github.com/users/Codeblockz/followers", "following_url": "https://api.github.com/users/Codeblockz/following{/other_user}", "gists_url": "https://api.github.com/users/Codeblockz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Codeblockz", "id": 55932554, "login": "Codeblockz", "node_id": "MDQ6VXNlcjU1OTMyNTU0", "organizations_url": "https://api.github.com/users/Codeblockz/orgs", "received_events_url": "https://api.github.com/users/Codeblockz/received_events", "repos_url": "https://api.github.com/users/Codeblockz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Codeblockz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Codeblockz/subscriptions", "type": "User", "url": "https://api.github.com/users/Codeblockz" }
https://api.github.com/repos/huggingface/datasets/issues/6685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6685/timeline
closed
false
6,685
null
2024-02-25T18:48:08Z
null
true
2,144,092,388
https://api.github.com/repos/huggingface/datasets/issues/6684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6684/events
[]
null
2024-02-20T15:40:52Z
[]
https://github.com/huggingface/datasets/pull/6684
MEMBER
null
false
null
[ "Thank you ! Should we also add the link to the dataset page ?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6684). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Thank you ! Should we also add the link to the dataset page ?\r\n\r\nGood idea! Done in https://github.com/huggingface/datasets/pull/6684/commits/4ab55210dca1815b6c2f23901598bfb29fc92a47", "Looks like a test is failing: `test_load_dataset_cached_local_script `.\r\n\r\nApparently your new message is also shown for datasets that don't exist, which is maybe not ideal", "Ah let me take a look!", "> Looks like a test is failing: `test_load_dataset_cached_local_script `.\r\n> \r\n> Apparently your new message is also shown for datasets that don't exist, which is maybe not ideal\r\n\r\nFixed by reverting the error message root + added a small clarifying part", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005634 / 0.011353 (-0.005719) | 0.003786 / 0.011008 (-0.007222) | 0.064245 / 0.038508 (0.025737) | 0.031228 / 0.023109 (0.008119) | 0.248162 / 0.275898 (-0.027736) | 0.273454 / 0.323480 (-0.050026) | 0.003176 / 0.007986 (-0.004809) | 0.002814 / 0.004328 (-0.001515) | 0.049234 / 0.004250 (0.044984) | 0.046075 / 0.037052 (0.009023) | 0.262410 / 0.258489 (0.003921) | 0.290597 / 0.293841 (-0.003244) | 0.028545 / 0.128546 (-0.100001) | 0.010881 / 0.075646 (-0.064766) | 0.212098 / 0.419271 (-0.207173) | 0.036406 / 0.043533 (-0.007127) | 0.244571 / 0.255139 (-0.010568) | 0.269537 / 0.283200 (-0.013663) | 0.019574 / 0.141683 (-0.122109) | 1.120369 / 1.452155 (-0.331785) | 1.170188 / 1.492716 (-0.322529) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.108088 / 0.018006 (0.090082) | 0.299836 / 0.000490 (0.299346) | 0.000204 / 0.000200 (0.000004) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020881 / 0.037411 (-0.016531) | 0.065290 / 0.014526 (0.050764) | 0.074283 / 
0.176557 (-0.102274) | 0.122189 / 0.737135 (-0.614947) | 0.077772 / 0.296338 (-0.218566) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278329 / 0.215209 (0.063120) | 2.709885 / 2.077655 (0.632230) | 1.428824 / 1.504120 (-0.075296) | 1.314338 / 1.541195 (-0.226857) | 1.349445 / 1.468490 (-0.119045) | 0.571863 / 4.584777 (-4.012914) | 2.358306 / 3.745712 (-1.387407) | 2.873498 / 5.269862 (-2.396364) | 1.779897 / 4.565676 (-2.785779) | 0.062828 / 0.424275 (-0.361447) | 0.005416 / 0.007607 (-0.002191) | 0.337645 / 0.226044 (0.111601) | 3.328868 / 2.268929 (1.059940) | 1.793387 / 55.444624 (-53.651238) | 1.539201 / 6.876477 (-5.337276) | 1.589552 / 2.142072 (-0.552520) | 0.645454 / 4.805227 (-4.159773) | 0.116966 / 6.500664 (-6.383698) | 0.043339 / 0.075469 (-0.032130) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.995743 / 1.841788 (-0.846045) | 12.096551 / 8.074308 (4.022243) | 10.214299 / 10.191392 (0.022907) | 0.133025 / 0.680424 (-0.547399) | 0.014393 / 0.534201 (-0.519808) | 0.289018 / 0.579283 (-0.290266) | 0.267879 / 0.434364 (-0.166485) | 0.324362 / 0.540337 (-0.215976) | 0.425596 / 1.386936 (-0.961340) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005739 / 0.011353 (-0.005614) | 0.003992 / 0.011008 (-0.007017) | 0.051362 / 0.038508 (0.012854) | 0.031707 / 0.023109 (0.008598) | 0.274807 / 0.275898 (-0.001091) | 0.298897 / 0.323480 
(-0.024583) | 0.004363 / 0.007986 (-0.003622) | 0.002862 / 0.004328 (-0.001466) | 0.050462 / 0.004250 (0.046212) | 0.048158 / 0.037052 (0.011106) | 0.282759 / 0.258489 (0.024270) | 0.317766 / 0.293841 (0.023926) | 0.060245 / 0.128546 (-0.068301) | 0.011279 / 0.075646 (-0.064367) | 0.061175 / 0.419271 (-0.358097) | 0.035876 / 0.043533 (-0.007656) | 0.273963 / 0.255139 (0.018824) | 0.288788 / 0.283200 (0.005589) | 0.019690 / 0.141683 (-0.121992) | 1.167074 / 1.452155 (-0.285080) | 1.206344 / 1.492716 (-0.286372) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091211 / 0.018006 (0.073205) | 0.299295 / 0.000490 (0.298805) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022718 / 0.037411 (-0.014693) | 0.079483 / 0.014526 (0.064957) | 0.087437 / 0.176557 (-0.089120) | 0.126977 / 0.737135 (-0.610159) | 0.089678 / 0.296338 (-0.206660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294719 / 0.215209 (0.079510) | 2.864505 / 2.077655 (0.786851) | 1.583993 / 1.504120 (0.079873) | 1.455079 / 1.541195 (-0.086115) | 1.504080 / 1.468490 (0.035590) | 0.569040 / 4.584777 (-4.015737) | 2.423472 / 3.745712 (-1.322240) | 2.742848 / 5.269862 (-2.527014) | 1.785244 / 4.565676 (-2.780432) | 0.062655 / 0.424275 (-0.361620) | 0.005027 / 0.007607 (-0.002580) | 0.343863 / 0.226044 (0.117818) | 3.376286 / 2.268929 (1.107358) | 1.933846 / 55.444624 (-53.510779) | 1.667316 / 6.876477 (-5.209161) | 1.815621 / 2.142072 (-0.326451) | 0.639378 / 4.805227 (-4.165850) | 0.116514 / 6.500664 (-6.384150) | 0.042191 / 0.075469 (-0.033279) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007103 / 1.841788 (-0.834685) | 12.791193 / 8.074308 (4.716885) | 10.870575 / 10.191392 (0.679183) | 0.131040 / 0.680424 (-0.549384) | 0.016510 / 0.534201 (-0.517691) | 0.288372 / 0.579283 (-0.290911) | 0.275574 / 0.434364 (-0.158790) | 0.327801 / 0.540337 (-0.212536) | 0.415942 / 1.386936 (-0.970994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b775390900132834e5edf487f5cbbf1299af1d88 \"CML watermark\")\n" ]
Improve error message for gated datasets on load
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6684/reactions" }
PR_kwDODunzps5nYUIf
{ "diff_url": "https://github.com/huggingface/datasets/pull/6684.diff", "html_url": "https://github.com/huggingface/datasets/pull/6684", "merged_at": "2024-02-20T15:33:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/6684.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6684" }
2024-02-20T10:51:27Z
https://api.github.com/repos/huggingface/datasets/issues/6684/comments
Internal Slack discussion: https://huggingface.slack.com/archives/C02V51Q3800/p1708424971135029
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/6684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6684/timeline
closed
false
6684
null
2024-02-20T15:33:56Z
null
true
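For the record above (PR #6684, "Improve error message for gated datasets on load"), a minimal sketch of the scenario the improved message targets, assuming a hypothetical gated repo id; `token` is the authentication parameter of `datasets.load_dataset`, and the exact error class is deliberately left generic.

```python
from datasets import load_dataset

# "user/gated-dataset" is a hypothetical repo id used only for illustration.
# Without credentials, loading a gated repo raises; the PR makes the error
# message explain that the dataset is gated and point to its dataset page.
try:
    ds = load_dataset("user/gated-dataset")
except Exception as err:
    print(err)

# With a valid Hugging Face access token the same call goes through
# (the "hf_..." value below is a placeholder, not a real token).
ds = load_dataset("user/gated-dataset", token="hf_...")
```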
2142751955
https://api.github.com/repos/huggingface/datasets/issues/6683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6683/events
[]
null
2024-02-19T17:24:25Z
[]
https://github.com/huggingface/datasets/pull/6683
COLLABORATOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6683). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005501 / 0.011353 (-0.005851) | 0.003907 / 0.011008 (-0.007101) | 0.063524 / 0.038508 (0.025016) | 0.031773 / 0.023109 (0.008664) | 0.244672 / 0.275898 (-0.031226) | 0.293342 / 0.323480 (-0.030138) | 0.004091 / 0.007986 (-0.003895) | 0.002837 / 0.004328 (-0.001491) | 0.049181 / 0.004250 (0.044930) | 0.044515 / 0.037052 (0.007462) | 0.263932 / 0.258489 (0.005443) | 0.288412 / 0.293841 (-0.005429) | 0.028338 / 0.128546 (-0.100208) | 0.010865 / 0.075646 (-0.064781) | 0.207979 / 0.419271 (-0.211293) | 0.036149 / 0.043533 (-0.007384) | 0.250674 / 0.255139 (-0.004465) | 0.263232 / 0.283200 (-0.019968) | 0.017919 / 0.141683 (-0.123763) | 1.127794 / 1.452155 (-0.324360) | 1.172071 / 1.492716 (-0.320645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090435 / 0.018006 (0.072429) | 0.300041 / 0.000490 (0.299552) | 0.000217 / 0.000200 (0.000018) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018986 / 0.037411 (-0.018426) | 0.064872 / 0.014526 (0.050346) | 0.074738 / 0.176557 (-0.101818) | 0.121577 / 0.737135 (-0.615558) | 0.076416 / 0.296338 (-0.219923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279471 / 0.215209 (0.064262) | 2.743066 / 2.077655 (0.665411) | 1.429511 / 1.504120 (-0.074609) | 1.315391 / 1.541195 (-0.225804) | 1.371255 / 1.468490 (-0.097235) | 0.570708 / 4.584777 (-4.014069) | 2.373047 / 3.745712 (-1.372666) | 2.813198 / 5.269862 (-2.456663) | 1.768928 / 4.565676 (-2.796749) | 0.066031 / 0.424275 (-0.358244) | 0.005074 / 0.007607 (-0.002533) | 0.333484 / 0.226044 (0.107440) | 3.295002 / 2.268929 (1.026074) | 1.796089 / 55.444624 (-53.648535) | 1.521849 / 6.876477 (-5.354627) | 1.604417 / 2.142072 (-0.537655) | 0.645235 / 4.805227 (-4.159992) | 0.119226 / 6.500664 (-6.381439) | 0.043275 / 0.075469 (-0.032194) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.986350 / 1.841788 (-0.855438) | 11.921886 / 8.074308 (3.847578) | 9.878841 / 10.191392 (-0.312551) | 0.141072 / 0.680424 (-0.539352) | 0.014514 / 0.534201 (-0.519687) | 0.304060 / 0.579283 (-0.275223) | 0.267844 / 0.434364 (-0.166520) | 0.324881 / 0.540337 (-0.215457) | 0.421426 / 1.386936 (-0.965510) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005322 / 0.011353 (-0.006030) | 0.003942 / 0.011008 (-0.007066) | 0.050629 / 0.038508 (0.012121) | 0.031176 / 0.023109 (0.008066) | 0.279627 / 0.275898 (0.003729) | 0.302667 / 0.323480 (-0.020813) | 0.004281 / 0.007986 (-0.003705) | 0.002900 / 0.004328 (-0.001428) | 0.048168 / 0.004250 (0.043918) | 0.046094 / 0.037052 (0.009042) | 0.290714 / 0.258489 (0.032224) | 0.321336 / 0.293841 (0.027496) | 0.047934 / 0.128546 (-0.080612) | 0.010773 / 0.075646 (-0.064873) | 0.059439 / 0.419271 (-0.359832) | 0.033644 / 0.043533 (-0.009889) | 0.273710 / 0.255139 (0.018571) | 0.295144 / 0.283200 (0.011944) | 0.018115 / 0.141683 (-0.123568) | 1.150302 / 1.452155 (-0.301853) | 1.197304 / 1.492716 (-0.295412) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.090262 / 0.018006 (0.072255) | 0.300727 / 0.000490 (0.300238) | 0.000228 / 0.000200 (0.000028) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022706 / 0.037411 (-0.014706) | 0.077420 / 0.014526 (0.062894) | 0.089119 / 0.176557 (-0.087437) | 0.126760 / 0.737135 (-0.610375) | 0.090702 / 0.296338 (-0.205637) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296558 / 0.215209 (0.081349) | 2.865311 / 2.077655 (0.787656) | 1.587355 / 1.504120 (0.083235) | 1.491660 / 1.541195 (-0.049534) | 1.513604 / 1.468490 (0.045114) | 0.565209 / 4.584777 (-4.019568) | 2.450648 / 3.745712 (-1.295064) | 2.709941 / 5.269862 (-2.559921) | 1.775032 / 4.565676 (-2.790645) | 0.063767 / 0.424275 (-0.360508) | 0.005047 / 0.007607 (-0.002560) | 0.347406 / 0.226044 (0.121361) | 3.416671 / 2.268929 (1.147743) | 1.949653 / 55.444624 (-53.494971) | 1.669885 / 6.876477 (-5.206592) | 1.848125 / 2.142072 (-0.293947) | 0.648179 / 4.805227 (-4.157048) | 0.116374 / 6.500664 (-6.384290) | 0.041816 / 0.075469 (-0.033653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.007009 / 1.841788 (-0.834779) | 12.749964 / 8.074308 (4.675656) | 10.765890 / 10.191392 (0.574498) | 0.141743 / 0.680424 (-0.538681) | 0.016077 / 0.534201 (-0.518124) | 0.293275 / 0.579283 (-0.286008) | 0.277064 / 0.434364 (-0.157300) | 0.327039 / 0.540337 (-0.213299) | 0.421784 / 1.386936 (-0.965152) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f807cd4c733a3616011a3f7f53a9fa56f7d5f685 \"CML watermark\")\n" ]
Fix imagefolder dataset url
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6683/reactions" }
PR_kwDODunzps5nTxGu
{ "diff_url": "https://github.com/huggingface/datasets/pull/6683.diff", "html_url": "https://github.com/huggingface/datasets/pull/6683", "merged_at": "2024-02-19T17:18:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/6683.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6683" }
2024-02-19T16:26:51Z
https://api.github.com/repos/huggingface/datasets/issues/6683/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6683/timeline
closed
false
6683
null
2024-02-19T17:18:10Z
null
true
2142000800
https://api.github.com/repos/huggingface/datasets/issues/6682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6682/events
[]
null
2024-02-28T07:02:40Z
[]
https://github.com/huggingface/datasets/pull/6682
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6682). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005292 / 0.011353 (-0.006060) | 0.003354 / 0.011008 (-0.007654) | 0.063150 / 0.038508 (0.024642) | 0.028616 / 0.023109 (0.005507) | 0.242267 / 0.275898 (-0.033631) | 0.267305 / 0.323480 (-0.056175) | 0.003041 / 0.007986 (-0.004944) | 0.003346 / 0.004328 (-0.000982) | 0.048268 / 0.004250 (0.044018) | 0.042070 / 0.037052 (0.005018) | 0.256526 / 0.258489 (-0.001963) | 0.279744 / 0.293841 (-0.014097) | 0.027862 / 0.128546 (-0.100684) | 0.010786 / 0.075646 (-0.064861) | 0.206998 / 0.419271 (-0.212273) | 0.035503 / 0.043533 (-0.008030) | 0.248454 / 0.255139 (-0.006685) | 0.265639 / 0.283200 (-0.017561) | 0.019590 / 0.141683 (-0.122093) | 1.134445 / 1.452155 (-0.317709) | 1.194956 / 1.492716 (-0.297761) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.090987 / 0.018006 (0.072981) | 0.301907 / 0.000490 (0.301418) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018324 / 0.037411 (-0.019088) | 0.061492 / 0.014526 (0.046966) | 0.074166 / 0.176557 (-0.102391) | 0.119990 / 0.737135 (-0.617145) | 0.074554 / 0.296338 (-0.221785) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279646 / 0.215209 (0.064437) | 2.773819 / 2.077655 (0.696164) | 1.436460 / 1.504120 (-0.067660) | 1.310303 / 1.541195 (-0.230892) | 1.315328 / 1.468490 (-0.153162) | 0.558328 / 4.584777 (-4.026449) | 2.383819 / 3.745712 (-1.361893) | 2.735034 / 5.269862 (-2.534827) | 1.724413 / 4.565676 (-2.841263) | 0.061476 / 0.424275 (-0.362799) | 0.004899 / 0.007607 (-0.002708) | 0.333195 / 0.226044 (0.107151) | 3.228900 / 2.268929 (0.959971) | 1.787315 / 55.444624 (-53.657309) | 1.526949 / 6.876477 (-5.349527) | 1.539816 / 2.142072 (-0.602257) | 0.636926 / 4.805227 (-4.168302) | 0.117533 / 6.500664 (-6.383131) | 0.041859 / 0.075469 (-0.033610) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.964637 / 1.841788 (-0.877151) | 11.296021 / 8.074308 (3.221713) | 9.375436 / 10.191392 (-0.815956) | 0.140330 / 0.680424 (-0.540094) | 0.013638 / 0.534201 (-0.520563) | 0.287046 / 0.579283 (-0.292237) | 0.265054 / 0.434364 (-0.169310) | 0.331548 / 0.540337 (-0.208790) | 0.438418 / 1.386936 (-0.948518) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005284 / 0.011353 (-0.006069) | 0.003853 / 0.011008 (-0.007155) | 0.049301 / 0.038508 (0.010793) | 0.030477 / 0.023109 (0.007368) | 0.278507 / 0.275898 (0.002609) | 0.298245 / 0.323480 (-0.025235) | 0.004225 / 0.007986 (-0.003761) | 0.002736 / 0.004328 (-0.001593) | 0.049345 / 0.004250 (0.045094) | 0.045141 / 0.037052 (0.008088) | 0.290992 / 0.258489 (0.032503) | 0.317430 / 0.293841 (0.023589) | 0.029623 / 0.128546 (-0.098924) | 0.010351 / 0.075646 (-0.065295) | 0.058027 / 0.419271 (-0.361244) | 0.051306 / 0.043533 (0.007773) | 0.279947 / 0.255139 (0.024808) | 0.296916 / 0.283200 (0.013717) | 0.018859 / 0.141683 (-0.122823) | 1.153484 / 1.452155 (-0.298670) | 1.189141 / 1.492716 (-0.303575) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091030 / 0.018006 (0.073024) | 0.301305 / 0.000490 (0.300815) | 0.000230 / 0.000200 (0.000030) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021801 / 0.037411 (-0.015611) | 0.075162 / 0.014526 (0.060636) | 0.086455 / 0.176557 (-0.090102) | 0.125431 / 0.737135 (-0.611705) | 0.087797 / 0.296338 (-0.208542) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295950 / 0.215209 (0.080741) | 2.895839 / 2.077655 (0.818184) | 1.603121 / 1.504120 (0.099001) | 1.482162 / 1.541195 (-0.059033) | 1.474231 / 1.468490 (0.005741) | 0.571370 / 4.584777 (-4.013407) | 2.466864 / 3.745712 (-1.278848) | 2.607279 / 5.269862 (-2.662582) | 1.723106 / 4.565676 (-2.842571) | 0.062068 / 0.424275 (-0.362208) | 0.004958 / 0.007607 (-0.002649) | 0.345213 / 0.226044 (0.119168) | 3.403916 / 2.268929 (1.134987) | 1.935538 / 55.444624 (-53.509086) | 1.658930 / 6.876477 (-5.217547) | 1.767611 / 2.142072 (-0.374461) | 0.645780 / 4.805227 (-4.159447) | 0.116077 / 6.500664 (-6.384587) | 0.040774 / 0.075469 (-0.034695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025952 / 1.841788 (-0.815836) | 11.935970 / 8.074308 (3.861662) | 9.935799 / 10.191392 (-0.255593) | 0.131081 / 0.680424 (-0.549343) | 0.016010 / 0.534201 (-0.518191) | 0.285476 / 0.579283 (-0.293807) | 0.274928 / 0.434364 (-0.159435) | 0.325788 / 0.540337 (-0.214550) | 0.421666 / 1.386936 (-0.965270) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7a79064b7a2255c0d6950dc998509ecefb893689 \"CML watermark\")\n" ]
Update GitHub Actions to Node 20
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6682/reactions" }
PR_kwDODunzps5nRME6
{ "diff_url": "https://github.com/huggingface/datasets/pull/6682.diff", "html_url": "https://github.com/huggingface/datasets/pull/6682", "merged_at": "2024-02-28T06:56:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/6682.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6682" }
2024-02-19T10:10:50Z
https://api.github.com/repos/huggingface/datasets/issues/6682/comments
Update GitHub Actions to Node 20. Fix #6679.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6682/timeline
closed
false
6682
null
2024-02-28T06:56:34Z
null
true
2141985239
https://api.github.com/repos/huggingface/datasets/issues/6681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6681/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2024-02-28T07:23:49Z
[]
https://github.com/huggingface/datasets/pull/6681
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6681). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005410 / 0.011353 (-0.005943) | 0.003862 / 0.011008 (-0.007146) | 0.063457 / 0.038508 (0.024949) | 0.030081 / 0.023109 (0.006972) | 0.250657 / 0.275898 (-0.025241) | 0.275483 / 0.323480 (-0.047997) | 0.004048 / 0.007986 (-0.003938) | 0.002818 / 0.004328 (-0.001511) | 0.048940 / 0.004250 (0.044689) | 0.043397 / 0.037052 (0.006345) | 0.262160 / 0.258489 (0.003671) | 0.294154 / 0.293841 (0.000313) | 0.030028 / 0.128546 (-0.098519) | 0.010789 / 0.075646 (-0.064857) | 0.209665 / 0.419271 (-0.209606) | 0.035297 / 0.043533 (-0.008236) | 0.253169 / 0.255139 (-0.001970) | 0.271775 / 0.283200 (-0.011424) | 0.018332 / 0.141683 (-0.123351) | 1.152420 / 1.452155 (-0.299735) | 1.262767 / 1.492716 (-0.229949) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089990 / 0.018006 (0.071984) | 0.298552 / 0.000490 (0.298062) | 0.000217 / 0.000200 (0.000017) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018414 / 0.037411 (-0.018997) | 0.061566 / 0.014526 (0.047040) | 0.075360 / 0.176557 (-0.101196) | 0.123470 / 0.737135 (-0.613665) | 0.075215 / 0.296338 (-0.221124) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279563 / 0.215209 (0.064354) | 2.725212 / 2.077655 (0.647557) | 1.446413 / 1.504120 (-0.057707) | 1.321665 / 1.541195 (-0.219529) | 1.352475 / 1.468490 (-0.116015) | 0.568440 / 4.584777 (-4.016337) | 2.393217 / 3.745712 (-1.352495) | 2.793150 / 5.269862 (-2.476711) | 1.764316 / 4.565676 (-2.801360) | 0.063157 / 0.424275 (-0.361118) | 0.005117 / 0.007607 (-0.002491) | 0.333310 / 0.226044 (0.107265) | 3.291000 / 2.268929 (1.022071) | 1.824654 / 55.444624 (-53.619971) | 1.558681 / 6.876477 (-5.317795) | 1.580558 / 2.142072 (-0.561514) | 0.649831 / 4.805227 (-4.155396) | 0.118674 / 6.500664 (-6.381990) | 0.042247 / 0.075469 (-0.033222) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976552 / 1.841788 (-0.865236) | 11.847361 / 8.074308 (3.773053) | 9.490786 / 10.191392 (-0.700606) | 0.141643 / 0.680424 (-0.538781) | 0.013653 / 0.534201 (-0.520548) | 0.284345 / 0.579283 (-0.294938) | 0.268314 / 0.434364 (-0.166050) | 0.339586 / 0.540337 (-0.200751) | 0.445239 / 1.386936 (-0.941697) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005754 / 0.011353 (-0.005599) | 0.004038 / 0.011008 (-0.006970) | 0.050027 / 0.038508 (0.011519) | 0.033598 / 0.023109 (0.010488) | 0.286514 / 0.275898 (0.010616) | 0.302493 / 0.323480 (-0.020986) | 0.004254 / 0.007986 (-0.003731) | 0.002827 / 0.004328 (-0.001502) | 0.050433 / 0.004250 (0.046182) | 0.046106 / 0.037052 (0.009054) | 0.301522 / 0.258489 (0.043033) | 0.325784 / 0.293841 (0.031943) | 0.030014 / 0.128546 (-0.098532) | 0.010891 / 0.075646 (-0.064756) | 0.059899 / 0.419271 (-0.359373) | 0.057252 / 0.043533 (0.013719) | 0.280276 / 0.255139 (0.025137) | 0.295632 / 0.283200 (0.012433) | 0.019060 / 0.141683 (-0.122622) | 1.141423 / 1.452155 (-0.310731) | 1.226960 / 1.492716 (-0.265757) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091919 / 0.018006 (0.073913) | 0.300769 / 0.000490 (0.300279) | 0.000220 / 0.000200 (0.000020) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022467 / 0.037411 (-0.014945) | 0.075342 / 0.014526 (0.060816) | 0.087988 / 0.176557 (-0.088569) | 0.128304 / 0.737135 (-0.608831) | 0.089058 / 0.296338 (-0.207280) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294662 / 0.215209 (0.079453) | 2.887743 / 2.077655 (0.810088) | 1.591756 / 1.504120 (0.087636) | 1.469249 / 1.541195 (-0.071945) | 1.495639 / 1.468490 (0.027149) | 0.575507 / 4.584777 (-4.009270) | 2.449674 / 3.745712 (-1.296038) | 2.737217 / 5.269862 (-2.532645) | 1.783066 / 4.565676 (-2.782610) | 0.063388 / 0.424275 (-0.360887) | 0.005044 / 0.007607 (-0.002563) | 0.344807 / 0.226044 (0.118763) | 3.410845 / 2.268929 (1.141916) | 1.967452 / 55.444624 (-53.477173) | 1.699884 / 6.876477 (-5.176593) | 1.862466 / 2.142072 (-0.279607) | 0.663714 / 4.805227 (-4.141513) | 0.118356 / 6.500664 (-6.382308) | 0.041176 / 0.075469 (-0.034293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.013523 / 1.841788 (-0.828264) | 12.498866 / 8.074308 (4.424558) | 10.382595 / 10.191392 (0.191203) | 0.141757 / 0.680424 (-0.538667) | 0.015992 / 0.534201 (-0.518209) | 0.295639 / 0.579283 (-0.283644) | 0.278382 / 0.434364 (-0.155982) | 0.330351 / 0.540337 (-0.209986) | 0.431293 / 1.386936 (-0.955643) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cf1aaa32eddd73076cf6600125661df4a32cb20a \"CML watermark\")\n" ]
Update release instructions
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6681/reactions" }
PR_kwDODunzps5nRItQ
{ "diff_url": "https://github.com/huggingface/datasets/pull/6681.diff", "html_url": "https://github.com/huggingface/datasets/pull/6681", "merged_at": "2024-02-28T07:17:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/6681.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6681" }
2024-02-19T10:03:08Z
https://api.github.com/repos/huggingface/datasets/issues/6681/comments
Update release instructions.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6681/timeline
closed
false
6681
null
2024-02-28T07:17:22Z
null
true
2141979527
https://api.github.com/repos/huggingface/datasets/issues/6680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6680/events
[]
null
2024-02-19T10:06:43Z
[]
https://github.com/huggingface/datasets/pull/6680
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6680). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004981 / 0.011353 (-0.006372) | 0.003030 / 0.011008 (-0.007978) | 0.059862 / 0.038508 (0.021354) | 0.030595 / 0.023109 (0.007486) | 0.262638 / 0.275898 (-0.013260) | 0.276287 / 0.323480 (-0.047193) | 0.003955 / 0.007986 (-0.004030) | 0.002667 / 0.004328 (-0.001661) | 0.047827 / 0.004250 (0.043576) | 0.041170 / 0.037052 (0.004118) | 0.252494 / 0.258489 (-0.005995) | 0.277493 / 0.293841 (-0.016348) | 0.027269 / 0.128546 (-0.101277) | 0.010380 / 0.075646 (-0.065266) | 0.204404 / 0.419271 (-0.214867) | 0.035251 / 0.043533 (-0.008282) | 0.244368 / 0.255139 (-0.010771) | 0.258003 / 0.283200 (-0.025197) | 0.016751 / 0.141683 (-0.124932) | 1.134108 / 1.452155 (-0.318047) | 1.159969 / 1.492716 (-0.332748) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.087011 / 0.018006 (0.069004) | 0.295577 / 0.000490 (0.295087) | 0.000213 / 0.000200 (0.000013) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017993 / 0.037411 (-0.019419) | 0.061690 / 0.014526 (0.047164) | 0.071791 / 0.176557 (-0.104765) | 0.118282 / 0.737135 (-0.618853) | 0.073453 / 0.296338 (-0.222885) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284764 / 0.215209 (0.069555) | 2.771791 / 2.077655 (0.694136) | 1.469614 / 1.504120 (-0.034506) | 1.334096 / 1.541195 (-0.207099) | 1.339995 / 1.468490 (-0.128495) | 0.562740 / 4.584777 (-4.022037) | 2.390219 / 3.745712 (-1.355493) | 2.679776 / 5.269862 (-2.590086) | 1.684397 / 4.565676 (-2.881279) | 0.062137 / 0.424275 (-0.362138) | 0.004934 / 0.007607 (-0.002673) | 0.336257 / 0.226044 (0.110212) | 3.256330 / 2.268929 (0.987401) | 1.801520 / 55.444624 (-53.643105) | 1.520662 / 6.876477 (-5.355815) | 1.537023 / 2.142072 (-0.605049) | 0.644360 / 4.805227 (-4.160867) | 0.115603 / 6.500664 (-6.385061) | 0.040601 / 0.075469 (-0.034868) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982992 / 1.841788 (-0.858796) | 11.002182 / 8.074308 (2.927873) | 9.564671 / 10.191392 (-0.626721) | 0.137682 / 0.680424 (-0.542742) | 0.013936 / 0.534201 (-0.520265) | 0.285898 / 0.579283 (-0.293385) | 0.264426 / 0.434364 (-0.169938) | 0.321615 / 0.540337 (-0.218723) | 0.420216 / 1.386936 (-0.966720) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005239 / 0.011353 (-0.006114) | 0.003165 / 0.011008 (-0.007844) | 0.048176 / 0.038508 (0.009668) | 0.030680 / 0.023109 (0.007571) | 0.258176 / 0.275898 (-0.017722) | 0.282342 / 0.323480 (-0.041138) | 0.004218 / 0.007986 (-0.003767) | 0.002616 / 0.004328 (-0.001713) | 0.047253 / 0.004250 (0.043003) | 0.044178 / 0.037052 (0.007126) | 0.276942 / 0.258489 (0.018453) | 0.312353 / 0.293841 (0.018512) | 0.046714 / 0.128546 (-0.081832) | 0.009892 / 0.075646 (-0.065755) | 0.056123 / 0.419271 (-0.363149) | 0.032691 / 0.043533 (-0.010842) | 0.268781 / 0.255139 (0.013642) | 0.285921 / 0.283200 (0.002722) | 0.016050 / 0.141683 (-0.125633) | 1.138058 / 1.452155 (-0.314096) | 1.193405 / 1.492716 (-0.299311) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089280 / 0.018006 (0.071273) | 0.288425 / 0.000490 (0.287935) | 0.000201 / 0.000200 (0.000001) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021536 / 0.037411 (-0.015875) | 0.075157 / 0.014526 (0.060631) | 0.088943 / 0.176557 (-0.087613) | 0.125191 / 0.737135 (-0.611945) | 0.087991 / 0.296338 (-0.208348) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285103 / 0.215209 (0.069894) | 2.791798 / 2.077655 (0.714144) | 1.518104 / 1.504120 (0.013984) | 1.388690 / 1.541195 (-0.152505) | 1.409896 / 1.468490 (-0.058594) | 0.554077 / 4.584777 (-4.030700) | 2.396994 / 3.745712 (-1.348718) | 2.596801 / 5.269862 (-2.673060) | 1.683761 / 4.565676 (-2.881915) | 0.061209 / 0.424275 (-0.363066) | 0.004735 / 0.007607 (-0.002873) | 0.337566 / 0.226044 (0.111522) | 3.258183 / 2.268929 (0.989254) | 1.886185 / 55.444624 (-53.558439) | 1.599148 / 6.876477 (-5.277329) | 1.726867 / 2.142072 (-0.415206) | 0.642784 / 4.805227 (-4.162444) | 0.114947 / 6.500664 (-6.385717) | 0.040450 / 0.075469 (-0.035019) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001316 / 1.841788 (-0.840472) | 11.695367 / 8.074308 (3.621058) | 9.854870 / 10.191392 (-0.336522) | 0.136462 / 0.680424 (-0.543961) | 0.016708 / 0.534201 (-0.517493) | 0.286421 / 0.579283 (-0.292862) | 0.270773 / 0.434364 (-0.163591) | 0.322947 / 0.540337 (-0.217390) | 0.416772 / 1.386936 (-0.970164) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6ba542847314bd349301937e59c3de04ce13aa5e \"CML watermark\")\n" ]
Set dev version
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6680/reactions" }
PR_kwDODunzps5nRHcz
{ "diff_url": "https://github.com/huggingface/datasets/pull/6680.diff", "html_url": "https://github.com/huggingface/datasets/pull/6680", "merged_at": "2024-02-19T10:00:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/6680.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6680" }
2024-02-19T10:00:31Z
https://api.github.com/repos/huggingface/datasets/issues/6680/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6680/timeline
closed
false
6,680
null
2024-02-19T10:00:40Z
null
true
2141953981
https://api.github.com/repos/huggingface/datasets/issues/6679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6679/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2024-02-28T06:56:35Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6679
MEMBER
completed
null
null
[]
Node.js 16 GitHub Actions are deprecated
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6679/reactions" }
I_kwDODunzps5_q5-9
null
2024-02-19T09:47:37Z
https://api.github.com/repos/huggingface/datasets/issues/6679/comments
`Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/ We should update them to Node 20. See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678 > Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: actions/checkout@v3, actions/setup-python@v4. For more information see: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6679/timeline
closed
false
6,679
null
2024-02-28T06:56:35Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,141,902,154
https://api.github.com/repos/huggingface/datasets/issues/6678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6678/events
[]
null
2024-02-19T10:03:00Z
[]
https://github.com/huggingface/datasets/pull/6678
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6678). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005070 / 0.011353 (-0.006283) | 0.003685 / 0.011008 (-0.007323) | 0.063191 / 0.038508 (0.024683) | 0.030506 / 0.023109 (0.007397) | 0.258033 / 0.275898 (-0.017865) | 0.269790 / 0.323480 (-0.053690) | 0.004180 / 0.007986 (-0.003805) | 0.002811 / 0.004328 (-0.001517) | 0.048718 / 0.004250 (0.044467) | 0.043473 / 0.037052 (0.006421) | 0.267306 / 0.258489 (0.008817) | 0.290315 / 0.293841 (-0.003526) | 0.027402 / 0.128546 (-0.101144) | 0.010782 / 0.075646 (-0.064864) | 0.207243 / 0.419271 (-0.212029) | 0.035637 / 0.043533 (-0.007896) | 0.264032 / 0.255139 (0.008893) | 0.270450 / 0.283200 (-0.012749) | 0.017407 / 0.141683 (-0.124276) | 1.107481 / 1.452155 (-0.344674) | 1.163187 / 1.492716 (-0.329529) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095065 / 0.018006 (0.077059) | 0.305169 / 0.000490 (0.304680) | 0.000221 / 0.000200 (0.000021) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017706 / 0.037411 (-0.019706) | 0.061431 / 0.014526 (0.046905) | 0.073541 / 0.176557 (-0.103016) | 0.117326 / 0.737135 (-0.619809) | 0.074368 / 0.296338 (-0.221971) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284533 / 0.215209 (0.069324) | 2.775230 / 2.077655 (0.697575) | 1.455196 / 1.504120 (-0.048924) | 1.357651 / 1.541195 (-0.183544) | 1.337477 / 1.468490 (-0.131013) | 0.567439 / 4.584777 (-4.017338) | 2.380612 / 3.745712 (-1.365100) | 2.792305 / 5.269862 (-2.477556) | 1.726501 / 4.565676 (-2.839176) | 0.061729 / 0.424275 (-0.362546) | 0.004928 / 0.007607 (-0.002679) | 0.331989 / 0.226044 (0.105944) | 3.301704 / 2.268929 (1.032776) | 1.805107 / 55.444624 (-53.639518) | 1.500434 / 6.876477 (-5.376043) | 1.535548 / 2.142072 (-0.606524) | 0.639490 / 4.805227 (-4.165737) | 0.115876 / 6.500664 (-6.384788) | 0.041895 / 0.075469 (-0.033574) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.993584 / 1.841788 (-0.848203) | 11.596680 / 8.074308 (3.522371) | 9.631726 / 10.191392 (-0.559666) | 0.141153 / 0.680424 (-0.539271) | 0.014077 / 0.534201 (-0.520124) | 0.288237 / 0.579283 (-0.291046) | 0.261213 / 0.434364 (-0.173151) | 0.323897 / 0.540337 (-0.216441) | 0.420350 / 1.386936 (-0.966586) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005275 / 0.011353 (-0.006078) | 0.003739 / 0.011008 (-0.007269) | 0.049801 / 0.038508 (0.011293) | 0.030544 / 0.023109 (0.007435) | 0.264835 / 0.275898 (-0.011063) | 0.297738 / 0.323480 (-0.025742) | 0.004487 / 0.007986 (-0.003499) | 0.002835 / 0.004328 (-0.001493) | 0.048091 / 0.004250 (0.043841) | 0.044375 / 0.037052 (0.007322) | 0.286538 / 0.258489 (0.028049) | 0.319561 / 0.293841 (0.025720) | 0.047925 / 0.128546 (-0.080621) | 0.010816 / 0.075646 (-0.064831) | 0.057940 / 0.419271 (-0.361331) | 0.033588 / 0.043533 (-0.009945) | 0.270075 / 0.255139 (0.014936) | 0.290441 / 0.283200 (0.007242) | 0.017173 / 0.141683 (-0.124509) | 1.164686 / 1.452155 (-0.287469) | 1.213205 / 1.492716 (-0.279511) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093408 / 0.018006 (0.075402) | 0.305525 / 0.000490 (0.305036) | 0.000235 / 0.000200 (0.000035) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021605 / 0.037411 (-0.015806) | 0.075479 / 0.014526 (0.060953) | 0.085990 / 0.176557 (-0.090567) | 0.124783 / 0.737135 (-0.612352) | 0.089108 / 0.296338 (-0.207230) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.306222 / 0.215209 (0.091013) | 2.987282 / 2.077655 (0.909627) | 1.664714 / 1.504120 (0.160594) | 1.523136 / 1.541195 (-0.018059) | 1.534112 / 1.468490 (0.065622) | 0.566347 / 4.584777 (-4.018430) | 2.438641 / 3.745712 (-1.307071) | 2.669048 / 5.269862 (-2.600814) | 1.732935 / 4.565676 (-2.832741) | 0.063460 / 0.424275 (-0.360815) | 0.004973 / 0.007607 (-0.002634) | 0.366233 / 0.226044 (0.140189) | 3.553578 / 2.268929 (1.284649) | 1.984343 / 55.444624 (-53.460281) | 1.711038 / 6.876477 (-5.165439) | 1.857346 / 2.142072 (-0.284726) | 0.651077 / 4.805227 (-4.154150) | 0.118670 / 6.500664 (-6.381994) | 0.041839 / 0.075469 (-0.033631) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008230 / 1.841788 (-0.833558) | 12.047403 / 8.074308 (3.973095) | 10.039053 / 10.191392 (-0.152339) | 0.141640 / 0.680424 (-0.538784) | 0.014758 / 0.534201 (-0.519443) | 0.285016 / 0.579283 (-0.294267) | 0.275461 / 0.434364 (-0.158903) | 0.325535 / 0.540337 (-0.214803) | 0.415871 / 1.386936 (-0.971065) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5d2268261bf0fb3eed8faae6bc1fa20a25b4382c \"CML watermark\")\n" ]
Release: 2.17.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6678/reactions" }
PR_kwDODunzps5nQ2ZO
{ "diff_url": "https://github.com/huggingface/datasets/pull/6678.diff", "html_url": "https://github.com/huggingface/datasets/pull/6678", "merged_at": "2024-02-19T09:56:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/6678.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6678" }
2024-02-19T09:24:29Z
https://api.github.com/repos/huggingface/datasets/issues/6678/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6678/timeline
closed
false
6,678
null
2024-02-19T09:56:52Z
null
true
2,141,244,167
https://api.github.com/repos/huggingface/datasets/issues/6677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6677/events
[]
null
2024-02-28T18:57:39Z
[]
https://github.com/huggingface/datasets/pull/6677
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6677). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007162 / 0.011353 (-0.004191) | 0.004125 / 0.011008 (-0.006883) | 0.064011 / 0.038508 (0.025503) | 0.031795 / 0.023109 (0.008686) | 0.248761 / 0.275898 (-0.027137) | 0.275130 / 0.323480 (-0.048350) | 0.003138 / 0.007986 (-0.004847) | 0.002736 / 0.004328 (-0.001592) | 0.050515 / 0.004250 (0.046264) | 0.044787 / 0.037052 (0.007735) | 0.261997 / 0.258489 (0.003507) | 0.292170 / 0.293841 (-0.001671) | 0.028122 / 0.128546 (-0.100424) | 0.010780 / 0.075646 (-0.064866) | 0.208805 / 0.419271 (-0.210467) | 0.036362 / 0.043533 (-0.007171) | 0.251599 / 0.255139 (-0.003540) | 0.271200 / 0.283200 (-0.012000) | 0.020215 / 0.141683 (-0.121468) | 1.133352 / 1.452155 (-0.318803) | 1.185240 / 1.492716 (-0.307477) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089990 / 0.018006 (0.071984) | 0.298099 / 0.000490 (0.297609) | 0.000221 / 0.000200 (0.000021) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018432 / 0.037411 (-0.018980) | 0.062641 / 0.014526 (0.048115) | 0.075210 / 0.176557 (-0.101346) | 0.122239 / 0.737135 (-0.614897) | 0.078914 / 0.296338 (-0.217424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287682 / 0.215209 (0.072473) | 2.815030 / 2.077655 (0.737375) | 1.499512 / 1.504120 (-0.004607) | 1.370210 / 1.541195 (-0.170985) | 1.381944 / 1.468490 (-0.086546) | 0.571645 / 4.584777 (-4.013132) | 2.377773 / 3.745712 (-1.367939) | 2.757206 / 5.269862 (-2.512655) | 1.717159 / 4.565676 (-2.848518) | 0.063038 / 0.424275 (-0.361237) | 0.004913 / 0.007607 (-0.002694) | 0.340854 / 0.226044 (0.114810) | 3.348087 / 2.268929 (1.079159) | 1.843123 / 55.444624 (-53.601502) | 1.569714 / 6.876477 (-5.306763) | 1.593791 / 2.142072 (-0.548281) | 0.642865 / 4.805227 (-4.162362) | 0.116933 / 6.500664 (-6.383731) | 0.041891 / 0.075469 (-0.033578) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.976453 / 1.841788 (-0.865334) | 12.229986 / 8.074308 (4.155678) | 9.617912 / 10.191392 (-0.573480) | 0.141292 / 0.680424 (-0.539132) | 0.013732 / 0.534201 (-0.520469) | 0.291424 / 0.579283 (-0.287859) | 0.264748 / 0.434364 (-0.169616) | 0.345262 / 0.540337 (-0.195075) | 0.445126 / 1.386936 (-0.941810) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005286 / 0.011353 (-0.006067) | 0.003749 / 0.011008 (-0.007259) | 0.049070 / 0.038508 (0.010562) | 0.031779 / 0.023109 (0.008670) | 0.275636 / 0.275898 (-0.000262) | 0.296956 / 0.323480 (-0.026524) | 0.004278 / 0.007986 (-0.003708) | 0.002702 / 0.004328 (-0.001626) | 0.049658 / 0.004250 (0.045408) | 0.046025 / 0.037052 (0.008973) | 0.293238 / 0.258489 (0.034749) | 0.316676 / 0.293841 (0.022835) | 0.029277 / 0.128546 (-0.099269) | 0.010096 / 0.075646 (-0.065550) | 0.059861 / 0.419271 (-0.359411) | 0.054310 / 0.043533 (0.010778) | 0.275025 / 0.255139 (0.019886) | 0.292995 / 0.283200 (0.009796) | 0.018448 / 0.141683 (-0.123235) | 1.150805 / 1.452155 (-0.301350) | 1.178310 / 1.492716 (-0.314406) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092644 / 0.018006 (0.074638) | 0.297979 / 0.000490 (0.297489) | 0.000207 / 0.000200 (0.000007) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021758 / 0.037411 (-0.015654) | 0.076734 / 0.014526 (0.062208) | 0.088522 / 0.176557 (-0.088034) | 0.126190 / 0.737135 (-0.610945) | 0.090466 / 0.296338 (-0.205873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305355 / 0.215209 (0.090146) | 2.978927 / 2.077655 (0.901272) | 1.612312 / 1.504120 (0.108192) | 1.485829 / 1.541195 (-0.055366) | 1.513303 / 1.468490 (0.044813) | 0.592368 / 4.584777 (-3.992409) | 2.448529 / 3.745712 (-1.297183) | 2.713460 / 5.269862 (-2.556402) | 1.803859 / 4.565676 (-2.761817) | 0.065630 / 0.424275 (-0.358645) | 0.005072 / 0.007607 (-0.002535) | 0.358340 / 0.226044 (0.132295) | 3.528516 / 2.268929 (1.259588) | 1.977901 / 55.444624 (-53.466723) | 1.692526 / 6.876477 (-5.183950) | 1.858405 / 2.142072 (-0.283668) | 0.676169 / 4.805227 (-4.129059) | 0.121136 / 6.500664 (-6.379528) | 0.041384 / 0.075469 (-0.034085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.011801 / 1.841788 (-0.829987) | 12.496459 / 8.074308 (4.422151) | 10.465659 / 10.191392 (0.274267) | 0.154121 / 0.680424 (-0.526302) | 0.016796 / 0.534201 (-0.517405) | 0.288908 / 0.579283 (-0.290376) | 0.274328 / 0.434364 (-0.160036) | 0.322366 / 0.540337 (-0.217971) | 0.423498 / 1.386936 (-0.963438) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#52b9273b5ddbcadfdb512a693bc813b21e863b1b \"CML watermark\")\n" ]
Pass through information about location of cache directory.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6677/reactions" }
PR_kwDODunzps5nOmo_
{ "diff_url": "https://github.com/huggingface/datasets/pull/6677.diff", "html_url": "https://github.com/huggingface/datasets/pull/6677", "merged_at": "2024-02-28T18:51:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/6677.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6677" }
2024-02-18T23:48:57Z
https://api.github.com/repos/huggingface/datasets/issues/6677/comments
If a cache directory is set, that information is not passed through. Pass the download config in as an argument too.
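For context, a minimal sketch of the user-facing pattern this PR touches (the paths and file names here are illustrative; the exact internal plumbing is in the PR diff):

```python
from datasets import DownloadConfig, load_dataset

# A custom cache location should be honored both for raw downloads and for
# the prepared dataset; the issue was that parts of the pipeline fell back
# to the default cache instead.
cache_dir = "/tmp/hf_cache"  # illustrative path
download_config = DownloadConfig(cache_dir=cache_dir)

ds = load_dataset(
    "csv",
    data_files="data.csv",  # illustrative file
    cache_dir=cache_dir,
    download_config=download_config,
)
```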
{ "avatar_url": "https://avatars.githubusercontent.com/u/94808782?v=4", "events_url": "https://api.github.com/users/stridge-cruxml/events{/privacy}", "followers_url": "https://api.github.com/users/stridge-cruxml/followers", "following_url": "https://api.github.com/users/stridge-cruxml/following{/other_user}", "gists_url": "https://api.github.com/users/stridge-cruxml/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stridge-cruxml", "id": 94808782, "login": "stridge-cruxml", "node_id": "U_kgDOBaaqzg", "organizations_url": "https://api.github.com/users/stridge-cruxml/orgs", "received_events_url": "https://api.github.com/users/stridge-cruxml/received_events", "repos_url": "https://api.github.com/users/stridge-cruxml/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stridge-cruxml/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stridge-cruxml/subscriptions", "type": "User", "url": "https://api.github.com/users/stridge-cruxml" }
https://api.github.com/repos/huggingface/datasets/issues/6677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6677/timeline
closed
false
6,677
null
2024-02-28T18:51:15Z
null
true
2,140,648,619
https://api.github.com/repos/huggingface/datasets/issues/6676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6676/events
[]
null
2024-03-02T20:47:22Z
[]
https://github.com/huggingface/datasets/issues/6676
NONE
null
null
null
[ "Found the issue, if there are other files in the directory, it gets caught into this `*` so essentially it should be `*.json`. Could we possibly to check for list of files to make sure the pattern matches json files and raise error if not?", "I don't think we should filter for `*.json` as this might silently remove desired files for many users. And this could be a major breaking change for many organizations.\r\n\r\nYou could do the globbing yourself which would keep the code clean.\r\n\r\n```python\r\nfrom glob import glob\r\n\r\nDataset.from_json(glob('folder/*.json'))\r\n```", "I think it should still be fine to log a warning message in case the folder contains different files? I also don't get why would this be breaking as in the end using `from_FILE_TYPE` should be able to read a specific file type only. Maybe some other use case I am not aware of but since globbing or this case not mentioned anywhere in the doc, I spent quite a bit of time trying to figure out where the issue was. Just making sure it's clear for users." ]
Can't Read List of JSON Files Properly
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6676/reactions" }
I_kwDODunzps5_l7Sr
null
2024-02-17T22:58:15Z
https://api.github.com/repos/huggingface/datasets/issues/6676/comments
### Describe the bug Trying to read a bunch of JSON files into the Dataset class, but the default approach doesn't work. I don't get why it works when I read the files one by one but not when I pass them all at once via a glob pattern :man_shrugging: The code fails with ``` ArrowInvalid: JSON parse error: Invalid value. in row 0 UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug This doesn't work (see the workaround sketch below): ``` from datasets import Dataset # dir contains 100 json files. Dataset.from_json("/PUT SOME PATH HERE/*") ``` This works: ``` from datasets import concatenate_datasets ls_ds = [] for file in list_of_json_files: ls_ds.append(Dataset.from_json(file)) ds = concatenate_datasets(ls_ds) ``` ### Expected behavior I expect this to read the JSON files properly; the error message is not clear. ### Environment info - `datasets` version: 2.17.0 - Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.2 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
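Since `Dataset.from_json` also accepts a list of paths, a minimal workaround sketch (with `folder` as a placeholder for the directory above) is to glob for the JSON files explicitly instead of relying on `*`:

```python
from glob import glob

from datasets import Dataset

# Match only the JSON files, so unrelated files in the directory are skipped.
json_files = sorted(glob("folder/*.json"))

# from_json accepts a list of paths, so no manual concatenate_datasets loop
# is needed.
ds = Dataset.from_json(json_files)
```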
{ "avatar_url": "https://avatars.githubusercontent.com/u/20232088?v=4", "events_url": "https://api.github.com/users/lordsoffallen/events{/privacy}", "followers_url": "https://api.github.com/users/lordsoffallen/followers", "following_url": "https://api.github.com/users/lordsoffallen/following{/other_user}", "gists_url": "https://api.github.com/users/lordsoffallen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lordsoffallen", "id": 20232088, "login": "lordsoffallen", "node_id": "MDQ6VXNlcjIwMjMyMDg4", "organizations_url": "https://api.github.com/users/lordsoffallen/orgs", "received_events_url": "https://api.github.com/users/lordsoffallen/received_events", "repos_url": "https://api.github.com/users/lordsoffallen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lordsoffallen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lordsoffallen/subscriptions", "type": "User", "url": "https://api.github.com/users/lordsoffallen" }
https://api.github.com/repos/huggingface/datasets/issues/6676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6676/timeline
open
false
6,676
null
null
null
false
2,139,640,381
https://api.github.com/repos/huggingface/datasets/issues/6675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6675/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-03-18T15:41:34Z
[]
https://github.com/huggingface/datasets/issues/6675
NONE
completed
null
null
[ "It would be a great addition indeed :)\r\n\r\nThis can be implemented the same way we have `sampling_rate` for Audio(): we just add a new parameter to the Image() type and take this parameter into account in `Image.decode_example`\r\n\r\nEDIT: adding an example of how it can be used:\r\n\r\n```python\r\nds = ds.cast_column(\"image\", Image(mode=...))\r\n```" ]
Allow image mode (color conversion) to be specified as part of datasets Image() decode
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6675/reactions" }
I_kwDODunzps5_iFI9
null
2024-02-16T23:43:20Z
https://api.github.com/repos/huggingface/datasets/issues/6675/comments
### Feature request Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image, where convert is usually called in the dataset, for native torchvision https://pytorch.org/vision/main/generated/torchvision.io.decode_jpeg.html, and similarly in tensorflow.data pipelines, where decode_jpeg and https://www.tensorflow.org/api_docs/python/tf/io/decode_and_crop_jpeg have a channels arg that allows controlling the image mode in the decode step. datasets currently requires this pattern (from [examples](https://huggingface.co/docs/datasets/main/en/image_process)): ``` from torchvision.transforms import Compose, ColorJitter, ToTensor jitter = Compose( [ ColorJitter(brightness=0.25, contrast=0.25, saturation=0.25, hue=0.7), ToTensor(), ] ) def transforms(examples): examples["pixel_values"] = [jitter(image.convert("RGB")) for image in examples["image"]] return examples ``` ### Motivation It would be nice to be able to handle `image.convert("RGB")` (or other modes) in the decode step, before applying torchvision transforms. This would reduce code differences between pipelines built on torchvision, webdataset, or HF datasets, and avoid handling the image mode argument in two different stages of the pipeline... ### Your contribution Can do a PR with guidance on how mode should be passed / set on the dataset.
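A sketch of what the requested API could look like for the pattern above, assuming the hypothetical `mode` parameter suggested in the comments (mirroring `sampling_rate` on `Audio()`); `ds` is a dataset with an "image" column and `jitter` is the transform defined in the example above:

```python
from datasets import Image

# Hypothetical: ask the Image feature to convert at decode time, so the
# transform stack can assume a fixed channel layout.
ds = ds.cast_column("image", Image(mode="RGB"))

def transforms(examples):
    # No per-image .convert("RGB") needed here anymore.
    examples["pixel_values"] = [jitter(image) for image in examples["image"]]
    return examples
```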
{ "avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4", "events_url": "https://api.github.com/users/rwightman/events{/privacy}", "followers_url": "https://api.github.com/users/rwightman/followers", "following_url": "https://api.github.com/users/rwightman/following{/other_user}", "gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rwightman", "id": 5702664, "login": "rwightman", "node_id": "MDQ6VXNlcjU3MDI2NjQ=", "organizations_url": "https://api.github.com/users/rwightman/orgs", "received_events_url": "https://api.github.com/users/rwightman/received_events", "repos_url": "https://api.github.com/users/rwightman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rwightman/subscriptions", "type": "User", "url": "https://api.github.com/users/rwightman" }
https://api.github.com/repos/huggingface/datasets/issues/6675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6675/timeline
closed
false
6,675
null
2024-03-18T15:41:34Z
null
false
2,139,595,576
https://api.github.com/repos/huggingface/datasets/issues/6674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6674/events
[]
null
2024-02-25T18:48:09Z
[]
https://github.com/huggingface/datasets/issues/6674
CONTRIBUTOR
completed
null
null
[ "Good catch! Feel free to open a PR to fix the link." ]
Deprecated Overview.ipynb link to new Quickstart notebook is invalid
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6674/reactions" }
I_kwDODunzps5_h6M4
null
2024-02-16T22:51:35Z
https://api.github.com/repos/huggingface/datasets/issues/6674/comments
### Describe the bug In the deprecated notebook found [here](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb), the link to the new notebook is broken. ### Steps to reproduce the bug Click the [Quickstart notebook](https://github.com/huggingface/notebooks/blob/main/datasets_doc/quickstart.ipynb) link in the notebook. ### Expected behavior I believe it is supposed to link [here](https://github.com/huggingface/notebooks/blob/main/datasets_doc/en/quickstart.ipynb), which is the link mentioned in the README. ### Environment info Colab
{ "avatar_url": "https://avatars.githubusercontent.com/u/55932554?v=4", "events_url": "https://api.github.com/users/Codeblockz/events{/privacy}", "followers_url": "https://api.github.com/users/Codeblockz/followers", "following_url": "https://api.github.com/users/Codeblockz/following{/other_user}", "gists_url": "https://api.github.com/users/Codeblockz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Codeblockz", "id": 55932554, "login": "Codeblockz", "node_id": "MDQ6VXNlcjU1OTMyNTU0", "organizations_url": "https://api.github.com/users/Codeblockz/orgs", "received_events_url": "https://api.github.com/users/Codeblockz/received_events", "repos_url": "https://api.github.com/users/Codeblockz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Codeblockz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Codeblockz/subscriptions", "type": "User", "url": "https://api.github.com/users/Codeblockz" }
https://api.github.com/repos/huggingface/datasets/issues/6674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6674/timeline
closed
false
6,674
null
2024-02-25T18:48:09Z
null
false
2,139,522,827
https://api.github.com/repos/huggingface/datasets/issues/6673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6673/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
null
2024-07-01T17:45:31Z
[]
https://github.com/huggingface/datasets/issues/6673
NONE
completed
null
null
[]
IterableDataset `set_epoch` is ignored when DataLoader `persistent_workers=True`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6673/reactions" }
I_kwDODunzps5_hocL
null
2024-02-16T21:38:12Z
https://api.github.com/repos/huggingface/datasets/issues/6673/comments
### Describe the bug When persistent workers are enabled, the epoch that's set via the IterableDataset instance held by the training process is ignored by the workers as they are disconnected across processes. PyTorch samplers for non-iterable datasets have a mechanism to sync this, datasets.IterableDataset does not. In my own use of IterableDatasets I usually track the epoch count which crosses process boundaries in a multiprocessing.Value ### Steps to reproduce the bug Use a streaming dataset (Iterable) w/ the recommended pattern below and `persistent_workers=True` in the torch DataLoader. ``` for epoch in range(epochs): shuffled_dataset.set_epoch(epoch) for example in shuffled_dataset: ... ``` ### Expected behavior When the canonical bit of code above is used with `num_workers > 0` and `persistent_workers=True`, the epoch set via `set_epoch()` is propagated to the IterableDataset instances in the worker processes ### Environment info N/A
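A minimal sketch of the workaround the author describes, sharing the epoch across persistent worker processes with a `multiprocessing.Value` (the wrapper class and its names are illustrative, and this relies on the workers inheriting the shared value, e.g. via fork on Linux):

```python
import multiprocessing as mp

from torch.utils.data import IterableDataset


class EpochAwareIterable(IterableDataset):
    def __init__(self, hf_iterable):
        self.hf_iterable = hf_iterable
        # Shared integer that stays visible to long-lived worker processes.
        self._epoch = mp.Value("i", 0)

    def set_epoch(self, epoch):
        with self._epoch.get_lock():
            self._epoch.value = epoch

    def __iter__(self):
        # Read the shared value inside the worker at iteration time, so the
        # epoch set by the training process is picked up even when
        # persistent_workers=True keeps this worker alive across epochs.
        self.hf_iterable.set_epoch(self._epoch.value)
        yield from self.hf_iterable
```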
{ "avatar_url": "https://avatars.githubusercontent.com/u/5702664?v=4", "events_url": "https://api.github.com/users/rwightman/events{/privacy}", "followers_url": "https://api.github.com/users/rwightman/followers", "following_url": "https://api.github.com/users/rwightman/following{/other_user}", "gists_url": "https://api.github.com/users/rwightman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rwightman", "id": 5702664, "login": "rwightman", "node_id": "MDQ6VXNlcjU3MDI2NjQ=", "organizations_url": "https://api.github.com/users/rwightman/orgs", "received_events_url": "https://api.github.com/users/rwightman/received_events", "repos_url": "https://api.github.com/users/rwightman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rwightman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rwightman/subscriptions", "type": "User", "url": "https://api.github.com/users/rwightman" }
https://api.github.com/repos/huggingface/datasets/issues/6673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6673/timeline
closed
false
6,673
null
2024-07-01T17:45:31Z
null
false
2,138,732,288
https://api.github.com/repos/huggingface/datasets/issues/6672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6672/events
[]
null
2024-02-19T09:26:34Z
[]
https://github.com/huggingface/datasets/pull/6672
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I am merging this PR (so that it is included in the next patch release) to remove the deprecation warning raised by the CSV builder from pandas 2.2.0.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005374 / 0.011353 (-0.005979) | 0.003833 / 0.011008 (-0.007175) | 0.063465 / 0.038508 (0.024957) | 0.029564 / 0.023109 (0.006455) | 0.252759 / 0.275898 (-0.023139) | 0.274726 / 0.323480 (-0.048754) | 0.004014 / 0.007986 (-0.003971) | 0.002754 / 0.004328 (-0.001574) | 0.049351 / 0.004250 (0.045101) | 0.041858 / 0.037052 (0.004806) | 0.269023 / 0.258489 (0.010534) | 0.290670 / 0.293841 (-0.003171) | 0.028435 / 0.128546 (-0.100111) | 0.010988 / 0.075646 (-0.064658) | 0.207447 / 0.419271 (-0.211824) | 0.035945 / 0.043533 (-0.007588) | 0.257336 / 0.255139 (0.002197) | 0.267310 / 0.283200 (-0.015890) | 0.018575 / 0.141683 (-0.123108) | 1.144515 / 1.452155 (-0.307640) | 1.214614 / 1.492716 (-0.278102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103527 / 0.018006 (0.085521) | 0.310607 / 0.000490 (0.310117) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018597 / 0.037411 (-0.018814) | 0.063176 / 0.014526 (0.048650) | 0.073553 / 0.176557 (-0.103003) | 0.120648 / 0.737135 (-0.616487) | 0.075625 / 0.296338 (-0.220713) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | 
shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289148 / 0.215209 (0.073939) | 2.798351 / 2.077655 (0.720696) | 1.487909 / 1.504120 (-0.016211) | 1.369945 / 1.541195 (-0.171250) | 1.378889 / 1.468490 (-0.089602) | 0.569825 / 4.584777 (-4.014952) | 2.413309 / 3.745712 (-1.332403) | 2.795668 / 5.269862 (-2.474193) | 1.757748 / 4.565676 (-2.807929) | 0.064686 / 0.424275 (-0.359589) | 0.005027 / 0.007607 (-0.002580) | 0.341835 / 0.226044 (0.115791) | 3.349915 / 2.268929 (1.080987) | 1.864253 / 55.444624 (-53.580371) | 1.595788 / 6.876477 (-5.280688) | 1.666127 / 2.142072 (-0.475945) | 0.665239 / 4.805227 (-4.139989) | 0.120563 / 6.500664 (-6.380101) | 0.043649 / 0.075469 (-0.031820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988543 / 1.841788 (-0.853244) | 11.973275 / 8.074308 (3.898967) | 9.685401 / 10.191392 (-0.505991) | 0.141416 / 0.680424 (-0.539008) | 0.014328 / 0.534201 (-0.519873) | 0.287063 / 0.579283 (-0.292220) | 0.266284 / 0.434364 (-0.168080) | 0.324643 / 0.540337 (-0.215694) | 0.423845 / 1.386936 (-0.963091) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005430 / 0.011353 (-0.005923) | 0.003770 / 0.011008 (-0.007239) | 0.050879 / 0.038508 (0.012371) | 0.031929 / 0.023109 (0.008819) | 0.297739 / 0.275898 (0.021841) | 0.319380 / 0.323480 (-0.004100) | 0.004348 / 0.007986 (-0.003637) | 0.002783 / 0.004328 (-0.001545) | 0.050024 / 0.004250 (0.045774) | 0.045209 / 0.037052 (0.008157) | 0.307608 / 0.258489 (0.049119) | 0.338168 / 0.293841 (0.044327) | 0.051712 / 0.128546 (-0.076834) | 0.011092 / 0.075646 (-0.064554) | 0.059830 / 0.419271 (-0.359441) | 0.033894 / 0.043533 (-0.009638) | 0.295278 / 0.255139 (0.040139) | 0.310749 / 0.283200 (0.027550) | 0.018676 / 0.141683 (-0.123007) | 1.201086 / 1.452155 (-0.251069) | 1.258214 / 1.492716 (-0.234502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | 
get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094079 / 0.018006 (0.076073) | 0.304657 / 0.000490 (0.304168) | 0.000225 / 0.000200 (0.000026) | 0.000057 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021969 / 0.037411 (-0.015442) | 0.075749 / 0.014526 (0.061223) | 0.087878 / 0.176557 (-0.088679) | 0.126022 / 0.737135 (-0.611114) | 0.089466 / 0.296338 (-0.206873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286415 / 0.215209 (0.071206) | 2.831867 / 2.077655 (0.754212) | 1.584119 / 1.504120 (0.079999) | 1.468454 / 1.541195 (-0.072740) | 1.495831 / 1.468490 (0.027341) | 0.579569 / 4.584777 (-4.005208) | 2.477248 / 3.745712 (-1.268464) | 2.830536 / 5.269862 (-2.439325) | 1.820188 / 4.565676 (-2.745488) | 0.064408 / 0.424275 (-0.359867) | 0.005156 / 0.007607 (-0.002451) | 0.342391 / 0.226044 (0.116347) | 3.424380 / 2.268929 (1.155452) | 1.993110 / 55.444624 (-53.451514) | 1.702971 / 6.876477 (-5.173506) | 1.844281 / 2.142072 (-0.297792) | 0.668208 / 4.805227 (-4.137020) | 0.120306 / 6.500664 (-6.380358) | 0.042127 / 0.075469 (-0.033342) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.019118 / 1.841788 (-0.822670) | 12.418330 / 8.074308 (4.344022) | 10.474226 / 10.191392 (0.282834) | 0.148510 / 0.680424 (-0.531914) | 0.015107 / 0.534201 (-0.519094) | 0.289488 / 0.579283 (-0.289795) | 0.278149 / 0.434364 (-0.156215) | 0.334655 / 0.540337 (-0.205682) | 0.419127 / 1.386936 (-0.967809) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#58733d2824192fc748cc8730cf77c33be5ded2ea \"CML watermark\")\n" ]
Remove deprecated verbose parameter from CSV builder
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6672/reactions" }
PR_kwDODunzps5nGAlw
{ "diff_url": "https://github.com/huggingface/datasets/pull/6672.diff", "html_url": "https://github.com/huggingface/datasets/pull/6672", "merged_at": "2024-02-19T09:20:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/6672.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6672" }
2024-02-16T14:26:21Z
https://api.github.com/repos/huggingface/datasets/issues/6672/comments
Remove the deprecated `verbose` parameter from the CSV builder. Note that the `verbose` parameter has been deprecated since pandas 2.2.0. See: - https://github.com/pandas-dev/pandas/pull/56556 - https://github.com/pandas-dev/pandas/pull/57450 Fix #6671.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6672/timeline
closed
false
6,672
null
2024-02-19T09:20:22Z
null
true
2,138,727,870
https://api.github.com/repos/huggingface/datasets/issues/6671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6671/events
[]
null
2024-02-19T09:20:23Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6671
MEMBER
completed
null
null
[]
CSV builder raises deprecation warning on verbose parameter
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6671/reactions" }
I_kwDODunzps5_emW-
null
2024-02-16T14:23:46Z
https://api.github.com/repos/huggingface/datasets/issues/6671/comments
CSV builder raises a deprecation warning on `verbose` parameter: ``` FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version. ``` See: - https://github.com/pandas-dev/pandas/pull/56556 - https://github.com/pandas-dev/pandas/pull/57450
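The warning is easy to reproduce directly with pandas >= 2.2.0 (a minimal sketch; `data.csv` is a placeholder file):

```python
import pandas as pd

# Emits: FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated
# and will be removed in a future version.
pd.read_csv("data.csv", verbose=True)
```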
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6671/timeline
closed
false
6,671
null
2024-02-19T09:20:23Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,138,372,958
https://api.github.com/repos/huggingface/datasets/issues/6670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6670/events
[]
null
2024-02-17T04:26:34Z
[]
https://github.com/huggingface/datasets/issues/6670
NONE
completed
null
null
[ "Hi @prashanth19bolukonda,\r\n\r\nYou have to restart the notebook runtime session after the installation of `datasets`.\r\n\r\nDuplicate of:\r\n- #5923", "Thank you soo much\r\n\r\nOn Fri, Feb 16, 2024 at 8:14 PM Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> Closed #6670 <https://github.com/huggingface/datasets/issues/6670> as\r\n> completed.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6670#event-11829788289>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A2Y44YDQOBUFUWMR4C5O3QTYT5WDJAVCNFSM6AAAAABDL24S5SVHI2DSMVQWIX3LMV45UABCJFZXG5LFIV3GK3TUJZXXI2LGNFRWC5DJN5XDWMJRHAZDSNZYHAZDQOI>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n" ]
ValueError
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6670/reactions" }
I_kwDODunzps5_dPte
null
2024-02-16T11:05:17Z
https://api.github.com/repos/huggingface/datasets/issues/6670/comments
### Describe the bug ValueError Traceback (most recent call last) [<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>() 9 import numpy as np 10 import matplotlib.pyplot as plt ---> 11 from datasets import DatasetDict, Dataset 12 from transformers import AutoTokenizer, AutoModelForSequenceClassification 13 from transformers import Trainer, TrainingArguments 5 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 16 __version__ = "2.17.0" 17 ---> 18 from .arrow_dataset import Dataset 19 from .arrow_reader import ReadInstruction 20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 65 66 from . import config ---> 67 from .arrow_reader import ArrowReader 68 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 69 from .data_files import sanitize_patterns [/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module> 27 28 import pyarrow as pa ---> 29 import pyarrow.parquet as pq 30 from tqdm.contrib.concurrent import thread_map 31 [/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module> 18 # flake8: noqa 19 ---> 20 from .core import * [/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module> 34 import pyarrow as pa 35 import pyarrow.lib as lib ---> 36 import pyarrow._parquet as _parquet 37 38 from pyarrow._parquet import (ParquetReader, Statistics, # noqa /usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet() ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject ### Steps to reproduce the bug Import `datasets` in a Google Colab notebook; the import fails with: ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject ### Expected behavior Resolve the binary incompatibility ### Environment info Google Colab notebook
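A sketch of the workaround pointed to in the comments: reinstall matching wheels, then restart the Colab runtime so the stale pyarrow binary is actually unloaded (the `os.kill` trick is a common Colab idiom for forcing a restart, not an official API):

```python
# Run in a Colab cell:
#   !pip install -U datasets pyarrow
# Then restart the runtime; one programmatic way to do that:
import os

os.kill(os.getpid(), 9)  # Colab detects the dead kernel and restarts it
```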
{ "avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4", "events_url": "https://api.github.com/users/prashanth19bolukonda/events{/privacy}", "followers_url": "https://api.github.com/users/prashanth19bolukonda/followers", "following_url": "https://api.github.com/users/prashanth19bolukonda/following{/other_user}", "gists_url": "https://api.github.com/users/prashanth19bolukonda/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prashanth19bolukonda", "id": 112316000, "login": "prashanth19bolukonda", "node_id": "U_kgDOBrHOYA", "organizations_url": "https://api.github.com/users/prashanth19bolukonda/orgs", "received_events_url": "https://api.github.com/users/prashanth19bolukonda/received_events", "repos_url": "https://api.github.com/users/prashanth19bolukonda/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prashanth19bolukonda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prashanth19bolukonda/subscriptions", "type": "User", "url": "https://api.github.com/users/prashanth19bolukonda" }
https://api.github.com/repos/huggingface/datasets/issues/6670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6670/timeline
closed
false
6,670
null
2024-02-16T14:43:53Z
null
false
2,138,322,662
https://api.github.com/repos/huggingface/datasets/issues/6669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6669/events
[]
null
2024-03-01T10:58:00Z
[]
https://github.com/huggingface/datasets/issues/6669
NONE
completed
null
null
[ "Hi! Kaggle notebooks use an outdated version of `datasets`, so you should update the `datasets` installation (with `!pip install -U datasets`) to avoid the error.", "Thank you for your response\r\n\r\nOn Thu, Feb 29, 2024 at 10:55 PM Mario Šaško ***@***.***>\r\nwrote:\r\n\r\n> Closed #6669 <https://github.com/huggingface/datasets/issues/6669> as\r\n> completed.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6669#event-11969246964>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/A2Y44YG2RRVMYONNKPLBVE3YV5SAPAVCNFSM6AAAAABDLZ3BTSVHI2DSMVQWIX3LMV45UABCJFZXG5LFIV3GK3TUJZXXI2LGNFRWC5DJN5XDWMJRHE3DSMRUGY4TMNA>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n" ]
AttributeError when calling trainer.train()
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6669/reactions" }
I_kwDODunzps5_dDbm
null
2024-02-16T10:40:49Z
https://api.github.com/repos/huggingface/datasets/issues/6669/comments
### Describe the bug AttributeError Traceback (most recent call last) Cell In[39], line 2 1 # Start the training process ----> 2 trainer.train() File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1537 hf_hub_utils.enable_progress_bars() 1538 else: -> 1539 return inner_training_loop( 1540 args=args, 1541 resume_from_checkpoint=resume_from_checkpoint, 1542 trial=trial, 1543 ignore_keys_for_eval=ignore_keys_for_eval, 1544 ) File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1836, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1833 rng_to_sync = True 1835 step = -1 -> 1836 for step, inputs in enumerate(epoch_iterator): 1837 total_batched_samples += 1 1839 if self.args.include_num_input_tokens_seen: File /opt/conda/lib/python3.10/site-packages/accelerate/data_loader.py:451, in DataLoaderShard.__iter__(self) 449 # We iterate one batch ahead to check when we are at the end 450 try: --> 451 current_batch = next(dataloader_iter) 452 except StopIteration: 453 yield File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self) 627 if self._sampler_iter is None: 628 # TODO([https://github.com/pytorch/pytorch/issues/76750)](https://github.com/pytorch/pytorch/issues/76750)%3C/span%3E) 629 self._reset() # type: ignore[call-arg] --> 630 data = self._next_data() 631 self._num_yielded += 1 632 if self._dataset_kind == _DatasetKind.Iterable and \ 633 self._IterableDataset_len_called is not None and \ 634 self._num_yielded > self._IterableDataset_len_called: File /opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:674, in _SingleProcessDataLoaderIter._next_data(self) 672 def _next_data(self): 673 index = self._next_index() # may raise StopIteration --> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 675 if self._pin_memory: 676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in _MapDatasetFetcher.fetch(self, possibly_batched_index) 49 data = self.dataset.__getitems__(possibly_batched_index) 50 else: ---> 51 data = [self.dataset[idx] for idx in possibly_batched_index] 52 else: 53 data = self.dataset[possibly_batched_index] File /opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py:51, in <listcomp>(.0) 49 data = self.dataset.__getitems__(possibly_batched_index) 50 else: ---> 51 data = [self.dataset[idx] for idx in possibly_batched_index] 52 else: 53 data = self.dataset[possibly_batched_index] File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1764, in Dataset.__getitem__(self, key) 1762 def __getitem__(self, key): # noqa: F811 1763 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 1764 return self._getitem( 1765 key, 1766 ) File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:1749, in Dataset._getitem(self, key, decoded, **kwargs) 1747 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 1748 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 1749 formatted_output = format_table( 1750 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 1751 ) 1752 
return formatted_output File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:540, in format_table(table, key, formatter, format_columns, output_all_columns) 538 else: 539 pa_table_to_format = pa_table.drop(col for col in pa_table.column_names if col not in format_columns) --> 540 formatted_output = formatter(pa_table_to_format, query_type=query_type) 541 if output_all_columns: 542 if isinstance(formatted_output, MutableMapping): File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) File /opt/conda/lib/python3.10/site-packages/datasets/formatting/torch_formatter.py:57, in TorchFormatter.format_row(self, pa_table) 56 def format_row(self, pa_table: pa.Table) -> dict: ---> 57 row = self.numpy_arrow_extractor().extract_row(pa_table) 58 return self.recursive_tensorize(row) File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:154, in NumpyArrowExtractor.extract_row(self, pa_table) 153 def extract_row(self, pa_table: pa.Table) -> dict: --> 154 return _unnest(self.extract_batch(pa_table)) File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in NumpyArrowExtractor.extract_batch(self, pa_table) 159 def extract_batch(self, pa_table: pa.Table) -> dict: --> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names} File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:160, in <dictcomp>(.0) 159 def extract_batch(self, pa_table: pa.Table) -> dict: --> 160 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names} File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:196, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array) 194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist() 195 if len(array) > 0: --> 196 if any( 197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape)) 198 or (isinstance(x, float) and np.isnan(x)) 199 for x in array 200 ): 201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object}) 202 return np.array(array, copy=False, **self.np_array_kwargs) File /opt/conda/lib/python3.10/site-packages/datasets/formatting/formatting.py:197, in <genexpr>(.0) 194 array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist() 195 if len(array) > 0: 196 if any( --> 197 (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape)) 198 or (isinstance(x, float) and np.isnan(x)) 199 for x in array 200 ): 201 return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object}) 202 return np.array(array, copy=False, **self.np_array_kwargs) File /opt/conda/lib/python3.10/site-packages/numpy/__init__.py:324, in __getattr__(attr) 319 warnings.warn( 320 f"In the future `np.{attr}` will be defined as the " 321 "corresponding NumPy scalar.", FutureWarning, stacklevel=2) 323 if attr in __former_attrs__: --> 324 raise AttributeError(__former_attrs__[attr]) 326 if attr == 'testing': 327 import numpy.testing as testing AttributeError: module 'numpy' has no attribute 'object'. `np.object` was a deprecated alias for the builtin `object`. 
To avoid this error in existing code, use `object` by itself. Doing this will not modify any behavior and is safe. The aliases was originally deprecated in NumPy 1.20; for more details and guidance see the original release note at: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations Please help me to resolve the above error ### Steps to reproduce the bug Please resolve the issue of deprecated function np.object to object in the numpy ### Expected behavior np.object should be written as object only ### Environment info kaggle notebook
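The failing check in `formatting.py` references the `np.object` alias that NumPy removed in 1.24, so the two usual ways out are sketched below, assuming a notebook environment where either package can be repinned:

```python
# Option 1 (what the maintainers recommend in the comments):
# upgrade datasets so its formatting code uses the builtin `object`.
#   !pip install -U datasets

# Option 2: pin NumPy below 1.24, where np.object still exists
# (deprecated since 1.20, removed in 1.24).
#   !pip install "numpy<1.24"

import numpy as np

print(np.__version__)  # verify which NumPy the kernel actually loaded
```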
{ "avatar_url": "https://avatars.githubusercontent.com/u/112316000?v=4", "events_url": "https://api.github.com/users/prashanth19bolukonda/events{/privacy}", "followers_url": "https://api.github.com/users/prashanth19bolukonda/followers", "following_url": "https://api.github.com/users/prashanth19bolukonda/following{/other_user}", "gists_url": "https://api.github.com/users/prashanth19bolukonda/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prashanth19bolukonda", "id": 112316000, "login": "prashanth19bolukonda", "node_id": "U_kgDOBrHOYA", "organizations_url": "https://api.github.com/users/prashanth19bolukonda/orgs", "received_events_url": "https://api.github.com/users/prashanth19bolukonda/received_events", "repos_url": "https://api.github.com/users/prashanth19bolukonda/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prashanth19bolukonda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prashanth19bolukonda/subscriptions", "type": "User", "url": "https://api.github.com/users/prashanth19bolukonda" }
https://api.github.com/repos/huggingface/datasets/issues/6669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6669/timeline
closed
false
6,669
null
2024-02-29T17:25:17Z
null
false
2,137,859,935
https://api.github.com/repos/huggingface/datasets/issues/6668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6668/events
[]
null
2024-02-16T04:40:56Z
[]
https://github.com/huggingface/datasets/issues/6668
NONE
null
null
null
[]
Chapter 6 - Issue Loading `cnn_dailymail` dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6668/reactions" }
I_kwDODunzps5_bSdf
null
2024-02-16T04:40:56Z
https://api.github.com/repos/huggingface/datasets/issues/6668/comments
### Describe the bug So I am getting this bug when I try to run cell 4 of the Chapter 6 notebook code: `dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")` Error Message: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[4], line 4 1 #hide_output 2 from datasets import load_dataset ----> 4 dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0") 7 # dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0", trust_remote_code=True) 8 print(f"Features: {dataset['train'].column_names}") File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2583 # Build dataset for splits 2584 keep_in_memory = ( 2585 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2586 ) -> 2587 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) 2588 # Rename and cast features to match task schema 2589 if task is not None: 2590 # To avoid issuing the same warning twice File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1244, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1241 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS) 1243 # Create a dataset for each of the given splits -> 1244 datasets = map_nested( 1245 partial( 1246 self._build_single_dataset, 1247 run_post_process=run_post_process, 1248 verification_mode=verification_mode, 1249 in_memory=in_memory, 1250 ), 1251 split, 1252 map_tuple=True, 1253 disable_tqdm=True, 1254 ) 1255 if isinstance(datasets, dict): 1256 datasets = DatasetDict(datasets) File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:477, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc) 466 mapped = [ 467 map_nested( 468 function=function, (...) 474 for obj in iterable 475 ] 476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length: --> 477 mapped = [ 478 _single_map_nested((function, obj, types, None, True, None)) 479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) 480 ] 481 else: 482 with warnings.catch_warnings(): File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:478, in <listcomp>(.0) 466 mapped = [ 467 map_nested( 468 function=function, (...) 
474 for obj in iterable 475 ] 476 elif num_proc != -1 and num_proc <= 1 or len(iterable) < parallel_min_length: 477 mapped = [ --> 478 _single_map_nested((function, obj, types, None, True, None)) 479 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc) 480 ] 481 else: 482 with warnings.catch_warnings(): File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\utils\py_utils.py:370, in _single_map_nested(args) 368 # Singleton first to spare some computation 369 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 370 return function(data_struct) 372 # Reduce logging to keep things readable in multiprocessing with tqdm 373 if rank is not None and logging.get_verbosity() < logging.WARNING: File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1274, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory) 1271 split = Split(split) 1273 # Build base dataset -> 1274 ds = self._as_dataset( 1275 split=split, 1276 in_memory=in_memory, 1277 ) 1278 if run_post_process: 1279 for resource_file_name in self._post_processing_resources(split).values(): File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\builder.py:1348, in DatasetBuilder._as_dataset(self, split, in_memory) 1346 if self._check_legacy_cache(): 1347 dataset_name = self.name -> 1348 dataset_kwargs = ArrowReader(cache_dir, self.info).read( 1349 name=dataset_name, 1350 instructions=split, 1351 split_infos=self.info.splits.values(), 1352 in_memory=in_memory, 1353 ) 1354 fingerprint = self._get_dataset_fingerprint(split) 1355 return Dataset(fingerprint=fingerprint, **dataset_kwargs) File ~\anaconda3\envs\nlp-transformers\lib\site-packages\datasets\arrow_reader.py:254, in BaseReader.read(self, name, instructions, split_infos, in_memory) 252 if not files: 253 msg = f'Instruction "{instructions}" corresponds to no data!' --> 254 raise ValueError(msg) 255 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) **ValueError: Instruction "validation" corresponds to no data!** ``` Looks like the data is not being loaded. Any advice would be appreciated. Thanks! ### Steps to reproduce the bug Run all cells of Chapter 6 notebook. ### Expected behavior Data should load correctly without any errors. ### Environment info - `datasets` version: 2.17.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.18 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
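An error like `Instruction "validation" corresponds to no data!` usually points at an incomplete cache for that split; a possible workaround sketch, forcing the builder to redownload and rebuild all splits with the `download_mode` argument that `load_dataset` accepts:

```python
import datasets

# Rebuild the cached Arrow files so the missing "validation" split
# is regenerated; "3.0.0" is the dataset's config name.
dataset = datasets.load_dataset(
    "ccdv/cnn_dailymail",
    "3.0.0",
    download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
)
print(dataset["validation"])
```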
{ "avatar_url": "https://avatars.githubusercontent.com/u/34660389?v=4", "events_url": "https://api.github.com/users/hariravichandran/events{/privacy}", "followers_url": "https://api.github.com/users/hariravichandran/followers", "following_url": "https://api.github.com/users/hariravichandran/following{/other_user}", "gists_url": "https://api.github.com/users/hariravichandran/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hariravichandran", "id": 34660389, "login": "hariravichandran", "node_id": "MDQ6VXNlcjM0NjYwMzg5", "organizations_url": "https://api.github.com/users/hariravichandran/orgs", "received_events_url": "https://api.github.com/users/hariravichandran/received_events", "repos_url": "https://api.github.com/users/hariravichandran/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hariravichandran/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hariravichandran/subscriptions", "type": "User", "url": "https://api.github.com/users/hariravichandran" }
https://api.github.com/repos/huggingface/datasets/issues/6668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6668/timeline
open
false
6,668
null
null
null
false
2,137,769,552
https://api.github.com/repos/huggingface/datasets/issues/6667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6667/events
[]
null
2024-02-23T09:10:00Z
[]
https://github.com/huggingface/datasets/issues/6667
NONE
null
null
null
[ "you can try: pip install datasets==2.16.1" ]
Default config for squad is incorrect
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6667/reactions" }
I_kwDODunzps5_a8ZQ
null
2024-02-16T02:36:55Z
https://api.github.com/repos/huggingface/datasets/issues/6667/comments
### Describe the bug If you download SQuAD, it downloads the plain_text version, but the config still specifies "default", so if you enable offline mode the cache lookup uses the config_id "default" and fails with: ValueError: Couldn't find cache for squad for config 'default' Available configs in the cache: ['plain_text'] ### Steps to reproduce the bug 1. export HF_DATASETS_OFFLINE=0 2. load_dataset("squad") 3. export HF_DATASETS_OFFLINE=1 4. load_dataset("squad") ### Expected behavior We should change the config_name I guess? ### Environment info Linux, latest version of datasets
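Until the config naming is reconciled, one workaround sketch is to request the config name the cache actually holds, so the offline lookup matches; this assumes `plain_text` is the cached config, as the error message reports:

```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

from datasets import load_dataset

# Ask explicitly for "plain_text" instead of relying on the
# mismatched "default" config_id.
ds = load_dataset("squad", "plain_text")
print(ds)
```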
{ "avatar_url": "https://avatars.githubusercontent.com/u/22651617?v=4", "events_url": "https://api.github.com/users/kiddyboots216/events{/privacy}", "followers_url": "https://api.github.com/users/kiddyboots216/followers", "following_url": "https://api.github.com/users/kiddyboots216/following{/other_user}", "gists_url": "https://api.github.com/users/kiddyboots216/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kiddyboots216", "id": 22651617, "login": "kiddyboots216", "node_id": "MDQ6VXNlcjIyNjUxNjE3", "organizations_url": "https://api.github.com/users/kiddyboots216/orgs", "received_events_url": "https://api.github.com/users/kiddyboots216/received_events", "repos_url": "https://api.github.com/users/kiddyboots216/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kiddyboots216/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiddyboots216/subscriptions", "type": "User", "url": "https://api.github.com/users/kiddyboots216" }
https://api.github.com/repos/huggingface/datasets/issues/6667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6667/timeline
open
false
6,667
null
null
null
false
2,136,136,425
https://api.github.com/repos/huggingface/datasets/issues/6665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6665/events
[]
null
2024-03-01T16:02:46Z
[]
https://github.com/huggingface/datasets/pull/6665
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6665). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004968 / 0.011353 (-0.006385) | 0.003732 / 0.011008 (-0.007276) | 0.063672 / 0.038508 (0.025164) | 0.027066 / 0.023109 (0.003957) | 0.253306 / 0.275898 (-0.022592) | 0.283382 / 0.323480 (-0.040098) | 0.004217 / 0.007986 (-0.003768) | 0.002865 / 0.004328 (-0.001464) | 0.048672 / 0.004250 (0.044421) | 0.040740 / 0.037052 (0.003688) | 0.271848 / 0.258489 (0.013359) | 0.293162 / 0.293841 (-0.000679) | 0.027410 / 0.128546 (-0.101136) | 0.010605 / 0.075646 (-0.065042) | 0.210545 / 0.419271 (-0.208726) | 0.036085 / 0.043533 (-0.007447) | 0.259807 / 0.255139 (0.004668) | 0.274056 / 0.283200 (-0.009144) | 0.018812 / 0.141683 (-0.122871) | 1.116687 / 1.452155 (-0.335468) | 1.164276 / 1.492716 (-0.328440) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092874 / 0.018006 (0.074868) | 0.355897 / 0.000490 (0.355407) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018461 / 0.037411 (-0.018950) | 0.062061 / 0.014526 (0.047535) | 0.072353 / 0.176557 (-0.104203) | 0.119162 / 0.737135 (-0.617974) | 0.082974 / 0.296338 (-0.213364) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291631 / 0.215209 (0.076422) | 2.861495 / 2.077655 (0.783841) | 1.496753 / 1.504120 (-0.007367) | 1.371164 / 1.541195 (-0.170031) | 1.415473 / 1.468490 (-0.053018) | 0.566778 / 4.584777 (-4.017999) | 2.376209 / 3.745712 (-1.369503) | 2.812326 / 5.269862 (-2.457535) | 1.765640 / 4.565676 (-2.800037) | 0.063274 / 0.424275 (-0.361001) | 0.004933 / 0.007607 (-0.002674) | 0.342345 / 0.226044 (0.116301) | 3.407487 / 2.268929 (1.138558) | 1.856646 / 55.444624 (-53.587978) | 1.590284 / 6.876477 (-5.286193) | 1.610068 / 2.142072 (-0.532004) | 0.656007 / 4.805227 (-4.149220) | 0.118310 / 6.500664 (-6.382354) | 0.042596 / 0.075469 (-0.032873) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.991392 / 1.841788 (-0.850395) | 11.612397 / 8.074308 (3.538089) | 9.627836 / 10.191392 (-0.563556) | 0.130575 / 0.680424 (-0.549848) | 0.014152 / 0.534201 (-0.520049) | 0.289736 / 0.579283 (-0.289548) | 0.260041 / 0.434364 (-0.174323) | 0.339730 / 0.540337 (-0.200608) | 0.447529 / 1.386936 (-0.939407) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005315 / 0.011353 (-0.006038) | 0.003955 / 0.011008 (-0.007053) | 0.049618 / 0.038508 (0.011110) | 0.030404 / 0.023109 (0.007295) | 0.258727 / 0.275898 (-0.017171) | 0.282020 / 0.323480 (-0.041460) | 0.004356 / 0.007986 (-0.003629) | 0.002866 / 0.004328 (-0.001462) | 0.049122 / 0.004250 (0.044872) | 0.045534 / 0.037052 (0.008482) | 0.269560 / 0.258489 (0.011071) | 0.301225 / 0.293841 (0.007384) | 0.029786 / 0.128546 (-0.098761) | 0.010433 / 0.075646 (-0.065213) | 0.058222 / 0.419271 (-0.361049) | 0.052968 / 0.043533 (0.009435) | 0.256605 / 0.255139 (0.001467) | 0.279899 / 0.283200 (-0.003300) | 0.018233 / 0.141683 (-0.123450) | 1.164060 / 1.452155 (-0.288095) | 1.218049 / 1.492716 (-0.274667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093646 / 0.018006 (0.075639) | 0.288804 / 0.000490 (0.288314) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022193 / 0.037411 (-0.015219) | 0.075507 / 0.014526 (0.060981) | 0.086091 / 0.176557 (-0.090465) | 0.127433 / 0.737135 (-0.609703) | 0.087064 / 0.296338 (-0.209274) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292459 / 0.215209 (0.077250) | 2.842430 / 2.077655 (0.764776) | 1.505824 / 1.504120 (0.001704) | 1.377052 / 1.541195 (-0.164143) | 1.408757 / 1.468490 (-0.059733) | 0.571705 / 4.584777 (-4.013072) | 2.459798 / 3.745712 (-1.285914) | 2.714826 / 5.269862 (-2.555035) | 1.782064 / 4.565676 (-2.783613) | 0.063113 / 0.424275 (-0.361162) | 0.005099 / 0.007607 (-0.002509) | 0.343624 / 0.226044 (0.117579) | 3.415806 / 2.268929 (1.146878) | 1.853253 / 55.444624 (-53.591371) | 1.584392 / 6.876477 (-5.292084) | 1.720384 / 2.142072 (-0.421689) | 0.646637 / 4.805227 (-4.158590) | 0.118072 / 6.500664 (-6.382593) | 0.041362 / 0.075469 (-0.034107) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.020086 / 1.841788 (-0.821701) | 12.303980 / 8.074308 (4.229672) | 10.322869 / 10.191392 (0.131477) | 0.140959 / 0.680424 (-0.539465) | 0.015372 / 0.534201 (-0.518829) | 0.288552 / 0.579283 (-0.290731) | 0.278243 / 0.434364 (-0.156121) | 0.328399 / 0.540337 (-0.211939) | 0.433618 / 1.386936 (-0.953318) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9469092d88ff7bb4d3f7fe6c2de0109ca458b5da \"CML watermark\")\n" ]
Allow SplitDict setitem to replace existing SplitInfo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6665/reactions" }
PR_kwDODunzps5m9JgW
{ "diff_url": "https://github.com/huggingface/datasets/pull/6665.diff", "html_url": "https://github.com/huggingface/datasets/pull/6665", "merged_at": "2024-03-01T15:56:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/6665.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6665" }
2024-02-15T10:17:08Z
https://api.github.com/repos/huggingface/datasets/issues/6665/comments
Fix this code provided by @clefourrier ```python import datasets import os token = os.getenv("TOKEN") results = datasets.load_dataset("gaia-benchmark/results_public", "2023", token=token, download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD) results["test"] = datasets.Dataset.from_list([row for row in results["test"] if row["model"] != "StateFlow"]) results["test"].push_to_hub("gaia-benchmark/results_public", "2023", token=token, split="test") ``` ``` ValueError Traceback (most recent call last) Cell In[43], line 1 ----> 1 results["test"].push_to_hub("gaia-benchmark/results_public", "2023", token=token, split="test") File ~/miniconda3/envs/default310/lib/python3.10/site-packages/datasets/arrow_dataset.py:5498, in Dataset.push_to_hub(self, repo_id, config_name, split, private, token, branch, max_shard_size, num_shards, embed_external_files) 5496 repo_info.dataset_size = (repo_info.dataset_size or 0) + dataset_nbytes 5497 repo_info.size_in_bytes = repo_info.download_size + repo_info.dataset_size -> 5498 repo_info.splits[split] = SplitInfo( 5499 split, num_bytes=dataset_nbytes, num_examples=len(self), dataset_name=dataset_name 5500 ) 5501 info_to_dump = repo_info 5502 # create the metadata configs if it was uploaded with push_to_hub before metadata configs existed File ~/miniconda3/envs/default310/lib/python3.10/site-packages/datasets/splits.py:541, in SplitDict.__setitem__(self, key, value) 539 raise ValueError(f"Cannot add elem. (key mismatch: '{key}' != '{value.name}')") 540 if key in self: --> 541 raise ValueError(f"Split {key} already present") 542 super().__setitem__(key, value) ValueError: Split test already present ```
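Conceptually, the patch relaxes `SplitDict.__setitem__` so an existing `SplitInfo` can be replaced instead of raising; a simplified rendition (the real class in `datasets/splits.py` carries more bookkeeping):

```python
class SplitDict(dict):
    """Simplified stand-in for datasets.splits.SplitDict."""

    def __setitem__(self, key, value):
        # Keep the name-consistency check from the original code...
        if key != value.name:
            raise ValueError(f"Cannot add elem. (key mismatch: '{key}' != '{value.name}')")
        # ...but drop the "Split {key} already present" error, so
        # push_to_hub can overwrite the SplitInfo for an existing split.
        super().__setitem__(key, value)
```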
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6665/timeline
closed
false
6,665
null
2024-03-01T15:56:38Z
null
true
2,135,483,978
https://api.github.com/repos/huggingface/datasets/issues/6664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6664/events
[]
null
2024-02-16T14:02:39Z
[]
https://github.com/huggingface/datasets/pull/6664
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6664). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Hi! We can't revert this as the \"reverted\" implementation has quadratic time complexity. Instead, let's fix it:\r\n\r\nI agree, but it's the implementation we have had so far. Why don't we:\r\n1. Release a hotfix ASAP (since would be doing a revert, we know it works as before) so people can continue using this library fine since AFAIU right now mostly writing examples for people is broken.\r\n2. Then, focus on still applying the performance improvement and release again", "The fix is straightforward, so one patch release (after this PR is merged) is enough.\r\n\r\nBtw, let's also add a test to `tests/test_arrow_writer.py` to avoid this issue in the future.", "> Btw, let's also add a test to tests/test_arrow_writer.py to avoid this issue in the future.\r\n\r\nWould you mind adding such test, as you're more familiar with the codebase?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005083 / 0.011353 (-0.006270) | 0.003697 / 0.011008 (-0.007311) | 0.063302 / 0.038508 (0.024794) | 0.028866 / 0.023109 (0.005757) | 0.249987 / 0.275898 (-0.025911) | 0.270803 / 0.323480 (-0.052677) | 0.004096 / 0.007986 (-0.003890) | 0.002752 / 0.004328 (-0.001577) | 0.049156 / 0.004250 (0.044906) | 0.042936 / 0.037052 (0.005884) | 0.266907 / 0.258489 (0.008418) | 0.291462 / 0.293841 (-0.002379) | 0.027703 / 0.128546 (-0.100844) | 0.011006 / 0.075646 (-0.064641) | 0.206238 / 0.419271 (-0.213033) | 0.035446 / 0.043533 (-0.008087) | 0.248923 / 0.255139 (-0.006216) | 0.264141 / 0.283200 (-0.019058) | 0.017545 / 0.141683 (-0.124138) | 1.157145 / 1.452155 (-0.295009) | 1.199007 / 1.492716 (-0.293710) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092741 / 0.018006 (0.074734) | 0.299057 / 0.000490 (0.298567) | 0.000211 / 0.000200 (0.000011) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017936 / 0.037411 (-0.019475) | 0.061552 / 0.014526 (0.047026) | 0.072938 / 0.176557 (-0.103618) | 0.118192 / 0.737135 (-0.618944) | 0.074589 / 0.296338 (-0.221750) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287186 / 0.215209 (0.071977) | 2.795694 / 2.077655 (0.718039) | 1.474386 / 1.504120 (-0.029734) | 1.359065 / 1.541195 (-0.182130) | 1.375295 / 1.468490 (-0.093196) | 0.569448 / 4.584777 (-4.015329) | 2.374428 / 3.745712 (-1.371284) | 2.770198 / 5.269862 (-2.499663) | 1.716346 / 4.565676 (-2.849330) | 0.063173 / 0.424275 (-0.361102) | 0.005031 / 0.007607 (-0.002576) | 0.333197 / 0.226044 (0.107153) | 3.271739 / 2.268929 (1.002811) | 1.826406 / 55.444624 (-53.618218) | 1.554537 / 6.876477 (-5.321939) | 1.565927 / 2.142072 (-0.576146) | 0.649796 / 4.805227 (-4.155431) | 0.118371 / 6.500664 (-6.382293) | 0.042536 / 0.075469 (-0.032933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969882 / 1.841788 (-0.871906) | 11.638201 / 8.074308 (3.563893) | 9.759370 / 10.191392 (-0.432022) | 0.128069 / 0.680424 (-0.552355) | 0.013493 / 0.534201 (-0.520708) | 0.287324 / 0.579283 (-0.291959) | 0.267542 / 0.434364 (-0.166821) | 0.320072 / 0.540337 (-0.220265) | 0.421132 / 1.386936 (-0.965804) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005679 / 0.011353 (-0.005674) | 0.003746 / 0.011008 
(-0.007262) | 0.050149 / 0.038508 (0.011641) | 0.034382 / 0.023109 (0.011273) | 0.289802 / 0.275898 (0.013904) | 0.314993 / 0.323480 (-0.008487) | 0.004488 / 0.007986 (-0.003498) | 0.002786 / 0.004328 (-0.001542) | 0.047987 / 0.004250 (0.043737) | 0.046589 / 0.037052 (0.009537) | 0.301420 / 0.258489 (0.042931) | 0.335384 / 0.293841 (0.041543) | 0.050701 / 0.128546 (-0.077845) | 0.010987 / 0.075646 (-0.064660) | 0.058292 / 0.419271 (-0.360979) | 0.033973 / 0.043533 (-0.009560) | 0.288923 / 0.255139 (0.033784) | 0.306263 / 0.283200 (0.023064) | 0.018856 / 0.141683 (-0.122827) | 1.160721 / 1.452155 (-0.291433) | 1.208151 / 1.492716 (-0.284565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092633 / 0.018006 (0.074626) | 0.300353 / 0.000490 (0.299864) | 0.000219 / 0.000200 (0.000019) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022257 / 0.037411 (-0.015154) | 0.075417 / 0.014526 (0.060892) | 0.087289 / 0.176557 (-0.089268) | 0.125416 / 0.737135 (-0.611720) | 0.088751 / 0.296338 (-0.207588) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286477 / 0.215209 (0.071268) | 2.801931 / 2.077655 (0.724277) | 1.553034 / 1.504120 (0.048914) | 1.426152 / 1.541195 (-0.115043) | 1.443824 / 1.468490 (-0.024666) | 0.563298 / 4.584777 (-4.021479) | 2.428968 / 3.745712 (-1.316744) | 2.685964 / 5.269862 (-2.583897) | 1.752304 / 4.565676 (-2.813372) | 0.064174 / 0.424275 (-0.360101) | 0.005079 / 0.007607 (-0.002528) | 0.344899 / 0.226044 (0.118855) | 3.372528 / 2.268929 (1.103600) | 1.900723 / 55.444624 (-53.543901) | 1.623721 / 6.876477 (-5.252756) | 1.781009 / 2.142072 (-0.361064) | 0.655229 / 4.805227 (-4.149998) | 0.116050 / 6.500664 (-6.384614) | 0.040374 / 0.075469 (-0.035095) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004714 / 1.841788 (-0.837074) | 12.108179 / 8.074308 (4.033871) | 10.233447 / 10.191392 (0.042055) | 0.141438 / 0.680424 (-0.538986) | 0.015387 / 0.534201 (-0.518814) | 0.288068 / 0.579283 (-0.291216) | 0.277025 / 0.434364 (-0.157339) | 0.331714 / 0.540337 (-0.208623) | 0.424209 / 1.386936 (-0.962727) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bdebf1922663c30744efb8869c86b28f102b84dd \"CML watermark\")\n" ]
Revert the changes in `arrow_writer.py` from #6636
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6664/reactions" }
PR_kwDODunzps5m67g0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6664.diff", "html_url": "https://github.com/huggingface/datasets/pull/6664", "merged_at": "2024-02-16T02:31:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/6664.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6664" }
2024-02-15T01:47:33Z
https://api.github.com/repos/huggingface/datasets/issues/6664/comments
#6636 broke `write_examples_on_file` and `write_batch` in the `ArrowWriter` class. I'm undoing these changes. See #6663. Note that the current implementation doesn't keep the columns in the same order as the schema, so each column ends up being written with the wrong schema type.
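To make the failure mode concrete, here is a small illustrative sketch (plain pyarrow with made-up column names; this is not code from the PR) of why pairing batch columns with schema fields by position breaks once the dict order differs from the schema order:

```python
import pyarrow as pa

schema = pa.schema([("id", pa.int64()), ("text", pa.string())])
batch = {"text": ["a", "b"], "id": [0, 1]}  # dict order != schema order

# Positional pairing (the buggy behavior): the "text" values get paired
# with the "id" field, so pyarrow tries to build an int64 array from strings.
try:
    [pa.array(col, type=field.type) for col, field in zip(batch.values(), schema)]
except (pa.ArrowInvalid, pa.ArrowTypeError) as e:
    print("positional pairing fails:", e)

# Name-based pairing (the expected behavior) works regardless of dict order.
arrays = [pa.array(batch[field.name], type=field.type) for field in schema]
print(pa.Table.from_arrays(arrays, schema=schema))
```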
{ "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bryant1410", "id": 3905501, "login": "bryant1410", "node_id": "MDQ6VXNlcjM5MDU1MDE=", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "repos_url": "https://api.github.com/users/bryant1410/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "type": "User", "url": "https://api.github.com/users/bryant1410" }
https://api.github.com/repos/huggingface/datasets/issues/6664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6664/timeline
closed
false
6,664
null
2024-02-16T02:31:11Z
null
true
2,135,480,811
https://api.github.com/repos/huggingface/datasets/issues/6663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6663/events
[]
null
2024-02-16T09:25:00Z
[]
https://github.com/huggingface/datasets/issues/6663
CONTRIBUTOR
completed
null
null
[ "Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.", "> Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.\r\n\r\nI feel that'd be good, but it'd be great to release a hotfix ASAP (a revert is a fast thing to do) so people can continue using this library and then focus on still applying the improvement.", "Fixed by #6664 " ]
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6663/reactions" }
I_kwDODunzps5_SNnr
null
2024-02-15T01:43:27Z
https://api.github.com/repos/huggingface/datasets/issues/6663/comments
### Describe the bug

`write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636. The order of the columns relative to the schema is no longer preserved, so these functions fail unless the two orders happen to align.

### Steps to reproduce the bug

Try to call `write_batch` with anything that has many columns, and it's likely to break.

### Expected behavior

I expect these functions to work, instead of trying to cast each column to an incorrect type.

### Environment info

- `datasets` version: 2.17.0
- Platform: Linux-5.15.0-1040-aws-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
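A minimal reproduction sketch (assuming `datasets==2.17.0`, where the bug is present; the feature names and file path are illustrative, not from the report):

```python
# Repro sketch: write a batch whose key order differs from the schema order.
from datasets import Features, Value
from datasets.arrow_writer import ArrowWriter

features = Features({"id": Value("int64"), "text": Value("string")})
with ArrowWriter(path="repro.arrow", features=features) as writer:
    # On the affected version, the positional pairing of columns and schema
    # fields makes this cast the string column with the int64 type.
    writer.write_batch({"text": ["a", "b"], "id": [0, 1]})
    writer.finalize()
```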
{ "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bryant1410", "id": 3905501, "login": "bryant1410", "node_id": "MDQ6VXNlcjM5MDU1MDE=", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "repos_url": "https://api.github.com/users/bryant1410/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "type": "User", "url": "https://api.github.com/users/bryant1410" }
https://api.github.com/repos/huggingface/datasets/issues/6663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6663/timeline
closed
false
6,663
null
2024-02-16T09:25:00Z
null
false
2,132,425,812
https://api.github.com/repos/huggingface/datasets/issues/6662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6662/events
[]
null
2024-03-01T17:49:48Z
[]
https://github.com/huggingface/datasets/pull/6662
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6662). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005135 / 0.011353 (-0.006218) | 0.003666 / 0.011008 (-0.007342) | 0.062660 / 0.038508 (0.024152) | 0.028656 / 0.023109 (0.005546) | 0.249601 / 0.275898 (-0.026297) | 0.265745 / 0.323480 (-0.057735) | 0.002935 / 0.007986 (-0.005051) | 0.002606 / 0.004328 (-0.001723) | 0.048774 / 0.004250 (0.044523) | 0.043643 / 0.037052 (0.006591) | 0.263114 / 0.258489 (0.004625) | 0.284596 / 0.293841 (-0.009245) | 0.027818 / 0.128546 (-0.100728) | 0.010726 / 0.075646 (-0.064921) | 0.205900 / 0.419271 (-0.213371) | 0.035646 / 0.043533 (-0.007887) | 0.245599 / 0.255139 (-0.009540) | 0.267706 / 0.283200 (-0.015493) | 0.018441 / 0.141683 (-0.123242) | 1.143365 / 1.452155 (-0.308790) | 1.191823 / 1.492716 (-0.300893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089703 / 0.018006 (0.071696) | 0.298073 / 0.000490 (0.297583) | 0.000209 / 0.000200 (0.000009) | 0.000042 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018068 / 0.037411 (-0.019343) | 0.061416 / 0.014526 (0.046890) | 0.075989 / 0.176557 (-0.100567) | 0.120765 / 0.737135 (-0.616370) | 0.075476 / 0.296338 (-0.220863) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284043 / 0.215209 (0.068834) | 2.770282 / 2.077655 (0.692627) | 1.473040 / 1.504120 (-0.031080) | 1.349064 / 1.541195 (-0.192131) | 1.362783 / 1.468490 (-0.105708) | 0.560765 / 4.584777 (-4.024012) | 2.357731 / 3.745712 (-1.387981) | 2.745771 / 5.269862 (-2.524090) | 1.726764 / 4.565676 (-2.838913) | 0.061212 / 0.424275 (-0.363063) | 0.004902 / 0.007607 (-0.002705) | 0.336963 / 0.226044 (0.110919) | 3.324519 / 2.268929 (1.055591) | 1.825826 / 55.444624 (-53.618798) | 1.548811 / 6.876477 (-5.327666) | 1.570618 / 2.142072 (-0.571454) | 0.642411 / 4.805227 (-4.162816) | 0.116068 / 6.500664 (-6.384596) | 0.042433 / 0.075469 (-0.033036) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988402 / 1.841788 (-0.853386) | 11.509601 / 8.074308 (3.435293) | 9.555338 / 10.191392 (-0.636054) | 0.138728 / 0.680424 (-0.541696) | 0.014107 / 0.534201 (-0.520094) | 0.285465 / 0.579283 (-0.293818) | 0.263086 / 0.434364 (-0.171278) | 0.327469 / 0.540337 (-0.212869) | 0.444799 / 1.386936 (-0.942137) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005359 / 0.011353 (-0.005993) | 0.003605 / 0.011008 (-0.007403) | 0.049734 / 0.038508 (0.011226) | 0.029792 / 0.023109 (0.006683) | 0.276384 / 0.275898 (0.000486) | 0.297915 / 0.323480 (-0.025564) | 0.004949 / 0.007986 (-0.003036) | 0.002713 / 0.004328 (-0.001616) | 0.049499 / 0.004250 (0.045249) | 0.044969 / 0.037052 (0.007917) | 0.284558 / 0.258489 (0.026069) | 0.315170 / 0.293841 (0.021329) | 0.029457 / 0.128546 (-0.099089) | 0.010573 / 0.075646 (-0.065073) | 0.058191 / 0.419271 (-0.361080) | 0.051461 / 0.043533 (0.007928) | 0.270744 / 0.255139 (0.015605) | 0.291664 / 0.283200 (0.008465) | 0.018607 / 0.141683 (-0.123076) | 1.158799 / 1.452155 (-0.293355) | 1.210509 / 1.492716 (-0.282208) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.090277 / 0.018006 (0.072270) | 0.298748 / 0.000490 (0.298258) | 0.000228 / 0.000200 (0.000028) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021850 / 0.037411 (-0.015561) | 0.075433 / 0.014526 (0.060907) | 0.087171 / 0.176557 (-0.089386) | 0.125828 / 0.737135 (-0.611308) | 0.090343 / 0.296338 (-0.205996) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297267 / 0.215209 (0.082058) | 2.865234 / 2.077655 (0.787579) | 1.595024 / 1.504120 (0.090904) | 1.476100 / 1.541195 (-0.065094) | 1.494896 / 1.468490 (0.026406) | 0.569086 / 4.584777 (-4.015691) | 2.401976 / 3.745712 (-1.343736) | 2.676091 / 5.269862 (-2.593771) | 1.742087 / 4.565676 (-2.823590) | 0.065161 / 0.424275 (-0.359114) | 0.005006 / 0.007607 (-0.002602) | 0.342302 / 0.226044 (0.116257) | 3.450571 / 2.268929 (1.181643) | 1.928754 / 55.444624 (-53.515871) | 1.672823 / 6.876477 (-5.203653) | 1.798830 / 2.142072 (-0.343243) | 0.648730 / 4.805227 (-4.156498) | 0.116433 / 6.500664 (-6.384231) | 0.040683 / 0.075469 (-0.034786) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006158 / 1.841788 (-0.835630) | 12.200093 / 8.074308 (4.125785) | 10.180691 / 10.191392 (-0.010701) | 0.146620 / 0.680424 (-0.533804) | 0.015621 / 0.534201 (-0.518580) | 0.287956 / 0.579283 (-0.291327) | 0.277231 / 0.434364 (-0.157133) | 0.323815 / 0.540337 (-0.216522) | 0.429655 / 1.386936 (-0.957281) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#273e16f9a21d6eaba1fd40fbdf0c05e66642c5a7 \"CML watermark\")\n" ]
fix: show correct package name to install biopython
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6662/reactions" }
PR_kwDODunzps5mwgKP
{ "diff_url": "https://github.com/huggingface/datasets/pull/6662.diff", "html_url": "https://github.com/huggingface/datasets/pull/6662", "merged_at": "2024-03-01T17:43:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/6662.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6662" }
2024-02-13T14:15:04Z
https://api.github.com/repos/huggingface/datasets/issues/6662/comments
When you try to download a dataset that uses [biopython](https://github.com/biopython/biopython), like `load_dataset("InstaDeepAI/multi_species_genomes")`, you get the error:

```
>>> from datasets import load_dataset
>>> dataset = load_dataset("InstaDeepAI/multi_species_genomes")
/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py:1454: FutureWarning: The repository for InstaDeepAI/multi_species_genomes contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/InstaDeepAI/multi_species_genomes
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
  warnings.warn(
Downloading builder script: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.51k/7.51k [00:00<00:00, 7.67MB/s]
Downloading readme: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.2k/17.2k [00:00<00:00, 11.0MB/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 2548, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 2220, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1871, in dataset_module_factory
    raise e1 from None
  File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1844, in dataset_module_factory
    ).get_module()
  File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 1466, in get_module
    local_imports = _download_additional_modules(
  File "/home/j.vangoey/.pyenv/versions/multi_species_genomes/lib/python3.10/site-packages/datasets/load.py", line 346, in _download_additional_modules
    raise ImportError(
ImportError: To be able to use InstaDeepAI/multi_species_genomes, you need to install the following dependency: Bio.
Please install it using 'pip install Bio' for instance.
>>>
```

`Bio` comes from the `biopython` package, which is installed with `pip install biopython`, not with `pip install Bio` as suggested. This PR adds special logic to show the correct package name in the error message of `_download_additional_modules`, similarly to what is already done for `sklearn` / `scikit-learn`.

There are more packages where the importable module name differs from the PyPI package name, so this could be made more generic, like:

```python
# Mapping of importable module names to their PyPI package names
package_map = {
    "sklearn": "scikit-learn",
    "Bio": "biopython",
    "PIL": "Pillow",
    "bs4": "beautifulsoup4"
}

for module_name, pypi_name in package_map.items():
    if module_name in needs_to_be_installed.keys():
        needs_to_be_installed[module_name] = pypi_name
```
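A hedged sketch generalizing the mapping idea from the PR description; `missing_dependencies` is an illustrative helper, not the actual `_download_additional_modules` implementation:

```python
import importlib.util

# Importable module name -> PyPI package name (from the PR description)
PYPI_NAMES = {
    "sklearn": "scikit-learn",
    "Bio": "biopython",
    "PIL": "Pillow",
    "bs4": "beautifulsoup4",
}

def missing_dependencies(modules):
    """Return a pip-installable name for each module that cannot be imported."""
    return {
        module: PYPI_NAMES.get(module, module)
        for module in modules
        if importlib.util.find_spec(module) is None
    }

# e.g. {'Bio': 'biopython'} if biopython is missing but scikit-learn is present
print(missing_dependencies(["Bio", "sklearn", "os"]))
```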
{ "avatar_url": "https://avatars.githubusercontent.com/u/59344?v=4", "events_url": "https://api.github.com/users/BioGeek/events{/privacy}", "followers_url": "https://api.github.com/users/BioGeek/followers", "following_url": "https://api.github.com/users/BioGeek/following{/other_user}", "gists_url": "https://api.github.com/users/BioGeek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BioGeek", "id": 59344, "login": "BioGeek", "node_id": "MDQ6VXNlcjU5MzQ0", "organizations_url": "https://api.github.com/users/BioGeek/orgs", "received_events_url": "https://api.github.com/users/BioGeek/received_events", "repos_url": "https://api.github.com/users/BioGeek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BioGeek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BioGeek/subscriptions", "type": "User", "url": "https://api.github.com/users/BioGeek" }
https://api.github.com/repos/huggingface/datasets/issues/6662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6662/timeline
closed
false
6,662
null
2024-03-01T17:43:39Z
null
true
2,132,296,267
https://api.github.com/repos/huggingface/datasets/issues/6661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6661/events
[]
null
2024-02-25T16:37:54Z
[]
https://github.com/huggingface/datasets/issues/6661
NONE
completed
null
null
[ "Hi! This can happen if an incompatible `pyarrow` version (`pyarrow<12.0.0`) has been imported before the `datasets` installation and the Colab session hasn't been restarted afterward. To avoid the error, go to \"Runtime -> Restart session\" after `!pip install -U datasets` and before `import datasets`, or insert the `import os; os.kill(os.getpid(), 9)` cell between `!pip install -U datasets` and `import datasets` to do the same programmatically.", "One possible cause might be the one pointed out by @mariosasko above, and you get the following warning on Colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly installed versions.\r\n```\r\n\r\nOn the other hand, if the old version of `pyarrow` is not previously imported (before the installation of `datasets`), the reported issue here is not reproducible: `datasets` can be installed, imported and used on Colab.", "Duplicate of:\r\n- #5923", "Google Colab now pre-installs PyArrow 14.0.2, making this issue unlikely to happen. So, I'm unpinning it." ]
Import error on Google Colab
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6661/reactions" }
I_kwDODunzps5_GEJL
null
2024-02-13T13:12:40Z
https://api.github.com/repos/huggingface/datasets/issues/6661/comments
### Describe the bug

The library cannot be imported on Google Colab; the import throws the following error:

```
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
```

### Steps to reproduce the bug

1. `! pip install -U datasets`
2. `import datasets`

### Expected behavior

It should be possible to use the library.

### Environment info

- `datasets` version: 2.17.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0
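The workaround described in the maintainers' comments above can be sketched as the following Colab cell sequence (the version print at the end is illustrative):

```python
# Cell 1: install the up-to-date datasets (this pulls in a compatible pyarrow)
!pip install -U datasets

# Cell 2: terminate the runtime so the previously imported pyarrow is dropped
# from memory; Colab restarts the session automatically
import os
os.kill(os.getpid(), 9)

# Cell 3: after the restart, the import succeeds
import datasets
print(datasets.__version__)
```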
{ "avatar_url": "https://avatars.githubusercontent.com/u/16103566?v=4", "events_url": "https://api.github.com/users/kithogue/events{/privacy}", "followers_url": "https://api.github.com/users/kithogue/followers", "following_url": "https://api.github.com/users/kithogue/following{/other_user}", "gists_url": "https://api.github.com/users/kithogue/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kithogue", "id": 16103566, "login": "kithogue", "node_id": "MDQ6VXNlcjE2MTAzNTY2", "organizations_url": "https://api.github.com/users/kithogue/orgs", "received_events_url": "https://api.github.com/users/kithogue/received_events", "repos_url": "https://api.github.com/users/kithogue/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kithogue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kithogue/subscriptions", "type": "User", "url": "https://api.github.com/users/kithogue" }
https://api.github.com/repos/huggingface/datasets/issues/6661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6661/timeline
closed
false
6,661
null
2024-02-14T08:04:47Z
null
false
2,131,977,011
https://api.github.com/repos/huggingface/datasets/issues/6660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6660/events
[]
null
2024-03-01T19:01:57Z
[]
https://github.com/huggingface/datasets/pull/6660
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6660). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004995 / 0.011353 (-0.006357) | 0.003230 / 0.011008 (-0.007779) | 0.062836 / 0.038508 (0.024328) | 0.026684 / 0.023109 (0.003575) | 0.249286 / 0.275898 (-0.026612) | 0.272936 / 0.323480 (-0.050544) | 0.003952 / 0.007986 (-0.004033) | 0.002708 / 0.004328 (-0.001620) | 0.055346 / 0.004250 (0.051095) | 0.040023 / 0.037052 (0.002971) | 0.263350 / 0.258489 (0.004860) | 0.294727 / 0.293841 (0.000886) | 0.027280 / 0.128546 (-0.101266) | 0.010273 / 0.075646 (-0.065373) | 0.206035 / 0.419271 (-0.213236) | 0.035715 / 0.043533 (-0.007818) | 0.255474 / 0.255139 (0.000335) | 0.273960 / 0.283200 (-0.009240) | 0.018563 / 0.141683 (-0.123120) | 1.140013 / 1.452155 (-0.312142) | 1.188655 / 1.492716 (-0.304062) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091895 / 0.018006 (0.073888) | 0.284621 / 0.000490 (0.284131) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018610 / 0.037411 (-0.018801) | 0.061554 / 0.014526 (0.047028) | 0.072454 / 0.176557 (-0.104103) | 0.120283 / 0.737135 (-0.616853) | 0.073744 / 0.296338 (-0.222595) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288850 / 0.215209 (0.073641) | 2.836761 / 2.077655 (0.759107) | 1.533407 / 1.504120 (0.029287) | 1.409394 / 1.541195 (-0.131801) | 1.421667 / 1.468490 (-0.046823) | 0.566183 / 4.584777 (-4.018594) | 2.390670 / 3.745712 (-1.355043) | 2.732031 / 5.269862 (-2.537831) | 1.730886 / 4.565676 (-2.834791) | 0.064280 / 0.424275 (-0.359995) | 0.004959 / 0.007607 (-0.002648) | 0.342664 / 0.226044 (0.116619) | 3.398969 / 2.268929 (1.130040) | 1.887354 / 55.444624 (-53.557270) | 1.572955 / 6.876477 (-5.303522) | 1.596179 / 2.142072 (-0.545894) | 0.645844 / 4.805227 (-4.159383) | 0.118050 / 6.500664 (-6.382614) | 0.042158 / 0.075469 (-0.033311) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.959170 / 1.841788 (-0.882617) | 11.276491 / 8.074308 (3.202183) | 9.471198 / 10.191392 (-0.720194) | 0.128346 / 0.680424 (-0.552078) | 0.013851 / 0.534201 (-0.520350) | 0.286125 / 0.579283 (-0.293158) | 0.266915 / 0.434364 (-0.167449) | 0.332811 / 0.540337 (-0.207526) | 0.444780 / 1.386936 (-0.942156) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005665 / 0.011353 (-0.005687) | 0.003267 / 0.011008 (-0.007741) | 0.050238 / 0.038508 (0.011730) | 0.032882 / 0.023109 (0.009773) | 0.269320 / 0.275898 (-0.006578) | 0.293140 / 0.323480 (-0.030340) | 0.004127 / 0.007986 (-0.003858) | 0.002728 / 0.004328 (-0.001601) | 0.049360 / 0.004250 (0.045109) | 0.043764 / 0.037052 (0.006712) | 0.291211 / 0.258489 (0.032722) | 0.319745 / 0.293841 (0.025904) | 0.029371 / 0.128546 (-0.099175) | 0.010212 / 0.075646 (-0.065434) | 0.059064 / 0.419271 (-0.360207) | 0.051148 / 0.043533 (0.007615) | 0.276698 / 0.255139 (0.021559) | 0.292329 / 0.283200 (0.009129) | 0.018349 / 0.141683 (-0.123334) | 1.150816 / 1.452155 (-0.301338) | 1.184292 / 1.492716 (-0.308425) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091646 / 0.018006 (0.073640) | 0.301737 / 0.000490 (0.301247) | 0.000214 / 0.000200 (0.000014) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021529 / 0.037411 (-0.015883) | 0.075596 / 0.014526 (0.061070) | 0.087912 / 0.176557 (-0.088645) | 0.125240 / 0.737135 (-0.611895) | 0.088035 / 0.296338 (-0.208303) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305097 / 0.215209 (0.089888) | 2.979612 / 2.077655 (0.901957) | 1.647009 / 1.504120 (0.142889) | 1.520251 / 1.541195 (-0.020944) | 1.513361 / 1.468490 (0.044870) | 0.571733 / 4.584777 (-4.013044) | 2.415587 / 3.745712 (-1.330125) | 2.615983 / 5.269862 (-2.653879) | 1.732637 / 4.565676 (-2.833039) | 0.062830 / 0.424275 (-0.361445) | 0.004972 / 0.007607 (-0.002635) | 0.348559 / 0.226044 (0.122515) | 3.450567 / 2.268929 (1.181639) | 1.970743 / 55.444624 (-53.473882) | 1.702232 / 6.876477 (-5.174245) | 1.799592 / 2.142072 (-0.342480) | 0.649477 / 4.805227 (-4.155751) | 0.115940 / 6.500664 (-6.384724) | 0.040364 / 0.075469 (-0.035105) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.000014 / 1.841788 (-0.841773) | 11.937886 / 8.074308 (3.863578) | 10.169478 / 10.191392 (-0.021914) | 0.153359 / 0.680424 (-0.527064) | 0.015205 / 0.534201 (-0.518996) | 0.287812 / 0.579283 (-0.291471) | 0.278688 / 0.434364 (-0.155676) | 0.322831 / 0.540337 (-0.217507) | 0.425631 / 1.386936 (-0.961305) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6e176efbed29374e7c2cd33da64aeeae3c11ca0f \"CML watermark\")\n" ]
Automatic Conversion for uint16/uint32 to Compatible PyTorch Dtypes
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/6660/reactions" }
PR_kwDODunzps5mu9wU
{ "diff_url": "https://github.com/huggingface/datasets/pull/6660.diff", "html_url": "https://github.com/huggingface/datasets/pull/6660", "merged_at": "2024-03-01T18:52:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/6660.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6660" }
2024-02-13T10:24:33Z
https://api.github.com/repos/huggingface/datasets/issues/6660/comments
This PR addresses an issue encountered when utilizing uint16 or uint32 datatypes with datasets, followed by attempting to convert these datasets into PyTorch-compatible formats. Currently, doing so results in a TypeError due to incompatible datatype conversion, as illustrated by the following example:

```python
from datasets import Dataset, Sequence, Value, Features

def gen():
    for i in range(100):
        yield {'seq': list(range(i, i + 20))}

ds = Dataset.from_generator(gen, features=Features({'seq': Sequence(feature=Value(dtype='uint16'), length=-1)}))
ds.set_format('torch')
print(ds[0])
```

This code snippet triggers the following error due to the inability to convert numpy.uint16 arrays to a PyTorch-supported format:

```
TypeError: can't convert np.ndarray of type numpy.uint16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.
```

This PR introduces an automatic mechanism to convert np.uint16 and np.uint32 datatypes to np.int64 for seamless compatibility with PyTorch formats, simplifying workflows and improving developer experience by eliminating the need for manual conversion handling.
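For `datasets` versions that predate this fix, a workaround sketch is to cast the column to a torch-supported dtype before setting the format (the column name mirrors the example above):

```python
from datasets import Dataset, Features, Sequence, Value

def gen():
    for i in range(100):
        yield {"seq": list(range(i, i + 20))}

ds = Dataset.from_generator(gen, features=Features({"seq": Sequence(Value("uint16"))}))
# Cast uint16 -> int64 (a torch-supported dtype) before formatting:
ds = ds.cast_column("seq", Sequence(Value("int64")))
ds.set_format("torch")
print(ds[0]["seq"].dtype)  # torch.int64
```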
{ "avatar_url": "https://avatars.githubusercontent.com/u/23399590?v=4", "events_url": "https://api.github.com/users/mohalisad/events{/privacy}", "followers_url": "https://api.github.com/users/mohalisad/followers", "following_url": "https://api.github.com/users/mohalisad/following{/other_user}", "gists_url": "https://api.github.com/users/mohalisad/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mohalisad", "id": 23399590, "login": "mohalisad", "node_id": "MDQ6VXNlcjIzMzk5NTkw", "organizations_url": "https://api.github.com/users/mohalisad/orgs", "received_events_url": "https://api.github.com/users/mohalisad/received_events", "repos_url": "https://api.github.com/users/mohalisad/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mohalisad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mohalisad/subscriptions", "type": "User", "url": "https://api.github.com/users/mohalisad" }
https://api.github.com/repos/huggingface/datasets/issues/6660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6660/timeline
closed
false
6,660
null
2024-03-01T18:52:37Z
null
true
2,129,229,810
https://api.github.com/repos/huggingface/datasets/issues/6659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6659/events
[]
null
2024-03-01T17:51:50Z
[]
https://github.com/huggingface/datasets/pull/6659
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6659). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Can someone check this out?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005008 / 0.011353 (-0.006345) | 0.003267 / 0.011008 (-0.007741) | 0.064140 / 0.038508 (0.025632) | 0.027419 / 0.023109 (0.004309) | 0.246692 / 0.275898 (-0.029206) | 0.271303 / 0.323480 (-0.052177) | 0.004127 / 0.007986 (-0.003859) | 0.002698 / 0.004328 (-0.001631) | 0.050415 / 0.004250 (0.046165) | 0.040323 / 0.037052 (0.003271) | 0.265738 / 0.258489 (0.007249) | 0.291556 / 0.293841 (-0.002285) | 0.027924 / 0.128546 (-0.100622) | 0.010206 / 0.075646 (-0.065441) | 0.207106 / 0.419271 (-0.212165) | 0.036087 / 0.043533 (-0.007446) | 0.250412 / 0.255139 (-0.004727) | 0.269014 / 0.283200 (-0.014186) | 0.018102 / 0.141683 (-0.123581) | 1.135137 / 1.452155 (-0.317018) | 1.177718 / 1.492716 (-0.314998) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095557 / 0.018006 (0.077550) | 0.306235 / 0.000490 (0.305745) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018217 / 0.037411 (-0.019194) | 0.060993 / 0.014526 (0.046467) | 0.072748 / 0.176557 (-0.103808) | 0.119357 / 0.737135 (-0.617778) | 0.073719 / 0.296338 (-0.222619) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295924 / 0.215209 (0.080715) | 2.901071 / 2.077655 (0.823417) | 1.497316 / 1.504120 (-0.006804) | 1.371232 / 1.541195 (-0.169962) | 1.395643 / 1.468490 (-0.072847) | 0.577548 / 4.584777 (-4.007229) | 2.383813 / 3.745712 (-1.361899) | 2.764451 / 5.269862 (-2.505411) | 1.733074 / 4.565676 (-2.832602) | 0.063730 / 0.424275 (-0.360545) | 0.004933 / 0.007607 (-0.002674) | 0.347135 / 0.226044 (0.121090) | 3.390814 / 2.268929 (1.121885) | 1.849454 / 55.444624 (-53.595170) | 1.561801 / 6.876477 (-5.314675) | 1.587818 / 2.142072 (-0.554254) | 0.652061 / 4.805227 (-4.153166) | 0.117195 / 6.500664 (-6.383469) | 0.041922 / 0.075469 (-0.033548) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.949050 / 1.841788 (-0.892738) | 11.353664 / 8.074308 (3.279355) | 9.261581 / 10.191392 (-0.929811) | 0.140374 / 0.680424 (-0.540050) | 0.014254 / 0.534201 (-0.519946) | 0.288124 / 0.579283 (-0.291159) | 0.262888 / 0.434364 (-0.171476) | 0.330774 / 0.540337 (-0.209564) | 0.444777 / 1.386936 (-0.942159) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005162 / 0.011353 (-0.006191) | 0.003418 / 0.011008 (-0.007591) | 0.049764 / 0.038508 (0.011256) | 0.029336 / 0.023109 (0.006226) | 0.278570 / 0.275898 (0.002672) | 0.300676 / 0.323480 (-0.022804) | 0.004292 / 0.007986 (-0.003694) | 0.002745 / 0.004328 (-0.001584) | 0.049194 / 0.004250 (0.044943) | 0.044036 / 0.037052 (0.006984) | 0.299258 / 0.258489 (0.040769) | 0.324451 / 0.293841 (0.030610) | 0.029777 / 0.128546 (-0.098769) | 0.010426 / 0.075646 (-0.065221) | 0.057267 / 0.419271 (-0.362004) | 0.051276 / 0.043533 (0.007743) | 0.278012 / 0.255139 (0.022873) | 0.297099 / 0.283200 (0.013899) | 0.018340 / 0.141683 (-0.123343) | 1.179255 / 1.452155 (-0.272899) | 1.231536 / 1.492716 (-0.261180) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092546 / 0.018006 (0.074540) | 0.299959 / 0.000490 (0.299469) | 0.000220 / 0.000200 (0.000020) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021657 / 0.037411 (-0.015755) | 0.075440 / 0.014526 (0.060914) | 0.086246 / 0.176557 (-0.090310) | 0.126511 / 0.737135 (-0.610624) | 0.091303 / 0.296338 (-0.205036) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294775 / 0.215209 (0.079566) | 2.868973 / 2.077655 (0.791319) | 1.666971 / 1.504120 (0.162851) | 1.545680 / 1.541195 (0.004486) | 1.559983 / 1.468490 (0.091493) | 0.572191 / 4.584777 (-4.012586) | 2.429317 / 3.745712 (-1.316395) | 2.673334 / 5.269862 (-2.596527) | 1.758114 / 4.565676 (-2.807563) | 0.063766 / 0.424275 (-0.360509) | 0.005070 / 0.007607 (-0.002537) | 0.345488 / 0.226044 (0.119443) | 3.464525 / 2.268929 (1.195596) | 1.975717 / 55.444624 (-53.468908) | 1.686671 / 6.876477 (-5.189806) | 1.825434 / 2.142072 (-0.316638) | 0.655853 / 4.805227 (-4.149374) | 0.116372 / 6.500664 (-6.384292) | 0.040647 / 0.075469 (-0.034822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.014080 / 1.841788 (-0.827707) | 12.038496 / 8.074308 (3.964188) | 10.354536 / 10.191392 (0.163144) | 0.130285 / 0.680424 (-0.550139) | 0.015514 / 0.534201 (-0.518687) | 0.284743 / 0.579283 (-0.294540) | 0.280275 / 0.434364 (-0.154088) | 0.321175 / 0.540337 (-0.219162) | 0.425840 / 1.386936 (-0.961096) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4bb6c6d46a171c4fa1b65167cb81998e2f863892 \"CML watermark\")\n" ]
Change default compression argument for JsonDatasetWriter
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6659/reactions" }
PR_kwDODunzps5mlmmo
{ "diff_url": "https://github.com/huggingface/datasets/pull/6659.diff", "html_url": "https://github.com/huggingface/datasets/pull/6659", "merged_at": "2024-03-01T17:44:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/6659.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6659" }
2024-02-11T23:49:07Z
https://api.github.com/repos/huggingface/datasets/issues/6659/comments
Change the default compression type from `None` to `"infer"`, to align with pandas' defaults.

The documentation asks the user to supply `to_json_kwargs` with arguments suitable for pandas' `to_json` method. At the same time, while pandas by default uses ["infer"](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_json.html) for compression, `datasets` enforces `None` as the default. This likely confuses users, as they expect the same behaviour: if they name their output file "dataset.jsonl.zst", they expect the compression to be inferred as "zstd" and the file to be compressed before writing.

Moreover, while it is probably outside the scope of this pull request, the `compression` argument needs to be capable of taking a `dict` as input (along with `str`), as it does in pandas, in order to allow the user to specify compression parameters. The current implementation will likely fail with `NotImplementedError`, as it expects either `None` or a `str` specifying the compression algorithm.
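A short sketch of the behaviour this change is meant to enable (assuming the `zstandard` package is installed so fsspec can handle `.zst` files; the file names are illustrative):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

# With compression="infer" as the default, the ".zst" suffix selects zstd:
ds.to_json("dataset.jsonl.zst")

# The explicit form should be equivalent:
ds.to_json("dataset.jsonl.zst", compression="zstd")
```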
{ "avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4", "events_url": "https://api.github.com/users/Rexhaif/events{/privacy}", "followers_url": "https://api.github.com/users/Rexhaif/followers", "following_url": "https://api.github.com/users/Rexhaif/following{/other_user}", "gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rexhaif", "id": 5154447, "login": "Rexhaif", "node_id": "MDQ6VXNlcjUxNTQ0NDc=", "organizations_url": "https://api.github.com/users/Rexhaif/orgs", "received_events_url": "https://api.github.com/users/Rexhaif/received_events", "repos_url": "https://api.github.com/users/Rexhaif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions", "type": "User", "url": "https://api.github.com/users/Rexhaif" }
https://api.github.com/repos/huggingface/datasets/issues/6659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6659/timeline
closed
false
6,659
null
2024-03-01T17:44:55Z
null
true
2,129,158,371
https://api.github.com/repos/huggingface/datasets/issues/6658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6658/events
[]
null
2024-07-25T09:17:31Z
[]
https://github.com/huggingface/datasets/pull/6658
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6658). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "would be nice to have this feature in the new dataset release!", "Before finalising this this I'd like to make sure this philosophy makes sense for other libs like `accelerate` for example.\r\n\r\ncc @muellerzr I'd love your feedback on this one\r\ncc @LysandreJik also if you think other people should take a look", "> One design question though: what's the logic behind self._state_dict rather than having it all be state_dict?\r\n\r\nThe `_state_dict` is the internal object that is updated in-place while you iterate on the dataset.\r\n\r\nWe need to copy it every time the user accesses it.\r\n\r\nOtherwise we would get\r\n```python\r\nstate_dict = ds.state_dict()\r\nfor x in ds:\r\n assert ds.state_dict() == state_dict # and actually `assert ds.state_dict() is state_dict`\r\n```\r\n\r\nThe state is updated in-place since it's made of dictionaries that are shared with the steps in the IterableDataset pipeline.", "What do you think of making it a full property with a docstring explicitly stating users shouldn’t call/modify it directly?\r\n\r\nI can imagine some exploratory users getting curious", "I don't think users read docstrings of properties that often. What about explaining the logic in the `.state_dict()` docstring ? This also feels aligned with the way `.state_dict()` and `.load_state_dict()` works in pytorch (you should use load_state_dict to load a modified copy of the state dict)", "Sure, I can agree with that!", "Just a small note mentioning returns a copy of the state dict should be enough imo", "looking forward as well for this PR to be merge", "> I don't think users read docstrings of properties that often. What about explaining the logic in the `.state_dict()` docstring ? This also feels aligned with the way `.state_dict()` and `.load_state_dict()` works in pytorch (you should use load_state_dict to load a modified copy of the state dict)\r\n\r\nHi, I'm experimenting with LLM pretraining using your code. I found that the time of resuming an iterable dataset can be reduced to 5% (my streaming process includes tokenization), but I'm not sure if I'm using it correctly. Could you help me check it? Thanks.\r\n\r\n```\r\nclass CustomTrainer(Trainer):\r\n def _save_rng_state(self, output_dir):\r\n super()._save_rng_state(output_dir)\r\n if self.args.should_save:\r\n with open(os.path.join(output_dir, f'iterable_data_state_dict.json'), 'w', encoding='utf-8') as fo:\r\n json.dump(self.train_dataset.state_dict(), fo, ensure_ascii=False)\r\n```\r\n\r\n```\r\n dataset = <A IterableDataset constructed by (interleave, map(tokenization))>\r\n lask_ckpt_iterable_data_state_dict_file_path = os.path.join(training_args.resume_from_checkpoint, f'iterable_data_state_dict.json')\r\n if os.path.exists(lask_ckpt_iterable_data_state_dict_file_path) and finetuning_args.load_iteratable_state_dict:\r\n if not training_args.ignore_data_skip:\r\n raise ValueError(f'Found `iterable_data_state_dict_file_path`: `{lask_ckpt_iterable_data_state_dict_file_path}`. 
Please set `ignore_data_skip`=True to skip tokenization.')\r\n with open(lask_ckpt_iterable_data_state_dict_file_path) as f:\r\n lask_ckpt_iterable_data_state_dict = json.load(f)\r\n dataset.load_state_dict(lask_ckpt_iterable_data_state_dict)\r\n logger.info(f'Loading `iterable_data_state_dict` from {lask_ckpt_iterable_data_state_dict_file_path}')\r\n```\r\n", "it sounds good to me :)", "@lhoestq Hi, if I set `prefetch`, does this dataset work well?", "It does work well if you prefetch and then resume from a state, but you might lose the samples that were in the prefetch buffer of the DataLoader (which could be acceptable in some circumstances).\r\n\r\nFortunately we're about to ship an integration with the new StatefulDataLoader from torchdata which can help on this matter :)", "yeah, what I meant is that prefetch might drop a few data entries. really looking forward to the new StatefulDataLoader. :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005788 / 0.011353 (-0.005564) | 0.004036 / 0.011008 (-0.006972) | 0.064720 / 0.038508 (0.026212) | 0.034990 / 0.023109 (0.011881) | 0.245488 / 0.275898 (-0.030410) | 0.272596 / 0.323480 (-0.050884) | 0.003170 / 0.007986 (-0.004815) | 0.002867 / 0.004328 (-0.001461) | 0.049961 / 0.004250 (0.045711) | 0.050951 / 0.037052 (0.013899) | 0.257757 / 0.258489 (-0.000732) | 0.292957 / 0.293841 (-0.000884) | 0.027739 / 0.128546 (-0.100807) | 0.010942 / 0.075646 (-0.064705) | 0.205153 / 0.419271 (-0.214118) | 0.037892 / 0.043533 (-0.005641) | 0.247536 / 0.255139 (-0.007603) | 0.267239 / 0.283200 (-0.015960) | 0.021490 / 0.141683 (-0.120193) | 1.107306 / 1.452155 (-0.344848) | 1.144675 / 1.492716 (-0.348041) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.103212 / 0.018006 (0.085205) | 0.315174 / 0.000490 (0.314684) | 0.000229 / 0.000200 (0.000029) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019771 / 0.037411 (-0.017641) | 0.064033 / 0.014526 (0.049507) | 0.076751 / 0.176557 (-0.099805) | 0.122615 / 0.737135 
(-0.614521) | 0.078490 / 0.296338 (-0.217848) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286236 / 0.215209 (0.071027) | 2.841469 / 2.077655 (0.763814) | 1.514079 / 1.504120 (0.009959) | 1.393792 / 1.541195 (-0.147403) | 1.432741 / 1.468490 (-0.035749) | 0.571003 / 4.584777 (-4.013774) | 2.369031 / 3.745712 (-1.376681) | 2.825246 / 5.269862 (-2.444616) | 1.858524 / 4.565676 (-2.707153) | 0.065366 / 0.424275 (-0.358909) | 0.005107 / 0.007607 (-0.002500) | 0.341010 / 0.226044 (0.114965) | 3.443894 / 2.268929 (1.174966) | 1.879192 / 55.444624 (-53.565433) | 1.603046 / 6.876477 (-5.273431) | 1.807639 / 2.142072 (-0.334433) | 0.646726 / 4.805227 (-4.158502) | 0.119409 / 6.500664 (-6.381255) | 0.044564 / 0.075469 (-0.030905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971026 / 1.841788 (-0.870762) | 12.593884 / 8.074308 (4.519576) | 10.305243 / 10.191392 (0.113851) | 0.132018 / 0.680424 (-0.548406) | 0.014387 / 0.534201 (-0.519814) | 0.288597 / 0.579283 (-0.290686) | 0.267373 / 0.434364 (-0.166991) | 0.325626 / 0.540337 (-0.214711) | 0.488808 / 1.386936 (-0.898128) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005991 / 0.011353 (-0.005362) | 0.004028 / 0.011008 (-0.006980) | 0.051951 / 0.038508 (0.013443) | 0.036870 / 0.023109 (0.013761) | 0.263777 / 0.275898 (-0.012122) | 0.290914 / 0.323480 (-0.032566) | 0.004594 / 0.007986 (-0.003392) | 
0.002971 / 0.004328 (-0.001357) | 0.049699 / 0.004250 (0.045449) | 0.044939 / 0.037052 (0.007887) | 0.275055 / 0.258489 (0.016566) | 0.316244 / 0.293841 (0.022403) | 0.030501 / 0.128546 (-0.098045) | 0.011197 / 0.075646 (-0.064449) | 0.058718 / 0.419271 (-0.360554) | 0.034926 / 0.043533 (-0.008607) | 0.259172 / 0.255139 (0.004033) | 0.280127 / 0.283200 (-0.003072) | 0.019775 / 0.141683 (-0.121908) | 1.169468 / 1.452155 (-0.282687) | 1.178098 / 1.492716 (-0.314619) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.101633 / 0.018006 (0.083626) | 0.314684 / 0.000490 (0.314194) | 0.000224 / 0.000200 (0.000024) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024071 / 0.037411 (-0.013341) | 0.079894 / 0.014526 (0.065368) | 0.090915 / 0.176557 (-0.085642) | 0.132397 / 0.737135 (-0.604738) | 0.091919 / 0.296338 (-0.204419) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296237 / 0.215209 (0.081028) | 2.891752 / 2.077655 (0.814097) | 1.551937 / 1.504120 (0.047817) | 1.414179 / 1.541195 (-0.127016) | 1.450192 / 1.468490 (-0.018298) | 0.556272 / 4.584777 (-4.028504) | 0.952374 / 3.745712 (-2.793339) | 2.709450 / 5.269862 (-2.560411) | 1.771251 / 4.565676 (-2.794426) | 0.061873 / 0.424275 (-0.362402) | 0.005058 / 0.007607 (-0.002549) | 0.344790 / 0.226044 (0.118746) | 3.398982 / 2.268929 (1.130053) | 1.905832 / 55.444624 (-53.538792) | 1.632357 / 6.876477 (-5.244120) | 1.822913 / 2.142072 (-0.319160) | 0.643426 / 4.805227 (-4.161802) | 0.117321 / 6.500664 (-6.383343) | 0.042107 / 0.075469 (-0.033363) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974921 / 1.841788 (-0.866867) | 12.497801 / 8.074308 (4.423493) | 11.216174 / 10.191392 (1.024782) | 0.135288 / 0.680424 (-0.545136) | 0.016731 / 0.534201 (-0.517470) | 0.287987 / 0.579283 (-0.291296) | 0.130246 / 0.434364 (-0.304117) | 0.323282 / 0.540337 (-0.217055) | 0.414595 / 1.386936 (-0.972341) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#43fd659eacab37b50abfe7f1b4efe2564055990c \"CML watermark\")\n", "@lhoestq Hello, I'm wondering if there are any solutions to work with shuffle now. I've noticed the caveats in docs, \r\n> examples from shuffle buffers are lost when resuming and the buffers are refilled with new data ", "Hi ! 
I haven't experimented with implementing state_dict for the shuffle buffer. I'm not sure it is a good idea to add this, given that a shuffle buffer can be quite big and poses serialization challenges.\r\n\r\nIt shouldn't be difficult to experiment with a simple implementation in `BufferShuffledExamplesIterable` though", "@lhoestq thank you for your quick response! I'll try it :}", "@lhoestq Hi, I just revised `BufferShuffledExamplesIterable` and it works\r\n```py\r\nfrom copy import deepcopy\r\n\r\nimport datasets\r\nimport numpy as np\r\n\r\n\r\nclass BufferShuffledExamplesIterable(datasets.iterable_dataset.BufferShuffledExamplesIterable):\r\n\r\n    def __init__(self, *args, **kwargs):\r\n        super().__init__(*args, **kwargs)\r\n\r\n    def _init_state_dict(self) -> dict:\r\n        self._state_dict = self.ex_iterable._init_state_dict()\r\n        self._state_dict['mem_buffer'] = ([],)\r\n        self._state_dict['global_example_index'] = 0\r\n        return self._state_dict\r\n\r\n    def __iter__(self):\r\n        buffer_size = self.buffer_size\r\n        rng = deepcopy(self.generator)\r\n        indices_iterator = self._iter_random_indices(rng, buffer_size)\r\n        # this is the shuffle buffer that we keep in memory\r\n        mem_buffer = self._state_dict['mem_buffer'][0]\r\n        global_example_index_start = self._state_dict["global_example_index"] if self._state_dict else 0\r\n        # skip the random indices that were already consumed before the checkpoint\r\n        for i in range(global_example_index_start):\r\n            _ = next(indices_iterator)\r\n        for x in self.ex_iterable:\r\n            if len(mem_buffer) == buffer_size:  # if the buffer is full, pick an example from it\r\n                i = next(indices_iterator)\r\n                if self._state_dict:\r\n                    self._state_dict['global_example_index'] += 1\r\n                yield mem_buffer[i]\r\n                mem_buffer[i] = x  # replace the picked example by a new one\r\n            else:  # otherwise, keep filling the buffer\r\n                mem_buffer.append(x)\r\n        # when we run out of examples, we shuffle the remaining examples in the buffer and yield them\r\n        rng.shuffle(mem_buffer)\r\n        yield from mem_buffer\r\n\r\n    def shuffle_data_sources(self, generator: np.random.Generator) -> "BufferShuffledExamplesIterable":\r\n        """Shuffle the wrapped examples iterable as well as the shuffling buffer."""\r\n        return BufferShuffledExamplesIterable(\r\n            self.ex_iterable.shuffle_data_sources(generator), buffer_size=self.buffer_size, generator=generator\r\n        )\r\n\r\n    def shard_data_sources(self, worker_id: int, num_workers: int) -> "BufferShuffledExamplesIterable":\r\n        """Keep only the requested shard."""\r\n        return BufferShuffledExamplesIterable(\r\n            self.ex_iterable.shard_data_sources(worker_id, num_workers),\r\n            buffer_size=self.buffer_size,\r\n            generator=self.generator,\r\n        )\r\n\r\n    def load_state_dict(self, state_dict: dict) -> dict:\r\n        def _inner_load_state_dict(state, new_state):\r\n            if new_state is not None and isinstance(state, dict):\r\n                for key in state:\r\n                    state[key] = _inner_load_state_dict(state[key], new_state[key])\r\n                return state\r\n            elif new_state is not None and isinstance(state, list):\r\n                for i in range(len(state)):\r\n                    state[i] = _inner_load_state_dict(state[i], new_state[i])\r\n                return state\r\n            return new_state\r\n\r\n        return _inner_load_state_dict(self._state_dict, state_dict)\r\n```\r\n\r\nI've noticed that it uses significantly more RAM than the original version and shows a considerable drop in GPU utilization. Could you offer some suggestions to address this issue?\r\n\r\nOr is it prohibited to maintain anything other than simple indices that are small enough for each worker 😢 \r\n\r\n", "Some ExamplesIterable classes copy and store old versions of the state_dict of their parent ExamplesIterable. 
This is the case, for example, for batched `map()` (the state_dict of the beginning of the batch) or `interleave_datasets()` (the state_dict of the previous step, since it buffers one example to know if the iterable is exhausted).\r\n\r\nCopying a shuffle buffer takes some RAM and some time, which can slow down the data loading pipeline.\r\nMaybe the examples in the shuffle buffer shouldn't be copied (only do a shallow copy of the list); this would surely help." ]
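To make the shallow-copy idea above concrete, here is a minimal sketch. The helper name is hypothetical and the `mem_buffer` tuple layout is taken from the revised class in the thread above, so treat this as an illustration rather than the library's actual implementation. It deep-copies only the small index bookkeeping and keeps references to the buffered examples, which is safe as long as the examples are not mutated in place:

```python
import copy

def snapshot_state(state_dict: dict) -> dict:
    # Deep-copy the cheap bookkeeping (indices, counters)...
    snapshot = {k: copy.deepcopy(v) for k, v in state_dict.items() if k != "mem_buffer"}
    # ...but only shallow-copy the potentially huge shuffle buffer:
    # the new list holds references to the same example dicts
    # instead of duplicating every example.
    snapshot["mem_buffer"] = (list(state_dict["mem_buffer"][0]),)
    return snapshot
```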
[Resumable IterableDataset] Add IterableDataset state_dict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 2, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6658/reactions" }
PR_kwDODunzps5mlZyb
{ "diff_url": "https://github.com/huggingface/datasets/pull/6658.diff", "html_url": "https://github.com/huggingface/datasets/pull/6658", "merged_at": "2024-06-03T19:15:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/6658.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6658" }
2024-02-11T20:35:52Z
https://api.github.com/repos/huggingface/datasets/issues/6658/comments
A simple implementation of a mechanism to resume an IterableDataset. It works by restarting at the latest shard and skipping the already-seen samples. It provides fast resuming (though not instantaneous). Example: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({"a": range(5)}).to_iterable_dataset(num_shards=3) ds = concatenate_datasets([ds] * 2) print(f"{ds.state_dict()=}") for i, example in enumerate(ds): print(example) if i == 6: state_dict = ds.state_dict() print("checkpoint") ds.load_state_dict(state_dict) print(f"resuming from checkpoint {ds.state_dict()=}") for example in ds: print(example) ``` returns ``` ds.state_dict()={'ex_iterable_idx': 0, 'ex_iterables': [{'shard_idx': 0, 'shard_example_idx': 0}, {'shard_idx': 0, 'shard_example_idx': 0}]} {'a': 0} {'a': 1} {'a': 2} {'a': 3} {'a': 4} {'a': 0} {'a': 1} checkpoint {'a': 2} {'a': 3} {'a': 4} resuming from checkpoint ds.state_dict()={'ex_iterable_idx': 1, 'ex_iterables': [{'shard_idx': 3, 'shard_example_idx': 0}, {'shard_idx': 0, 'shard_example_idx': 2}]} {'a': 2} {'a': 3} {'a': 4} ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6658/timeline
closed
false
6,658
null
2024-06-03T19:15:39Z
null
true
2,129,147,085
https://api.github.com/repos/huggingface/datasets/issues/6657
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6657/events
[]
null
2024-03-06T15:06:22Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6657
NONE
completed
null
null
[ "Thanks for reporting, @atulsaurav.\r\n\r\nWe are investigating the issue. ", "I can't fix this issue because I do not appear as a team member of the huggingface datasets project: https://anaconda.org/huggingface/datasets\r\n\r\n@lhoestq could you please add `datasets` team members to the corresponding Anaconda project?\r\n\r\nOnce this done, I could recreate and update the Anaconda token, as mentioned above it seems the current one has expired.", "I think @LysandreJik has access ?", "FYI it failed for 2.18.0 too: https://github.com/huggingface/datasets/actions/runs/8117132330/job/22188677936", "We updated the token and I re-ran the conda releases :)" ]
Release not pushed to conda channel
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6657/reactions" }
I_kwDODunzps5-6DTN
null
2024-02-11T20:05:17Z
https://api.github.com/repos/huggingface/datasets/issues/6657/comments
### Describe the bug The GitHub Actions step to publish release 2.17.0 to the conda channel has failed due to an expired token. Can someone please update the Anaconda token and rerun the failed action? @albertvillanova? ![image](https://github.com/huggingface/datasets/assets/7138162/1b56ad3d-7643-4778-9cce-4bf531717700) ### Steps to reproduce the bug Please see this actions [link](https://github.com/huggingface/datasets/actions/runs/7842473662) ### Expected behavior The action runs successfully and the latest release is pushed to the Hugging Face conda channel ### Environment info Not applicable.
{ "avatar_url": "https://avatars.githubusercontent.com/u/7138162?v=4", "events_url": "https://api.github.com/users/atulsaurav/events{/privacy}", "followers_url": "https://api.github.com/users/atulsaurav/followers", "following_url": "https://api.github.com/users/atulsaurav/following{/other_user}", "gists_url": "https://api.github.com/users/atulsaurav/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/atulsaurav", "id": 7138162, "login": "atulsaurav", "node_id": "MDQ6VXNlcjcxMzgxNjI=", "organizations_url": "https://api.github.com/users/atulsaurav/orgs", "received_events_url": "https://api.github.com/users/atulsaurav/received_events", "repos_url": "https://api.github.com/users/atulsaurav/repos", "site_admin": false, "starred_url": "https://api.github.com/users/atulsaurav/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atulsaurav/subscriptions", "type": "User", "url": "https://api.github.com/users/atulsaurav" }
https://api.github.com/repos/huggingface/datasets/issues/6657/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6657/timeline
closed
false
6,657
null
2024-03-06T15:06:22Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,127,338,377
https://api.github.com/repos/huggingface/datasets/issues/6656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6656/events
[]
null
2024-03-15T22:18:21Z
[]
https://github.com/huggingface/datasets/issues/6656
NONE
null
null
null
[ "I get similar when dealing with a large jsonl file (6k lines), \r\n\r\n> TypeError: Couldn't cast array of type timestamp[us] to null\r\n\r\nYet when I split it into 1k lines, files, load_dataset works fine!\r\n\r\nhttps://github.com/huggingface/course/issues/692\r\n\r\n" ]
Error when loading a big local json file
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6656/reactions" }
I_kwDODunzps5-zJuJ
null
2024-02-09T15:14:21Z
https://api.github.com/repos/huggingface/datasets/issues/6656/comments
### Describe the bug When trying to load big json files from a local directory, `load_dataset` throws the following error ``` Traceback (most recent call last): File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single writer.write_table(table) File "miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/arrow_writer.py", line 573, in write_table pa_table = pa_table.combine_chunks() File "pyarrow/table.pxi", line 3638, in pyarrow.lib.Table.combine_chunks File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays ``` ### Steps to reproduce the bug 1. Download a big file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-train.json.gz` 2. Load it like `data = load_dataset("json", data_files=["nq-train.json"], split="train")` ```python from datasets import load_dataset data = load_dataset("json", data_files=["nq-train.json"], split="train") ``` A similarly formatted but smaller file, e.g. `https://dl.fbaipublicfiles.com/dpr/data/retriever/biencoder-nq-dev.json.gz`, is loaded without issues ```python from datasets import load_dataset data = load_dataset("json", data_files=["nq-dev.json"], split="train") ``` ### Expected behavior It should load normally ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/10062216?v=4", "events_url": "https://api.github.com/users/Riccorl/events{/privacy}", "followers_url": "https://api.github.com/users/Riccorl/followers", "following_url": "https://api.github.com/users/Riccorl/following{/other_user}", "gists_url": "https://api.github.com/users/Riccorl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Riccorl", "id": 10062216, "login": "Riccorl", "node_id": "MDQ6VXNlcjEwMDYyMjE2", "organizations_url": "https://api.github.com/users/Riccorl/orgs", "received_events_url": "https://api.github.com/users/Riccorl/received_events", "repos_url": "https://api.github.com/users/Riccorl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Riccorl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Riccorl/subscriptions", "type": "User", "url": "https://api.github.com/users/Riccorl" }
https://api.github.com/repos/huggingface/datasets/issues/6656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6656/timeline
open
false
6,656
null
null
null
false
2,127,020,042
https://api.github.com/repos/huggingface/datasets/issues/6655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6655/events
[]
null
2024-02-12T09:35:55Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6655
NONE
null
null
null
[ "Thanks for reporting, @arame.\r\n\r\nI guess you have an old version of `transformers` (that submodule is present in `transformers` since version 3.0.1, since nearly 4 years ago). If you update it, the error should disappear:\r\n```shell\r\npip install -U transformers\r\n```\r\n\r\nOn the other hand, I am wondering: does it make sense to use `transformers` in this case, even if we don't need it to load the `go_emotions` dataset (already converted to Parquet files)?\r\n- Maybe @mariosasko can give some insight, as he included these code lines:\r\n - #6454\r\n\r\nhttps://github.com/huggingface/datasets/blob/9751fb14594d354e952f0ebdfaf31cb203b011e7/src/datasets/utils/_dill.py#L60-L63\r\n", "The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n\r\nHowever, the logic does not account for `transformers<3`, so we should add a version check to fix that.", "> The linked code lazily registers a custom reducer for `transformers.PreTrainedTokenizerBase` only if `transformers` have already been imported (imports are expensive, so we check `sys.modules`).\r\n> \r\n> However, the logic does not account for `transformers<3`, so we should add a version check to fix that.\r\n\r\nThank you for that Mario. Would this fix solve the problem and do you have any idea when it will be done? \r\nI tried the pip install suggested by Albert and it made no difference.", "I tried running the code today and the problem appears to be fixed." ]
Cannot load the dataset go_emotions
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6655/reactions" }
I_kwDODunzps5-x8AK
null
2024-02-09T12:15:39Z
https://api.github.com/repos/huggingface/datasets/issues/6655/comments
### Describe the bug When I run the following code I get an exception; `go_emotions = load_dataset("go_emotions")` > AttributeError Traceback (most recent call last) Cell In[6], [line 1](vscode-notebook-cell:?execution_count=6&line=1) ----> [1](vscode-notebook-cell:?execution_count=6&line=1) go_emotions = load_dataset("go_emotions") [2](vscode-notebook-cell:?execution_count=6&line=2) data = go_emotions.data File [c:\Users\hijik\anaconda3\Lib\site-packages\datasets\load.py:2523](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2523), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) [2518](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2518) verification_mode = VerificationMode( [2519](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2519) (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS [2520](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2520) ) [2522](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2522) # Create a dataset builder -> [2523](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2523) builder_instance = load_dataset_builder( [2524](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2524) path=path, [2525](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2525) name=name, [2526](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2526) data_dir=data_dir, [2527](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2527) data_files=data_files, [2528](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2528) cache_dir=cache_dir, [2529](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2529) features=features, [2530](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2530) download_config=download_config, [2531](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2531) download_mode=download_mode, [2532](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2532) revision=revision, [2533](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2533) token=token, [2534](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2534) storage_options=storage_options, [2535](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2535) trust_remote_code=trust_remote_code, [2536](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/load.py:2536) _require_default_config_name=name is None, ... ---> [63](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:63) if issubclass(obj_type, transformers.PreTrainedTokenizerBase): [64](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:64) pklregister(obj_type)(_save_transformersPreTrainedTokenizerBase) [66](file:///C:/Users/hijik/anaconda3/Lib/site-packages/datasets/utils/_dill.py:66) # Unwrap `torch.compile`-ed functions AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase' Output is truncated. 
### Steps to reproduce the bug ``` from datasets import load_dataset go_emotions = load_dataset("go_emotions") ``` ### Expected behavior Should simply load the variable with the data from the file ### Environment info - `datasets` version: 2.16.1 - Platform: Windows-10-10.0.22631-SP0 - Python version: 3.11.4 - `huggingface_hub` version: 0.20.3 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/688324?v=4", "events_url": "https://api.github.com/users/arame/events{/privacy}", "followers_url": "https://api.github.com/users/arame/followers", "following_url": "https://api.github.com/users/arame/following{/other_user}", "gists_url": "https://api.github.com/users/arame/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/arame", "id": 688324, "login": "arame", "node_id": "MDQ6VXNlcjY4ODMyNA==", "organizations_url": "https://api.github.com/users/arame/orgs", "received_events_url": "https://api.github.com/users/arame/received_events", "repos_url": "https://api.github.com/users/arame/repos", "site_admin": false, "starred_url": "https://api.github.com/users/arame/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arame/subscriptions", "type": "User", "url": "https://api.github.com/users/arame" }
https://api.github.com/repos/huggingface/datasets/issues/6655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6655/timeline
open
false
6,655
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,126,939,358
https://api.github.com/repos/huggingface/datasets/issues/6654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6654/events
[]
null
2024-02-12T08:26:53Z
[]
https://github.com/huggingface/datasets/issues/6654
NONE
completed
null
null
[ "Hi ! This issue has been fixed by https://github.com/huggingface/datasets/pull/6283\r\n\r\nCan you try again with the new release 2.17.0 ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\n", "Amazing! It's indeed fixed now. Thanks!" ]
Batched dataset map throws exception that cannot cast fixed length array to Sequence
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6654/reactions" }
I_kwDODunzps5-xoTe
null
2024-02-09T11:23:19Z
https://api.github.com/repos/huggingface/datasets/issues/6654/comments
### Describe the bug I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 2093, failing to correctly process sequence lengths. ### Steps to reproduce the bug Create virtual environment and activate ``` virtualenv venv source venv/bin/activate ``` Then install the datasets package (I'm using the latest version) ``` pip install datasets==2.16.1 ``` Then run ```python # bug.py from datasets import Dataset from datasets.features import Features, Sequence, Value data = { "num": [[1, 2], [3, 4]], } features = Features({'num': Sequence(feature=Value(dtype='int32'), length=2)}) dataset = Dataset.from_dict(data, features=features) dataset.map(lambda x: x, batched=True, batch_size=1) ``` ### Expected behavior I get the following stack trace ``` Map: 50%|█████ | 1/2 [00:00<00:00, 423.92 examples/s] Traceback (most recent call last): File "/PATH/TO/BUG_PORT/bug.py", line 9, in <module> dataset.map(lambda x: x, batched=True, batch_size=1) File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3093, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single writer.write_batch(batch) File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 551, in write_batch array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 1797, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/PATH/TO/BUG_PORT/venv/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type fixed_size_list<item: int32>[2] to Sequence(feature=Value(dtype='int32', id=None), length=2, id=None) ``` After some debugging, I found that the if-statement that is actually failing is line 2093 in `datasets/table.py` ```python # datasets/table.py ... 2093 if feature.length * len(array) == len(array_values): 2094 return pa.FixedSizeListArray.from_arrays(_c(array_values, feature.feature), feature.length) ... ``` ### Environment info Platform: MacOS Datasets version: datasets==2.16.1 Python version: 3.9.6
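Until the fix shipped in 2.17.0 (see the comments above), one possible workaround, offered here only as a hedged sketch rather than an official recommendation, was to declare the column without a fixed length, so the data is stored as a variable-length list and the failing fixed-size cast path is never taken:

```python
from datasets import Dataset
from datasets.features import Features, Sequence, Value

data = {"num": [[1, 2], [3, 4]]}
# Dropping `length=2` keeps the column a plain variable-length Sequence,
# avoiding the fixed_size_list cast that 2.16.1 mishandles.
features = Features({"num": Sequence(feature=Value(dtype="int32"))})
dataset = Dataset.from_dict(data, features=features)
dataset.map(lambda x: x, batched=True, batch_size=1)
```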
{ "avatar_url": "https://avatars.githubusercontent.com/u/1029671?v=4", "events_url": "https://api.github.com/users/keesjandevries/events{/privacy}", "followers_url": "https://api.github.com/users/keesjandevries/followers", "following_url": "https://api.github.com/users/keesjandevries/following{/other_user}", "gists_url": "https://api.github.com/users/keesjandevries/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/keesjandevries", "id": 1029671, "login": "keesjandevries", "node_id": "MDQ6VXNlcjEwMjk2NzE=", "organizations_url": "https://api.github.com/users/keesjandevries/orgs", "received_events_url": "https://api.github.com/users/keesjandevries/received_events", "repos_url": "https://api.github.com/users/keesjandevries/repos", "site_admin": false, "starred_url": "https://api.github.com/users/keesjandevries/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keesjandevries/subscriptions", "type": "User", "url": "https://api.github.com/users/keesjandevries" }
https://api.github.com/repos/huggingface/datasets/issues/6654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6654/timeline
closed
false
6,654
null
2024-02-12T08:26:53Z
null
false
2,126,831,929
https://api.github.com/repos/huggingface/datasets/issues/6653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6653/events
[]
null
2024-02-09T10:18:20Z
[]
https://github.com/huggingface/datasets/pull/6653
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6653). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005076 / 0.011353 (-0.006277) | 0.003424 / 0.011008 (-0.007584) | 0.064195 / 0.038508 (0.025687) | 0.031742 / 0.023109 (0.008633) | 0.244774 / 0.275898 (-0.031124) | 0.268529 / 0.323480 (-0.054951) | 0.003970 / 0.007986 (-0.004016) | 0.002657 / 0.004328 (-0.001672) | 0.048847 / 0.004250 (0.044597) | 0.042196 / 0.037052 (0.005144) | 0.266044 / 0.258489 (0.007555) | 0.282400 / 0.293841 (-0.011441) | 0.027617 / 0.128546 (-0.100929) | 0.010400 / 0.075646 (-0.065246) | 0.205910 / 0.419271 (-0.213362) | 0.035820 / 0.043533 (-0.007713) | 0.247750 / 0.255139 (-0.007389) | 0.267318 / 0.283200 (-0.015882) | 0.017980 / 0.141683 (-0.123703) | 1.107263 / 1.452155 (-0.344892) | 1.173208 / 1.492716 (-0.319509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095830 / 0.018006 (0.077824) | 0.293891 / 0.000490 (0.293401) | 0.000257 / 0.000200 (0.000057) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018138 / 0.037411 (-0.019273) | 0.061631 / 0.014526 (0.047105) | 0.073038 / 0.176557 (-0.103519) | 0.118317 / 0.737135 (-0.618818) | 0.074190 / 0.296338 (-0.222148) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287026 / 0.215209 (0.071817) | 2.786137 / 2.077655 (0.708482) | 1.472575 / 1.504120 (-0.031544) | 1.346919 / 1.541195 (-0.194276) | 1.388535 / 1.468490 (-0.079955) | 0.565731 / 4.584777 (-4.019046) | 2.382573 / 3.745712 (-1.363139) | 2.736926 / 5.269862 (-2.532935) | 1.716517 / 4.565676 (-2.849159) | 0.062168 / 0.424275 (-0.362108) | 0.004924 / 0.007607 (-0.002683) | 0.341897 / 0.226044 (0.115853) | 3.355715 / 2.268929 (1.086787) | 1.837014 / 55.444624 (-53.607611) | 1.532063 / 6.876477 (-5.344414) | 1.548193 / 2.142072 (-0.593880) | 0.634995 / 4.805227 (-4.170232) | 0.115622 / 6.500664 (-6.385042) | 0.042252 / 0.075469 (-0.033217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970713 / 1.841788 (-0.871075) | 11.727576 / 8.074308 (3.653268) | 9.806524 / 10.191392 (-0.384868) | 0.127622 / 0.680424 (-0.552802) | 0.014140 / 0.534201 (-0.520061) | 0.286832 / 0.579283 (-0.292451) | 0.266556 / 0.434364 (-0.167808) | 0.325940 / 0.540337 (-0.214398) | 0.421839 / 1.386936 (-0.965097) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005495 / 0.011353 (-0.005858) | 0.003676 / 0.011008 (-0.007332) | 0.054361 / 0.038508 (0.015853) | 0.030743 / 0.023109 (0.007633) | 0.277200 / 0.275898 (0.001302) | 0.313459 / 0.323480 (-0.010021) | 0.004316 / 0.007986 (-0.003670) | 0.002750 / 0.004328 (-0.001578) | 0.049491 / 0.004250 (0.045241) | 0.044268 / 0.037052 (0.007215) | 0.292529 / 0.258489 (0.034039) | 0.326524 / 0.293841 (0.032683) | 0.048040 / 0.128546 (-0.080507) | 0.010390 / 0.075646 (-0.065256) | 0.058459 / 0.419271 (-0.360813) | 0.033765 / 0.043533 (-0.009768) | 0.276003 / 0.255139 (0.020864) | 0.297299 / 0.283200 (0.014099) | 0.018532 / 0.141683 (-0.123151) | 1.157639 / 1.452155 (-0.294515) | 1.220492 / 1.492716 (-0.272225) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.093903 / 0.018006 (0.075897) | 0.303005 / 0.000490 (0.302515) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021580 / 0.037411 (-0.015831) | 0.076176 / 0.014526 (0.061650) | 0.086998 / 0.176557 (-0.089558) | 0.124148 / 0.737135 (-0.612987) | 0.088613 / 0.296338 (-0.207725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300623 / 0.215209 (0.085414) | 2.911876 / 2.077655 (0.834221) | 1.588398 / 1.504120 (0.084278) | 1.471251 / 1.541195 (-0.069944) | 1.505528 / 1.468490 (0.037038) | 0.570635 / 4.584777 (-4.014142) | 2.485769 / 3.745712 (-1.259943) | 2.785355 / 5.269862 (-2.484507) | 1.752944 / 4.565676 (-2.812732) | 0.063146 / 0.424275 (-0.361129) | 0.004980 / 0.007607 (-0.002627) | 0.354577 / 0.226044 (0.128532) | 3.477181 / 2.268929 (1.208253) | 1.951906 / 55.444624 (-53.492718) | 1.677169 / 6.876477 (-5.199307) | 1.686338 / 2.142072 (-0.455735) | 0.637156 / 4.805227 (-4.168071) | 0.117732 / 6.500664 (-6.382932) | 0.041091 / 0.075469 (-0.034378) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010071 / 1.841788 (-0.831717) | 12.172242 / 8.074308 (4.097934) | 10.422811 / 10.191392 (0.231419) | 0.137185 / 0.680424 (-0.543239) | 0.014643 / 0.534201 (-0.519558) | 0.287248 / 0.579283 (-0.292035) | 0.272779 / 0.434364 (-0.161585) | 0.331761 / 0.540337 (-0.208576) | 0.417266 / 1.386936 (-0.969670) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9751fb14594d354e952f0ebdfaf31cb203b011e7 \"CML watermark\")\n" ]
Set dev version
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6653/reactions" }
PR_kwDODunzps5mdv5S
{ "diff_url": "https://github.com/huggingface/datasets/pull/6653.diff", "html_url": "https://github.com/huggingface/datasets/pull/6653", "merged_at": "2024-02-09T10:12:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/6653.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6653" }
2024-02-09T10:12:02Z
https://api.github.com/repos/huggingface/datasets/issues/6653/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6653/timeline
closed
false
6,653
null
2024-02-09T10:12:12Z
null
true
2,126,760,798
https://api.github.com/repos/huggingface/datasets/issues/6652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6652/events
[]
null
2024-02-09T10:11:48Z
[]
https://github.com/huggingface/datasets/pull/6652
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6652). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005207 / 0.011353 (-0.006145) | 0.003785 / 0.011008 (-0.007223) | 0.064221 / 0.038508 (0.025713) | 0.028981 / 0.023109 (0.005872) | 0.246215 / 0.275898 (-0.029683) | 0.268058 / 0.323480 (-0.055422) | 0.004028 / 0.007986 (-0.003958) | 0.002804 / 0.004328 (-0.001525) | 0.048878 / 0.004250 (0.044627) | 0.042641 / 0.037052 (0.005589) | 0.255590 / 0.258489 (-0.002899) | 0.287377 / 0.293841 (-0.006464) | 0.027772 / 0.128546 (-0.100774) | 0.010637 / 0.075646 (-0.065009) | 0.211526 / 0.419271 (-0.207746) | 0.035789 / 0.043533 (-0.007744) | 0.243042 / 0.255139 (-0.012097) | 0.268369 / 0.283200 (-0.014830) | 0.017907 / 0.141683 (-0.123776) | 1.138829 / 1.452155 (-0.313326) | 1.175732 / 1.492716 (-0.316984) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094205 / 0.018006 (0.076199) | 0.304317 / 0.000490 (0.303827) | 0.000206 / 0.000200 (0.000006) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018424 / 0.037411 (-0.018987) | 0.061719 / 0.014526 (0.047193) | 0.073471 / 0.176557 (-0.103085) | 0.121577 / 0.737135 (-0.615558) | 0.075134 / 0.296338 (-0.221204) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275178 / 0.215209 (0.059969) | 2.689222 / 2.077655 (0.611568) | 1.396680 / 1.504120 (-0.107439) | 1.278782 / 1.541195 (-0.262413) | 1.326632 / 1.468490 (-0.141858) | 0.566915 / 4.584777 (-4.017862) | 2.365928 / 3.745712 (-1.379784) | 2.785435 / 5.269862 (-2.484427) | 1.745131 / 4.565676 (-2.820546) | 0.062798 / 0.424275 (-0.361477) | 0.005107 / 0.007607 (-0.002500) | 0.330441 / 0.226044 (0.104396) | 3.266265 / 2.268929 (0.997337) | 1.792588 / 55.444624 (-53.652036) | 1.516021 / 6.876477 (-5.360455) | 1.562750 / 2.142072 (-0.579323) | 0.652964 / 4.805227 (-4.152264) | 0.117813 / 6.500664 (-6.382852) | 0.042372 / 0.075469 (-0.033097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010107 / 1.841788 (-0.831680) | 11.819910 / 8.074308 (3.745602) | 9.701673 / 10.191392 (-0.489719) | 0.178165 / 0.680424 (-0.502259) | 0.014438 / 0.534201 (-0.519763) | 0.297733 / 0.579283 (-0.281550) | 0.264914 / 0.434364 (-0.169450) | 0.324531 / 0.540337 (-0.215806) | 0.430207 / 1.386936 (-0.956729) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005848 / 0.011353 (-0.005505) | 0.003870 / 0.011008 (-0.007138) | 0.050379 / 0.038508 (0.011871) | 0.031238 / 0.023109 (0.008129) | 0.276839 / 0.275898 (0.000941) | 0.299488 / 0.323480 (-0.023992) | 0.005143 / 0.007986 (-0.002842) | 0.002725 / 0.004328 (-0.001604) | 0.048184 / 0.004250 (0.043934) | 0.046232 / 0.037052 (0.009180) | 0.287058 / 0.258489 (0.028569) | 0.322659 / 0.293841 (0.028818) | 0.047598 / 0.128546 (-0.080949) | 0.011116 / 0.075646 (-0.064530) | 0.058252 / 0.419271 (-0.361019) | 0.033404 / 0.043533 (-0.010128) | 0.277650 / 0.255139 (0.022511) | 0.295610 / 0.283200 (0.012410) | 0.018124 / 0.141683 (-0.123559) | 1.135052 / 1.452155 (-0.317103) | 1.194261 / 1.492716 (-0.298456) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.095595 / 0.018006 (0.077588) | 0.306408 / 0.000490 (0.305918) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022027 / 0.037411 (-0.015385) | 0.076224 / 0.014526 (0.061698) | 0.087441 / 0.176557 (-0.089116) | 0.126636 / 0.737135 (-0.610499) | 0.089442 / 0.296338 (-0.206896) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291315 / 0.215209 (0.076106) | 2.835304 / 2.077655 (0.757650) | 1.581102 / 1.504120 (0.076982) | 1.463046 / 1.541195 (-0.078149) | 1.481982 / 1.468490 (0.013492) | 0.559989 / 4.584777 (-4.024788) | 2.385262 / 3.745712 (-1.360450) | 2.773478 / 5.269862 (-2.496383) | 1.744427 / 4.565676 (-2.821249) | 0.062687 / 0.424275 (-0.361589) | 0.005149 / 0.007607 (-0.002458) | 0.374600 / 0.226044 (0.148555) | 3.376507 / 2.268929 (1.107579) | 1.935290 / 55.444624 (-53.509334) | 1.663227 / 6.876477 (-5.213250) | 1.678987 / 2.142072 (-0.463085) | 0.638970 / 4.805227 (-4.166258) | 0.120000 / 6.500664 (-6.380664) | 0.040862 / 0.075469 (-0.034608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008795 / 1.841788 (-0.832993) | 12.275084 / 8.074308 (4.200776) | 10.340088 / 10.191392 (0.148696) | 0.136454 / 0.680424 (-0.543970) | 0.014404 / 0.534201 (-0.519797) | 0.289478 / 0.579283 (-0.289805) | 0.279243 / 0.434364 (-0.155121) | 0.330992 / 0.540337 (-0.209346) | 0.422043 / 1.386936 (-0.964893) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70633576ecf1f3f5e5cdfd8c9189246b3604f4b6 \"CML watermark\")\n" ]
Release: 2.17.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6652/reactions" }
PR_kwDODunzps5mdgcv
{ "diff_url": "https://github.com/huggingface/datasets/pull/6652.diff", "html_url": "https://github.com/huggingface/datasets/pull/6652", "merged_at": "2024-02-09T10:05:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/6652.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6652" }
2024-02-09T09:25:01Z
https://api.github.com/repos/huggingface/datasets/issues/6652/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6652/timeline
closed
false
6,652
null
2024-02-09T10:05:35Z
null
true
2,126,649,626
https://api.github.com/repos/huggingface/datasets/issues/6651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6651/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-06-14T14:42:46Z
[]
https://github.com/huggingface/datasets/issues/6651
NONE
null
null
null
[]
Slice splits support for datasets.load_from_disk
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/6651/reactions" }
I_kwDODunzps5-whka
null
2024-02-09T08:00:21Z
https://api.github.com/repos/huggingface/datasets/issues/6651/comments
### Feature request Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset`. ### Motivation Slice splits are convenient in a number of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogenize the APIs of load_from_disk and load_dataset. ### Your contribution Sure, if the devs think the feature request is sensible.
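Until such support lands, a rough equivalent can be emulated after loading; a minimal sketch, assuming the dataset was written with `save_to_disk` (the path below is hypothetical):

```python
from datasets import load_from_disk

# `load_dataset` accepts slice syntax directly, e.g. split="train[:100]".
# `load_from_disk` has no split argument, so slice after loading instead.
ds = load_from_disk("path/to/saved_dataset")   # hypothetical path
train_head = ds["train"].select(range(100))    # rough equivalent of "train[:100]"
```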
{ "avatar_url": "https://avatars.githubusercontent.com/u/37439882?v=4", "events_url": "https://api.github.com/users/mhorlacher/events{/privacy}", "followers_url": "https://api.github.com/users/mhorlacher/followers", "following_url": "https://api.github.com/users/mhorlacher/following{/other_user}", "gists_url": "https://api.github.com/users/mhorlacher/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mhorlacher", "id": 37439882, "login": "mhorlacher", "node_id": "MDQ6VXNlcjM3NDM5ODgy", "organizations_url": "https://api.github.com/users/mhorlacher/orgs", "received_events_url": "https://api.github.com/users/mhorlacher/received_events", "repos_url": "https://api.github.com/users/mhorlacher/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mhorlacher/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mhorlacher/subscriptions", "type": "User", "url": "https://api.github.com/users/mhorlacher" }
https://api.github.com/repos/huggingface/datasets/issues/6651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6651/timeline
open
false
6,651
null
null
null
false
2,125,680,991
https://api.github.com/repos/huggingface/datasets/issues/6650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6650/events
[]
null
2024-02-21T00:34:41Z
[]
https://github.com/huggingface/datasets/issues/6650
NONE
null
null
null
[ "Hi! Does running the following code also return the same error on your machine? \r\n\r\n```python\r\nimport copy\r\nimport pyarrow as pa\r\nfrom datasets.table import InMemoryTable\r\n\r\ncopy.deepcopy(InMemoryTable(pa.table({\"a\": [1, 2, 3], \"b\": [\"foo\", \"bar\", \"foobar\"]})))\r\n```", "No, it doesn't, it runs fine. But what's really strange is that the error just went away after I reran the data prep script for conversion from csv to a datasets object. I realize that's not very helpful since the problem isn't reproducible. ", "Feel free to close the issue then :)." ]
AttributeError: 'InMemoryTable' object has no attribute '_batches'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6650/reactions" }
I_kwDODunzps5-s1Ff
null
2024-02-08T17:11:26Z
https://api.github.com/repos/huggingface/datasets/issues/6650/comments
### Describe the bug ``` Traceback (most recent call last): File "finetune.py", line 103, in <module> main(args) File "finetune.py", line 45, in main data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer, File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 868, in map { File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.py", line 869, in <dictcomp> k: dataset.map( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3432, in _map_single arrow_formatted_shard = shard.with_format("arrow") File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2667, in with_format dataset = copy.deepcopy(self) File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 270, in _reconstruct state = deepcopy(state, memo) File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 146, in deepcopy y = copier(x, memo) File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 230, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/opt/conda/envs/ptca/lib/python3.8/copy.py", line 153, in deepcopy y = copier(memo) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/table.py", line 176, in __deepcopy__ memo[id(self._batches)] = list(self._batches) AttributeError: 'InMemoryTable' object has no attribute '_batches' ``` ### Steps to reproduce the bug I'm running an MLOps flow using AzureML. The error appears when I run the following function in my training script: ```python data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer, seq_length), batched=True, batch_size=batch_size, remove_columns=['col1', 'col2']) ``` ```python def tokenize_function(tok, seq_length, example): # Pad so that each batch has the same sequence length inp = tok(example['col1'], padding=True, truncation=True) outp = tok(example['col2'], padding="max_length", max_length=seq_length) res = { 'input_ids': inp['input_ids'], 'attention_mask': inp['attention_mask'], 'decoder_input_ids': outp['input_ids'], 'labels': outp['input_ids'], 'decoder_attention_mask': outp['attention_mask'] } return res ``` ### Expected behavior Processing proceeds without errors. I ran this same workflow 2 weeks ago without a problem. I recreated the environment since then, but it doesn't appear that datasets versions have changed since Dec. '23. ### Environment info datasets 2.16.1 transformers 4.35.2 pyarrow 15.0.0 pyarrow-hotfix 0.6 torch 2.0.1 I'm not using the latest transformers version because there was an error due to a conflict with Azure mlflow when I tried the last time.
{ "avatar_url": "https://avatars.githubusercontent.com/u/13874772?v=4", "events_url": "https://api.github.com/users/matsuobasho/events{/privacy}", "followers_url": "https://api.github.com/users/matsuobasho/followers", "following_url": "https://api.github.com/users/matsuobasho/following{/other_user}", "gists_url": "https://api.github.com/users/matsuobasho/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/matsuobasho", "id": 13874772, "login": "matsuobasho", "node_id": "MDQ6VXNlcjEzODc0Nzcy", "organizations_url": "https://api.github.com/users/matsuobasho/orgs", "received_events_url": "https://api.github.com/users/matsuobasho/received_events", "repos_url": "https://api.github.com/users/matsuobasho/repos", "site_admin": false, "starred_url": "https://api.github.com/users/matsuobasho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matsuobasho/subscriptions", "type": "User", "url": "https://api.github.com/users/matsuobasho" }
https://api.github.com/repos/huggingface/datasets/issues/6650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6650/timeline
open
false
6,650
null
null
null
false
2,124,940,213
https://api.github.com/repos/huggingface/datasets/issues/6649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6649/events
[]
null
2024-02-08T11:23:35Z
[]
https://github.com/huggingface/datasets/pull/6649
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6649). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005197 / 0.011353 (-0.006156) | 0.003469 / 0.011008 (-0.007539) | 0.062306 / 0.038508 (0.023798) | 0.028417 / 0.023109 (0.005308) | 0.241147 / 0.275898 (-0.034751) | 0.270910 / 0.323480 (-0.052569) | 0.003053 / 0.007986 (-0.004933) | 0.003343 / 0.004328 (-0.000985) | 0.048044 / 0.004250 (0.043794) | 0.043738 / 0.037052 (0.006686) | 0.259274 / 0.258489 (0.000785) | 0.282522 / 0.293841 (-0.011319) | 0.027807 / 0.128546 (-0.100739) | 0.010413 / 0.075646 (-0.065234) | 0.206322 / 0.419271 (-0.212950) | 0.035770 / 0.043533 (-0.007763) | 0.243465 / 0.255139 (-0.011674) | 0.261596 / 0.283200 (-0.021604) | 0.018613 / 0.141683 (-0.123070) | 1.115509 / 1.452155 (-0.336645) | 1.189403 / 1.492716 (-0.303314) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.086075 / 0.018006 (0.068069) | 0.296140 / 0.000490 (0.295650) | 0.000198 / 0.000200 (-0.000002) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018238 / 0.037411 (-0.019173) | 0.061783 / 0.014526 (0.047257) | 0.072014 / 0.176557 (-0.104543) | 0.118746 / 0.737135 (-0.618389) | 0.073279 / 0.296338 (-0.223060) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278281 / 0.215209 (0.063072) | 2.772209 / 2.077655 (0.694555) | 1.404503 / 1.504120 (-0.099617) | 1.274753 / 1.541195 (-0.266441) | 1.304394 / 1.468490 (-0.164096) | 0.556903 / 4.584777 (-4.027874) | 2.335428 / 3.745712 (-1.410284) | 2.712255 / 5.269862 (-2.557606) | 1.722252 / 4.565676 (-2.843425) | 0.061268 / 0.424275 (-0.363007) | 0.005029 / 0.007607 (-0.002578) | 0.326112 / 0.226044 (0.100067) | 3.207917 / 2.268929 (0.938988) | 1.743513 / 55.444624 (-53.701111) | 1.476418 / 6.876477 (-5.400059) | 1.489776 / 2.142072 (-0.652297) | 0.628181 / 4.805227 (-4.177046) | 0.115959 / 6.500664 (-6.384706) | 0.041854 / 0.075469 (-0.033615) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969039 / 1.841788 (-0.872749) | 11.178646 / 8.074308 (3.104338) | 9.639716 / 10.191392 (-0.551676) | 0.139750 / 0.680424 (-0.540674) | 0.014230 / 0.534201 (-0.519971) | 0.285318 / 0.579283 (-0.293965) | 0.260788 / 0.434364 (-0.173576) | 0.324183 / 0.540337 (-0.216154) | 0.416326 / 1.386936 (-0.970610) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005149 / 0.011353 (-0.006204) | 0.003469 / 0.011008 (-0.007539) | 0.049761 / 0.038508 (0.011253) | 0.030723 / 0.023109 (0.007614) | 0.271562 / 0.275898 (-0.004336) | 0.297843 / 0.323480 (-0.025637) | 0.004296 / 0.007986 (-0.003690) | 0.002704 / 0.004328 (-0.001624) | 0.048890 / 0.004250 (0.044640) | 0.044776 / 0.037052 (0.007723) | 0.285490 / 0.258489 (0.027001) | 0.312888 / 0.293841 (0.019047) | 0.046239 / 0.128546 (-0.082307) | 0.010238 / 0.075646 (-0.065408) | 0.057968 / 0.419271 (-0.361304) | 0.033295 / 0.043533 (-0.010238) | 0.274320 / 0.255139 (0.019181) | 0.296199 / 0.283200 (0.012999) | 0.017856 / 0.141683 (-0.123827) | 1.147532 / 1.452155 (-0.304622) | 1.211647 / 1.492716 (-0.281070) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089655 / 0.018006 (0.071649) | 0.297275 / 0.000490 (0.296785) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021739 / 0.037411 (-0.015672) | 0.075041 / 0.014526 (0.060515) | 0.085754 / 0.176557 (-0.090802) | 0.124512 / 0.737135 (-0.612623) | 0.086926 / 0.296338 (-0.209412) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.290306 / 0.215209 (0.075097) | 2.847404 / 2.077655 (0.769749) | 1.606175 / 1.504120 (0.102055) | 1.483220 / 1.541195 (-0.057974) | 1.514551 / 1.468490 (0.046061) | 0.559332 / 4.584777 (-4.025445) | 2.403089 / 3.745712 (-1.342624) | 2.715179 / 5.269862 (-2.554683) | 1.688340 / 4.565676 (-2.877337) | 0.062057 / 0.424275 (-0.362218) | 0.004955 / 0.007607 (-0.002652) | 0.338909 / 0.226044 (0.112865) | 3.356882 / 2.268929 (1.087954) | 1.942259 / 55.444624 (-53.502366) | 1.675195 / 6.876477 (-5.201282) | 1.688158 / 2.142072 (-0.453914) | 0.637270 / 4.805227 (-4.167957) | 0.114314 / 6.500664 (-6.386350) | 0.040677 / 0.075469 (-0.034792) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.022126 / 1.841788 (-0.819661) | 11.783359 / 8.074308 (3.709051) | 10.247652 / 10.191392 (0.056260) | 0.138188 / 0.680424 (-0.542236) | 0.014850 / 0.534201 (-0.519351) | 0.287414 / 0.579283 (-0.291869) | 0.274393 / 0.434364 (-0.159971) | 0.327255 / 0.540337 (-0.213082) | 0.416355 / 1.386936 (-0.970581) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#727a952367966a98b759d54f333b1e2c28cfd4d4 \"CML watermark\")\n" ]
Minor multi gpu doc improvement
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6649/reactions" }
PR_kwDODunzps5mXRo8
{ "diff_url": "https://github.com/huggingface/datasets/pull/6649.diff", "html_url": "https://github.com/huggingface/datasets/pull/6649", "merged_at": "2024-02-08T11:17:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/6649.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6649" }
2024-02-08T11:17:24Z
https://api.github.com/repos/huggingface/datasets/issues/6649/comments
Just added `torch.no_grad` and `eval()` to the multi-GPU doc example.
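For context, the pattern being documented looks roughly like this; a minimal sketch using the model from the reworked doc example, where the `"text"` column name is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Qwen/Qwen1.5-0.5B-Chat"  # the model the updated doc example uses
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()  # inference mode: disables dropout and similar training-only behavior

def infer(example):
    inputs = tokenizer(example["text"], return_tensors="pt")
    with torch.no_grad():  # no gradient bookkeeping is needed for generation
        outputs = model.generate(**inputs, max_new_tokens=20)
    example["generated"] = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return example
```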
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6649/timeline
closed
false
6,649
null
2024-02-08T11:17:35Z
null
true
2,124,813,589
https://api.github.com/repos/huggingface/datasets/issues/6648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6648/events
[]
null
2024-02-08T13:57:41Z
[]
https://github.com/huggingface/datasets/pull/6648
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6648). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004951 / 0.011353 (-0.006402) | 0.003187 / 0.011008 (-0.007821) | 0.062959 / 0.038508 (0.024451) | 0.028037 / 0.023109 (0.004928) | 0.241374 / 0.275898 (-0.034524) | 0.262792 / 0.323480 (-0.060688) | 0.004132 / 0.007986 (-0.003854) | 0.002766 / 0.004328 (-0.001563) | 0.051416 / 0.004250 (0.047165) | 0.040957 / 0.037052 (0.003904) | 0.260760 / 0.258489 (0.002271) | 0.282018 / 0.293841 (-0.011823) | 0.027689 / 0.128546 (-0.100857) | 0.010433 / 0.075646 (-0.065214) | 0.211598 / 0.419271 (-0.207674) | 0.035447 / 0.043533 (-0.008086) | 0.244333 / 0.255139 (-0.010806) | 0.263192 / 0.283200 (-0.020008) | 0.016816 / 0.141683 (-0.124867) | 1.103188 / 1.452155 (-0.348967) | 1.179093 / 1.492716 (-0.313623) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092412 / 0.018006 (0.074406) | 0.301226 / 0.000490 (0.300736) | 0.000208 / 0.000200 (0.000008) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018146 / 0.037411 (-0.019265) | 0.061447 / 0.014526 (0.046921) | 0.072162 / 0.176557 (-0.104394) | 0.118965 / 0.737135 (-0.618170) | 0.073756 / 0.296338 (-0.222583) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285361 / 0.215209 (0.070152) | 2.776928 / 2.077655 (0.699273) | 1.506859 / 1.504120 (0.002739) | 1.379119 / 1.541195 (-0.162075) | 1.401798 / 1.468490 (-0.066692) | 0.572512 / 4.584777 (-4.012265) | 2.403793 / 3.745712 (-1.341919) | 2.740496 / 5.269862 (-2.529366) | 1.714611 / 4.565676 (-2.851065) | 0.063496 / 0.424275 (-0.360780) | 0.005009 / 0.007607 (-0.002598) | 0.342438 / 0.226044 (0.116393) | 3.368129 / 2.268929 (1.099200) | 1.831200 / 55.444624 (-53.613424) | 1.553611 / 6.876477 (-5.322866) | 1.578116 / 2.142072 (-0.563956) | 0.653034 / 4.805227 (-4.152193) | 0.117724 / 6.500664 (-6.382940) | 0.041188 / 0.075469 (-0.034282) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972520 / 1.841788 (-0.869268) | 11.186297 / 8.074308 (3.111989) | 9.485829 / 10.191392 (-0.705563) | 0.139715 / 0.680424 (-0.540708) | 0.013705 / 0.534201 (-0.520496) | 0.287384 / 0.579283 (-0.291899) | 0.266784 / 0.434364 (-0.167580) | 0.320789 / 0.540337 (-0.219548) | 0.417484 / 1.386936 (-0.969452) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005570 / 0.011353 (-0.005783) | 0.003416 / 0.011008 (-0.007592) | 0.051160 / 0.038508 (0.012652) | 0.031082 / 0.023109 (0.007973) | 0.279336 / 0.275898 (0.003438) | 0.300529 / 0.323480 (-0.022951) | 0.004320 / 0.007986 (-0.003666) | 0.002781 / 0.004328 (-0.001548) | 0.049642 / 0.004250 (0.045391) | 0.044379 / 0.037052 (0.007327) | 0.293797 / 0.258489 (0.035308) | 0.317844 / 0.293841 (0.024003) | 0.049697 / 0.128546 (-0.078849) | 0.010624 / 0.075646 (-0.065023) | 0.058834 / 0.419271 (-0.360437) | 0.033869 / 0.043533 (-0.009664) | 0.280547 / 0.255139 (0.025408) | 0.300685 / 0.283200 (0.017486) | 0.017010 / 0.141683 (-0.124673) | 1.172277 / 1.452155 (-0.279878) | 1.205359 / 1.492716 (-0.287358) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092914 / 0.018006 (0.074907) | 0.303561 / 0.000490 (0.303071) | 0.000219 / 0.000200 (0.000019) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022379 / 0.037411 (-0.015032) | 0.075460 / 0.014526 (0.060934) | 0.085795 / 0.176557 (-0.090762) | 0.124776 / 0.737135 (-0.612360) | 0.088260 / 0.296338 (-0.208079) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302873 / 0.215209 (0.087664) | 2.936173 / 2.077655 (0.858519) | 1.589251 / 1.504120 (0.085131) | 1.477552 / 1.541195 (-0.063643) | 1.479322 / 1.468490 (0.010832) | 0.570481 / 4.584777 (-4.014296) | 2.434137 / 3.745712 (-1.311575) | 2.774012 / 5.269862 (-2.495849) | 1.718103 / 4.565676 (-2.847574) | 0.061951 / 0.424275 (-0.362324) | 0.004992 / 0.007607 (-0.002615) | 0.352250 / 0.226044 (0.126205) | 3.457417 / 2.268929 (1.188488) | 1.934587 / 55.444624 (-53.510037) | 1.646904 / 6.876477 (-5.229573) | 1.669429 / 2.142072 (-0.472643) | 0.649665 / 4.805227 (-4.155562) | 0.116630 / 6.500664 (-6.384034) | 0.040669 / 0.075469 (-0.034800) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.011488 / 1.841788 (-0.830300) | 11.866394 / 8.074308 (3.792086) | 10.144588 / 10.191392 (-0.046804) | 0.129931 / 0.680424 (-0.550493) | 0.014885 / 0.534201 (-0.519316) | 0.287463 / 0.579283 (-0.291821) | 0.280754 / 0.434364 (-0.153610) | 0.330139 / 0.540337 (-0.210199) | 0.414653 / 1.386936 (-0.972283) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#585275b8deaebd1bdcbd3725fa63172395791c73 \"CML watermark\")\n" ]
Document usage of hfh cli instead of git
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6648/reactions" }
PR_kwDODunzps5mW1MA
{ "diff_url": "https://github.com/huggingface/datasets/pull/6648.diff", "html_url": "https://github.com/huggingface/datasets/pull/6648", "merged_at": "2024-02-08T13:51:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/6648.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6648" }
2024-02-08T10:24:56Z
https://api.github.com/repos/huggingface/datasets/issues/6648/comments
(basically the same content as the hfh upload docs, but adapted for datasets)
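The documented flow replaces `git add`/`git push` with `huggingface_hub` tooling — on the command line, something like `huggingface-cli upload <repo_id> <local_folder> --repo-type dataset`. A minimal Python sketch of the same upload, where the repo id and folder path are placeholders:

```python
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="path/to/local_dataset_files",  # placeholder local path
    repo_id="username/my-dataset",              # placeholder repo id
    repo_type="dataset",
)
```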
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6648/timeline
closed
false
6,648
null
2024-02-08T13:51:39Z
null
true
2,123,397,569
https://api.github.com/repos/huggingface/datasets/issues/6647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6647/events
[]
null
2024-02-08T15:34:17Z
[]
https://github.com/huggingface/datasets/pull/6647
NONE
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6647). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Thanks for adding the explicit loading command.\r\n> \r\n> However, I would move it just below, where we present the JSON-Lines example.\r\n> \r\n> * Maybe adding that this format is called JSON-Lines\r\n> * Add the example after the JSON-Lines data example\r\n> \r\n> https://github.com/huggingface/datasets/blob/14d9afbb7ae1b787c450261ca0ff374551993031/docs/source/loading.mdx#L135-L138\r\n\r\nThank you @albertvillanova for the feedback! I moved the jsonl file loading example to a more appropriate location. " ]
Update loading.mdx to include "jsonl" file loading.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6647/reactions" }
PR_kwDODunzps5mSB2B
{ "diff_url": "https://github.com/huggingface/datasets/pull/6647.diff", "html_url": "https://github.com/huggingface/datasets/pull/6647", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6647.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6647" }
2024-02-07T16:18:08Z
https://api.github.com/repos/huggingface/datasets/issues/6647/comments
* A small update to the documentation, noting the ability to load jsonl files.
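The addition boils down to pointing the `json` builder at a JSON-Lines file; a minimal sketch with a placeholder filename:

```python
from datasets import load_dataset

# The "json" loader also handles JSON-Lines files (one JSON object per line).
dataset = load_dataset("json", data_files="my_file.jsonl")
```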
{ "avatar_url": "https://avatars.githubusercontent.com/u/22236370?v=4", "events_url": "https://api.github.com/users/mosheber/events{/privacy}", "followers_url": "https://api.github.com/users/mosheber/followers", "following_url": "https://api.github.com/users/mosheber/following{/other_user}", "gists_url": "https://api.github.com/users/mosheber/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mosheber", "id": 22236370, "login": "mosheber", "node_id": "MDQ6VXNlcjIyMjM2Mzcw", "organizations_url": "https://api.github.com/users/mosheber/orgs", "received_events_url": "https://api.github.com/users/mosheber/received_events", "repos_url": "https://api.github.com/users/mosheber/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mosheber/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mosheber/subscriptions", "type": "User", "url": "https://api.github.com/users/mosheber" }
https://api.github.com/repos/huggingface/datasets/issues/6647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6647/timeline
open
false
6,647
null
null
null
true
2,123,134,128
https://api.github.com/repos/huggingface/datasets/issues/6646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6646/events
[]
null
2024-02-09T17:43:32Z
[]
https://github.com/huggingface/datasets/pull/6646
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6646). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005598 / 0.011353 (-0.005755) | 0.003640 / 0.011008 (-0.007369) | 0.064557 / 0.038508 (0.026049) | 0.029645 / 0.023109 (0.006536) | 0.243695 / 0.275898 (-0.032203) | 0.261252 / 0.323480 (-0.062228) | 0.004067 / 0.007986 (-0.003919) | 0.002883 / 0.004328 (-0.001446) | 0.049192 / 0.004250 (0.044942) | 0.045299 / 0.037052 (0.008246) | 0.273207 / 0.258489 (0.014718) | 0.288668 / 0.293841 (-0.005173) | 0.028114 / 0.128546 (-0.100432) | 0.010597 / 0.075646 (-0.065049) | 0.215345 / 0.419271 (-0.203927) | 0.036119 / 0.043533 (-0.007414) | 0.243718 / 0.255139 (-0.011421) | 0.266657 / 0.283200 (-0.016543) | 0.018176 / 0.141683 (-0.123507) | 1.127926 / 1.452155 (-0.324229) | 1.168066 / 1.492716 (-0.324650) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096001 / 0.018006 (0.077994) | 0.304317 / 0.000490 (0.303828) | 0.000209 / 0.000200 (0.000009) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018241 / 0.037411 (-0.019170) | 0.061505 / 0.014526 (0.046979) | 0.072456 / 0.176557 (-0.104101) | 0.118315 / 0.737135 (-0.618821) | 0.075154 / 0.296338 (-0.221184) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278748 / 0.215209 (0.063538) | 2.729923 / 2.077655 (0.652268) | 1.416835 / 1.504120 (-0.087285) | 1.294016 / 1.541195 (-0.247179) | 1.323249 / 1.468490 (-0.145241) | 0.575389 / 4.584777 (-4.009388) | 2.404923 / 3.745712 (-1.340789) | 2.769233 / 5.269862 (-2.500629) | 1.742340 / 4.565676 (-2.823336) | 0.062664 / 0.424275 (-0.361611) | 0.004951 / 0.007607 (-0.002656) | 0.335024 / 0.226044 (0.108979) | 3.291446 / 2.268929 (1.022518) | 1.797095 / 55.444624 (-53.647530) | 1.532963 / 6.876477 (-5.343513) | 1.529315 / 2.142072 (-0.612758) | 0.654922 / 4.805227 (-4.150305) | 0.118772 / 6.500664 (-6.381892) | 0.042034 / 0.075469 (-0.033435) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983646 / 1.841788 (-0.858141) | 11.518625 / 8.074308 (3.444317) | 9.538781 / 10.191392 (-0.652611) | 0.140300 / 0.680424 (-0.540124) | 0.013966 / 0.534201 (-0.520235) | 0.287071 / 0.579283 (-0.292212) | 0.270201 / 0.434364 (-0.164163) | 0.323294 / 0.540337 (-0.217044) | 0.418130 / 1.386936 (-0.968806) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005508 / 0.011353 (-0.005844) | 0.003714 / 0.011008 (-0.007294) | 0.050031 / 0.038508 (0.011523) | 0.031866 / 0.023109 (0.008756) | 0.272248 / 0.275898 (-0.003650) | 0.295105 / 0.323480 (-0.028375) | 0.005179 / 0.007986 (-0.002807) | 0.002820 / 0.004328 (-0.001508) | 0.048896 / 0.004250 (0.044646) | 0.045975 / 0.037052 (0.008922) | 0.287662 / 0.258489 (0.029173) | 0.321139 / 0.293841 (0.027298) | 0.049242 / 0.128546 (-0.079304) | 0.010732 / 0.075646 (-0.064914) | 0.057943 / 0.419271 (-0.361328) | 0.033527 / 0.043533 (-0.010006) | 0.271746 / 0.255139 (0.016607) | 0.291404 / 0.283200 (0.008204) | 0.019351 / 0.141683 (-0.122332) | 1.157221 / 1.452155 (-0.294934) | 1.215757 / 1.492716 (-0.276959) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.096950 / 0.018006 (0.078944) | 0.312002 / 0.000490 (0.311512) | 0.000223 / 0.000200 (0.000023) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022288 / 0.037411 (-0.015123) | 0.075282 / 0.014526 (0.060756) | 0.087445 / 0.176557 (-0.089112) | 0.125617 / 0.737135 (-0.611519) | 0.088878 / 0.296338 (-0.207460) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291961 / 0.215209 (0.076752) | 2.881445 / 2.077655 (0.803790) | 1.586128 / 1.504120 (0.082008) | 1.458636 / 1.541195 (-0.082558) | 1.487001 / 1.468490 (0.018511) | 0.575466 / 4.584777 (-4.009311) | 2.454941 / 3.745712 (-1.290771) | 2.878077 / 5.269862 (-2.391785) | 1.787215 / 4.565676 (-2.778462) | 0.064010 / 0.424275 (-0.360265) | 0.005092 / 0.007607 (-0.002516) | 0.360500 / 0.226044 (0.134455) | 3.465574 / 2.268929 (1.196646) | 1.957516 / 55.444624 (-53.487108) | 1.666282 / 6.876477 (-5.210195) | 1.690070 / 2.142072 (-0.452002) | 0.661323 / 4.805227 (-4.143905) | 0.117824 / 6.500664 (-6.382840) | 0.042286 / 0.075469 (-0.033183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026517 / 1.841788 (-0.815270) | 12.083347 / 8.074308 (4.009039) | 10.269319 / 10.191392 (0.077927) | 0.139253 / 0.680424 (-0.541171) | 0.016258 / 0.534201 (-0.517943) | 0.290583 / 0.579283 (-0.288700) | 0.284338 / 0.434364 (-0.150026) | 0.335865 / 0.540337 (-0.204473) | 0.416600 / 1.386936 (-0.970336) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ba3cfad91e9366cda0ba203700fc745d8bcd1f17 \"CML watermark\")\n", "Thanks, I was needing this example today <3 " ]
Better multi-gpu example
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6646/reactions" }
PR_kwDODunzps5mRIma
{ "diff_url": "https://github.com/huggingface/datasets/pull/6646.diff", "html_url": "https://github.com/huggingface/datasets/pull/6646", "merged_at": "2024-02-07T14:59:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/6646.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6646" }
2024-02-07T14:15:01Z
https://api.github.com/repos/huggingface/datasets/issues/6646/comments
Use Qwen1.5-0.5B-Chat as an easy example for multi-GPU. The previous example used a model for translation, and the way it was set up was not really the right way to use the model.
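The rough shape of the reworked example is sketched below; it reuses the `model`/`tokenizer` loaded in the earlier sketch, assumes a `"text"` column, and elides details such as pad-token setup:

```python
import torch
from multiprocess import set_start_method  # datasets' map spawns workers via multiprocess

num_gpus = torch.cuda.device_count()

def gpu_infer(batch, rank):
    device = f"cuda:{rank % num_gpus}"  # pin each worker process to its own GPU
    model.to(device)
    inputs = tokenizer(batch["text"], return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=20)
    batch["generated"] = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    return batch

if __name__ == "__main__":
    set_start_method("spawn")  # required to use CUDA in subprocesses
    ds = ds.map(gpu_infer, batched=True, with_rank=True, num_proc=num_gpus)
```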
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6646/timeline
closed
false
6,646
null
2024-02-07T14:59:11Z
null
true
2,122,956,818
https://api.github.com/repos/huggingface/datasets/issues/6645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6645/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-02-29T15:12:19Z
[]
https://github.com/huggingface/datasets/issues/6645
MEMBER
completed
null
null
[ "I'd be very grateful. This upper bound banished me straight into dependency hell today. :(" ]
Support fsspec 2024.2
{ "+1": 8, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 8, "url": "https://api.github.com/repos/huggingface/datasets/issues/6645/reactions" }
I_kwDODunzps5-icAS
null
2024-02-07T12:45:29Z
https://api.github.com/repos/huggingface/datasets/issues/6645/comments
Support fsspec 2024.2. First, we should address: - #6644
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6645/timeline
closed
false
6,645
null
2024-02-29T15:12:19Z
null
false
2,122,955,282
https://api.github.com/repos/huggingface/datasets/issues/6644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6644/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-02-29T15:12:18Z
[]
https://github.com/huggingface/datasets/issues/6644
MEMBER
completed
null
null
[ "The pinned fsspec version range dependency conflict has been affecting several of our users in https://github.com/iterative/dvc. I've opened an initial PR that I think should resolve the glob behavior changes with using datasets + the latest fsspec release.\r\n\r\nPlease let us know if there's any other fsspec related behavior in datasets that needs to be updated to get 2024.2 supported, we'd like to get this conflict resolved as quickly as possible and we're willing to contribute any additional work that's required here.\r\n\r\ncc @dberenbaum" ]
Support fsspec 2023.12
{ "+1": 6, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/6644/reactions" }
I_kwDODunzps5-iboS
null
2024-02-07T12:44:39Z
https://api.github.com/repos/huggingface/datasets/issues/6644/comments
Support fsspec 2023.12 by handling previous and new glob behavior.
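One way to handle both behaviors is to branch on the installed fsspec version; a hedged sketch — the concrete pattern rewrites are not specified here, so the adjustment step is a placeholder:

```python
from packaging import version
import fsspec

# fsspec 2023.12 changed glob matching rules, so patterns may need rewriting
# depending on which version is installed.
GLOB_CHANGED = version.parse(fsspec.__version__) >= version.parse("2023.12.0")

def glob_files(fs: fsspec.AbstractFileSystem, pattern: str) -> list:
    if not GLOB_CHANGED:
        # hypothetical: rewrite `pattern` to the pre-2023.12 form here
        pass
    return fs.glob(pattern)
```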
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6644/timeline
closed
false
6,644
null
2024-02-29T15:12:18Z
null
false
2,121,239,039
https://api.github.com/repos/huggingface/datasets/issues/6643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6643/events
[]
null
2024-02-15T10:29:32Z
[]
https://github.com/huggingface/datasets/issues/6643
NONE
null
null
null
[ "Hi ! make sure your query embeddings are numpy arrays, not torch tensors ;)", "Hi Quentin, not sure how that solves the problem number 1. I am trying to pass on a dataset with a faiss gpu for training to the standard trainer but getting this serialisation error. What is a workaround this? I do not want to remove the faiss index, as I would want to use it to create batches of retrieved samples from the dataset. \r\nThanks in advance for your help!", "Issue number one seems to be an issue with FAISS indexes not being compatible with copy.deepcopy.\r\n\r\nMaybe you try to not remove the columns, e.g. by passing `remove_unused_columns=False`" ]
Faiss GPU index cannot be serialised when passed to trainer
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6643/reactions" }
I_kwDODunzps5-b4n_
null
2024-02-06T16:41:00Z
https://api.github.com/repos/huggingface/datasets/issues/6643/comments
### Describe the bug I am working on a retrieval project and have encountered two issues in the Hugging Face FAISS integration: 1. I am trying to pass in a dataset with a faiss index to the Huggingface trainer. The code works for a CPU faiss index, but not for a GPU one, which raises this error: ``` File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1543, in train return inner_training_loop( File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in _inner_training_loop train_dataloader = self.get_train_dataloader() File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 831, in get_train_dataloader train_dataset = self._remove_unused_columns(train_dataset, description="training") File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/transformers/trainer.py", line 725, in _remove_unused_columns return dataset.remove_columns(ignored_columns) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 592, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/fingerprint.py", line 481, in wrapper out = func(dataset, *args, **kwargs) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2146, in remove_columns dataset = copy.deepcopy(self) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct state = deepcopy(state, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy y = copier(x, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy y = copier(x, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 172, in deepcopy y = _reconstruct(x, memo, *rv) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 271, in _reconstruct state = deepcopy(state, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 146, in deepcopy y = copier(x, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 231, in _deepcopy_dict y[deepcopy(key, memo)] = deepcopy(value, memo) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/copy.py", line 161, in deepcopy rv = reductor(4) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 556, in index_getstate return {"this": serialize_index(self).tobytes()} File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/__init__.py", line 1607, in serialize_index write_index(index, writer) File
"/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/faiss/swigfaiss.py", line 9843, in write_index return _swigfaiss.write_index(*args) RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /project/faiss/faiss/impl/index_write.cpp:590: don't know how to serialize this type of index ``` The index was created with the add_faiss_index method ``` train_dataset.add_faiss_index( column='embeddings', index_name='embeddings', string_factory=faiss_index_string, train_size=config.faiss_train_size, device=0, # Use -1 for CPU, or specify GPU device ID faiss_verbose=True ) ``` 2. Although faiss is written to support searching on the GPU [https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU](https://github.com/facebookresearch/faiss/wiki/Faiss-on-the-GPU), I am getting an error when trying to use the Hugging Face code to do the search on the GPU. This seems to be caused by this line https://github.com/huggingface/datasets/blob/f9975f636542df7f95c27065ea93147440d690b7/src/datasets/search.py#L376 producing the error ``` total_scores, total_examples = self.dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 773, in get_nearest_examples_batch total_scores, total_indices = self.search_batch(index_name, queries, k, **kwargs) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 727, in search_batch return self._indexes[index_name].search_batch(queries, k, **kwargs) File "/users/rubman/.conda/envs/protein_npt_env/lib/python3.10/site-packages/datasets/search.py", line 376, in search_batch if not queries.flags.c_contiguous: AttributeError: 'Tensor' object has no attribute 'flags' ``` ### Steps to reproduce the bug ``` train_dataset.add_faiss_index( column='embeddings', index_name='embeddings', string_factory=faiss_index_string, train_size=config.faiss_train_size, device=0, # Use -1 for CPU, or specify GPU device ID faiss_verbose=True ) Trainer( model=model, args=args, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=data_collator, tokenizer=tokenizer ) train_dataset.get_nearest_examples_batch('embeddings', embeddings, k=self.k) ``` ### Expected behavior I would expect the FAISS database code to be GPU compatible ### Environment info huggingface Version: 2.16.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/56388976?v=4", "events_url": "https://api.github.com/users/rubenweitzman/events{/privacy}", "followers_url": "https://api.github.com/users/rubenweitzman/followers", "following_url": "https://api.github.com/users/rubenweitzman/following{/other_user}", "gists_url": "https://api.github.com/users/rubenweitzman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rubenweitzman", "id": 56388976, "login": "rubenweitzman", "node_id": "MDQ6VXNlcjU2Mzg4OTc2", "organizations_url": "https://api.github.com/users/rubenweitzman/orgs", "received_events_url": "https://api.github.com/users/rubenweitzman/received_events", "repos_url": "https://api.github.com/users/rubenweitzman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rubenweitzman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rubenweitzman/subscriptions", "type": "User", "url": "https://api.github.com/users/rubenweitzman" }
https://api.github.com/repos/huggingface/datasets/issues/6643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6643/timeline
open
false
6,643
null
null
null
false
2,119,085,766
https://api.github.com/repos/huggingface/datasets/issues/6642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6642/events
[]
null
2024-02-06T09:50:19Z
[]
https://github.com/huggingface/datasets/issues/6642
NONE
completed
null
null
[ "I see now, that I have to use `load_from_disk`, in order to load dataset properly, not `load_dataset`. Why is this behavior split? Why do we need both, `load_dataset` and `load_from_disk`?\r\n\r\nUnless answered, I believe this might be helpful for other hf datasets newbies.\r\n\r\nAnyway, made a `load_dataset` compatible dataset in a following way. I created a directory, and just copied jsonl there as `train.jsonl/test.jsonl`.\r\n```python\r\noutput_folder = os.path.join(args.output_folder, f\"{task_meta_type}_{task_type}\")\r\nos.makedirs(output_folder, exist_ok=True)\r\nfile = f\"{task_meta_type}_{task_type}_train.jsonl\"\r\nshutil.copy(os.path.join(input_folder, file),\r\n os.path.join(output_folder, \"train.jsonl\"))\r\n# now test\r\nfile = f\"{task_meta_type}_{task_type}_test.jsonl\"\r\nshutil.copy(os.path.join(input_folder, file),\r\n os.path.join(output_folder, \"test.jsonl\"))\r\n```\r\n", "Hi @MFajcik, \r\n\r\nYou can find information about save_to_disk/load_from_disk in our docs:\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/process#save\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.save_to_disk\r\n- https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.load_from_disk" ]
Dataset object is loaded differently than it was saved.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6642/reactions" }
I_kwDODunzps5-Tq7G
null
2024-02-05T17:28:57Z
https://api.github.com/repos/huggingface/datasets/issues/6642/comments
### Describe the bug A differently sized object is loaded than was saved. ### Steps to reproduce the bug Hi, I save the dataset in the following way: ``` dataset = load_dataset("json", data_files={ "train": os.path.join(input_folder, f"{task_meta_type}_{task_type}_train.jsonl"), "test": os.path.join(input_folder, f"{task_meta_type}_{task_type}_test.jsonl")}) print(os.path.join(output_folder, f"{task_meta_type}_{task_type}")) print(f"Length of train dataset: {len(dataset['train'])}") print(f"Length of test dataset: {len(dataset['test'])}") dataset.save_to_disk(os.path.join(output_folder, f"{task_meta_type}_{task_type}")) ``` This yields the output ``` .data/hf_dataset/propaganda_zanr Length of train dataset: 7642 Length of test dataset: 1000 ``` Everything looks fine. Then I load the dataset ```python from datasets import load_dataset dataset_path = ".data/hf_dataset/propaganda_zanr" dataset = load_dataset(dataset_path) print(f"Length of train dataset: {len(dataset['train'])}") print(f"Length of test dataset: {len(dataset['test'])}") ``` This prints ``` Generating train split: 1 examples [00:00, 72.10 examples/s] Generating test split: 1 examples [00:00, 100.69 examples/s] Length of train dataset: 1 Length of test dataset: 1 ``` I don't understand :( ### Expected behavior The same object is loaded. ### Environment info datasets==2.16.1
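As the comments above note, a directory written by `save_to_disk` must be read back with `load_from_disk`, not `load_dataset`. A minimal sketch of the matching load call, reusing the path from the report:

```python
from datasets import load_from_disk

# load_from_disk reads the Arrow files written by save_to_disk directly;
# load_dataset would instead re-run split generation on the directory.
dataset_path = ".data/hf_dataset/propaganda_zanr"
dataset = load_from_disk(dataset_path)
print(f"Length of train dataset: {len(dataset['train'])}")  # 7642
print(f"Length of test dataset: {len(dataset['test'])}")    # 1000
```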
{ "avatar_url": "https://avatars.githubusercontent.com/u/31218150?v=4", "events_url": "https://api.github.com/users/MFajcik/events{/privacy}", "followers_url": "https://api.github.com/users/MFajcik/followers", "following_url": "https://api.github.com/users/MFajcik/following{/other_user}", "gists_url": "https://api.github.com/users/MFajcik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MFajcik", "id": 31218150, "login": "MFajcik", "node_id": "MDQ6VXNlcjMxMjE4MTUw", "organizations_url": "https://api.github.com/users/MFajcik/orgs", "received_events_url": "https://api.github.com/users/MFajcik/received_events", "repos_url": "https://api.github.com/users/MFajcik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MFajcik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MFajcik/subscriptions", "type": "User", "url": "https://api.github.com/users/MFajcik" }
https://api.github.com/repos/huggingface/datasets/issues/6642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6642/timeline
closed
false
6,642
null
2024-02-06T09:50:19Z
null
false
2,116,963,132
https://api.github.com/repos/huggingface/datasets/issues/6641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6641/events
[]
null
2024-02-06T09:26:07Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6641
NONE
not_planned
null
null
[ "Hi @Hughhuh. \r\n\r\nI have formatted the issue because it was not easily readable. Additionally, the environment info is incomplete: it seems you did not run the proposed CLI command `datasets-cli env` and essential information is missing: version of `datasets`, version of `pyarrow`,...\r\n\r\nWith the information you provided, it seems an issue with the specific \"samsum\" dataset. I'm transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/samsum/discussions/5" ]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6641/reactions" }
I_kwDODunzps5-Lks8
null
2024-02-04T08:49:31Z
https://api.github.com/repos/huggingface/datasets/issues/6641/comments
### Describe the bug unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte ### Steps to reproduce the bug ``` import sys sys.getdefaultencoding() 'utf-8' from datasets import load_dataset print(f"Train dataset size: {len(dataset['train'])}") print(f"Test dataset size: {len(dataset['test'])}") Resolving data files: 100% 159/159 [00:00<00:00, 9909.28it/s] Using custom data configuration samsum-0b1209637541c9e6 Downloading and preparing dataset json/samsum to C:/Users/Administrator/.cache/huggingface/datasets/json/samsum-0b1209637541c9e6/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51... Downloading data files: 100% 3/3 [00:00<00:00, 119.99it/s] Extracting data files: 100% 3/3 [00:00<00:00, 9.54it/s] Generating train split: 88392/0 [00:15<00:00, 86848.17 examples/s] Generating test split: 0/0 [00:00<?, ? examples/s] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:132, in Json._generate_tables(self, files) 131 try: --> 132 pa_table = paj.read_json( 133 io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size) 134 ) 135 break File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\_json.pyx:290, in pyarrow._json.read_json() File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status() File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pyarrow\error.pxi:100, in pyarrow.lib.check_status() ArrowInvalid: JSON parse error: Invalid value. in row 0 During handling of the above exception, another exception occurred: UnicodeDecodeError Traceback (most recent call last) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1819, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1818 _time = time.time() -> 1819 for _, table in generator: 1820 if max_shard_size is not None and writer._num_bytes > max_shard_size: File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\packaged_modules\json\json.py:153, in Json._generate_tables(self, files) 152 with open(file, encoding="utf-8") as f: --> 153 dataset = json.load(f) 154 except json.JSONDecodeError: File ~\AppData\Local\Programs\Python\Python310\lib\json\__init__.py:293, in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 276 """Deserialize ``fp`` (a ``.read()``-supporting file-like object containing 277 a JSON document) to a Python object. 278 (...) 291 kwarg; otherwise ``JSONDecoder`` is used. 
292 """ --> 293 return loads(fp.read(), 294 cls=cls, object_hook=object_hook, 295 parse_float=parse_float, parse_int=parse_int, 296 parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) File ~\AppData\Local\Programs\Python\Python310\lib\codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final) 321 data = self.buffer + input --> 322 (result, consumed) = self._buffer_decode(data, self.errors, final) 323 # keep undecoded input until the next call UnicodeDecodeError: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Cell In[81], line 5 1 from datasets import load_dataset 3 # Load dataset from the hub 4 #dataset = load_dataset("json",data_files="C:/Users/Administrator/Desktop/samsum/samsum/data/corpus/train.json",field="data") ----> 5 dataset = load_dataset('json',"samsum") 6 #dataset = load_dataset("samsum") 7 print(f"Train dataset size: {len(dataset['train'])}") File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py:1758, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1755 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1757 # Download and prepare data -> 1758 builder_instance.download_and_prepare( 1759 download_config=download_config, 1760 download_mode=download_mode, 1761 ignore_verifications=ignore_verifications, 1762 try_from_hf_gcs=try_from_hf_gcs, 1763 num_proc=num_proc, 1764 ) 1766 # Build dataset for splits 1767 keep_in_memory = ( 1768 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1769 ) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:860, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 858 if num_proc is not None: 859 prepare_split_kwargs["num_proc"] = num_proc --> 860 self._download_and_prepare( 861 dl_manager=dl_manager, 862 verify_infos=verify_infos, 863 **prepare_split_kwargs, 864 **download_and_prepare_kwargs, 865 ) 866 # Sync info 867 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:953, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 949 split_dict.add(split_generator.split_info) 951 try: 952 # Prepare split will record examples associated to the split --> 953 self._prepare_split(split_generator, **prepare_split_kwargs) 954 except OSError as e: 955 raise OSError( 956 "Cannot find data file. 
" 957 + (self.manual_download_instructions or "") 958 + "\nOriginal error:\n" 959 + str(e) 960 ) from None File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1708, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size) 1706 gen_kwargs = split_generator.gen_kwargs 1707 job_id = 0 -> 1708 for job_id, done, content in self._prepare_split_single( 1709 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1710 ): 1711 if done: 1712 result = content File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\builder.py:1851, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1849 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1850 e = e.__context__ -> 1851 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1853 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior can't load dataset ### Environment info dataset:samsum system :win10 gpu:m40 24G
{ "avatar_url": "https://avatars.githubusercontent.com/u/109789057?v=4", "events_url": "https://api.github.com/users/Hughhuh/events{/privacy}", "followers_url": "https://api.github.com/users/Hughhuh/followers", "following_url": "https://api.github.com/users/Hughhuh/following{/other_user}", "gists_url": "https://api.github.com/users/Hughhuh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hughhuh", "id": 109789057, "login": "Hughhuh", "node_id": "U_kgDOBos_gQ", "organizations_url": "https://api.github.com/users/Hughhuh/orgs", "received_events_url": "https://api.github.com/users/Hughhuh/received_events", "repos_url": "https://api.github.com/users/Hughhuh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hughhuh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hughhuh/subscriptions", "type": "User", "url": "https://api.github.com/users/Hughhuh" }
https://api.github.com/repos/huggingface/datasets/issues/6641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6641/timeline
closed
false
6,641
null
2024-02-06T09:11:45Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,115,864,531
https://api.github.com/repos/huggingface/datasets/issues/6640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6640/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-02-02T21:54:51Z
[]
https://github.com/huggingface/datasets/issues/6640
NONE
null
null
null
[]
Sign Language Support
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6640/reactions" }
I_kwDODunzps5-HYfT
null
2024-02-02T21:54:51Z
https://api.github.com/repos/huggingface/datasets/issues/6640/comments
### Feature request Currently, there are only several Sign Language labels; I would like to propose adding all the signed languages described in this ISO standard as new labels: https://www.evertype.com/standards/iso639/sign-language.html ### Motivation Datasets currently only have labels for a few signed languages, but there are many more signed languages in the world. Furthermore, some signed languages that have a lot of online data cannot be found for this reason. For instance, there is no German Sign Language label on Hugging Face datasets, even though many readily available German Sign Language datasets exist and are used frequently in Sign Language Processing papers and models. ### Your contribution I can submit a PR for this as well, adding the ISO codes and languages to the labels in datasets.
{ "avatar_url": "https://avatars.githubusercontent.com/u/6684795?v=4", "events_url": "https://api.github.com/users/Merterm/events{/privacy}", "followers_url": "https://api.github.com/users/Merterm/followers", "following_url": "https://api.github.com/users/Merterm/following{/other_user}", "gists_url": "https://api.github.com/users/Merterm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Merterm", "id": 6684795, "login": "Merterm", "node_id": "MDQ6VXNlcjY2ODQ3OTU=", "organizations_url": "https://api.github.com/users/Merterm/orgs", "received_events_url": "https://api.github.com/users/Merterm/received_events", "repos_url": "https://api.github.com/users/Merterm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Merterm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Merterm/subscriptions", "type": "User", "url": "https://api.github.com/users/Merterm" }
https://api.github.com/repos/huggingface/datasets/issues/6640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6640/timeline
open
false
6,640
null
null
null
false
2,114,620,200
https://api.github.com/repos/huggingface/datasets/issues/6639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6639/events
[]
null
2024-02-06T16:54:22Z
[]
https://github.com/huggingface/datasets/pull/6639
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
Run download_and_prepare if missing splits
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6639/reactions" }
PR_kwDODunzps5l0KPG
{ "diff_url": "https://github.com/huggingface/datasets/pull/6639.diff", "html_url": "https://github.com/huggingface/datasets/pull/6639", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6639.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6639" }
2024-02-02T10:36:49Z
https://api.github.com/repos/huggingface/datasets/issues/6639/comments
A first step towards https://github.com/huggingface/datasets/issues/6529
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6639/timeline
open
false
6,639
null
null
null
true
2,113,329,257
https://api.github.com/repos/huggingface/datasets/issues/6638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6638/events
[]
null
2024-02-01T20:07:29Z
[]
https://github.com/huggingface/datasets/issues/6638
NONE
completed
null
null
[ "Looks like it works with latest datasets repository\r\n```\r\n- `datasets` version: 2.16.2.dev0\r\n- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 2.0.1\r\n- `fsspec` version: 2023.10.0\r\n```\r\n\r\nCould you explain which is the minimum version that fixes this?\r\nEdit: Looks like that's 2.16.0, will close out issue" ]
Cannot download wmt16 dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6638/reactions" }
I_kwDODunzps599thp
null
2024-02-01T19:41:42Z
https://api.github.com/repos/huggingface/datasets/issues/6638/comments
### Describe the bug As of this morning (PST, 2/1/2024), the wmt16 dataset appears to be missing from OPUS; could you suggest an alternative? ``` Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last): File "test.py", line 2, in <module> raw_datasets = load_dataset("wmt16","ro-en",split="train") File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 2153, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1717, in _download_and_prepare super()._download_and_prepare( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 1027, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/746749a11d25c02058042da7502d973ff410e73457f3d305fc1177dc0e8c4227/wmt_utils.py", line 754, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 565, in download_and_extract return self.extract(self.download(url_or_urls)) File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 428, in download downloaded_path_or_paths = map_nested( File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 464, in map_nested mapped = [ File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 465, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 384, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 384, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] File "/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py", line 367, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py", line 454, in _download return cached_path(url_or_filename, download_config=download_config) File "/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py", line 182, in cached_path output_path = get_from_cache( File "/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py", line 596, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz ``` ### Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("wmt16","ro-en",split="train") ``` ### Expected behavior I expect the dataset to be downloaded, or at least a clean exit with an error explaining that the dataset is missing and a suggestion for next steps. ### Environment info - `datasets` version: 2.14.7 - Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.17.3 - PyArrow version: 15.0.0 - Pandas version: 2.0.1
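Per the follow-up comment above, the stale OPUS URL no longer fails on `datasets` 2.16.0 and later, so upgrading and retrying should work (a sketch, assuming the fix covers this config):

```python
# Requires upgrading first, e.g.: pip install -U "datasets>=2.16.0"
from datasets import load_dataset

raw_datasets = load_dataset("wmt16", "ro-en", split="train")
print(raw_datasets)
```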
{ "avatar_url": "https://avatars.githubusercontent.com/u/81709031?v=4", "events_url": "https://api.github.com/users/vidyasiv/events{/privacy}", "followers_url": "https://api.github.com/users/vidyasiv/followers", "following_url": "https://api.github.com/users/vidyasiv/following{/other_user}", "gists_url": "https://api.github.com/users/vidyasiv/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vidyasiv", "id": 81709031, "login": "vidyasiv", "node_id": "MDQ6VXNlcjgxNzA5MDMx", "organizations_url": "https://api.github.com/users/vidyasiv/orgs", "received_events_url": "https://api.github.com/users/vidyasiv/received_events", "repos_url": "https://api.github.com/users/vidyasiv/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vidyasiv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vidyasiv/subscriptions", "type": "User", "url": "https://api.github.com/users/vidyasiv" }
https://api.github.com/repos/huggingface/datasets/issues/6638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6638/timeline
closed
false
6,638
null
2024-02-01T20:07:29Z
null
false
2,113,025,975
https://api.github.com/repos/huggingface/datasets/issues/6637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6637/events
[]
null
2024-02-05T10:43:47Z
[]
https://github.com/huggingface/datasets/issues/6637
NONE
null
null
null
[ "The \"torch\" formatting is usually fast because we do zero-copy conversion from the Arrow data on your disk to Torch tensors. However IterableDataset shuffling seems to do data copies that slow down the pipeline, and it shuffles python objects instead of Arrow data.\r\n\r\nTo fix this we need to implement `BufferShuffledExamplesIterable.iter_arrow()` (same as regular `BufferShuffledExamplesIterable.__iter__()` but yields Arrow tables)\r\n\r\nhttps://github.com/huggingface/datasets/blob/b7d854b7fd3e9a330e21b76ee8421d4a7ebb4a7a/src/datasets/iterable_dataset.py#L968-L974\r\n" ]
'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 3, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/6637/reactions" }
I_kwDODunzps598je3
null
2024-02-01T17:16:54Z
https://api.github.com/repos/huggingface/datasets/issues/6637/comments
### Describe the bug If you: 1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset 2. Set the output format to torch tensors with .with_format('torch') Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch formatting. ### Steps to reproduce the bug ```python import datasets import torch from tqdm import tqdm rand_a = torch.randn(3,224,224) rand_b = torch.randn(3,224,224) a = torch.stack([rand_a] * 1000) b = torch.stack([rand_b] * 1000) features = datasets.Features({"tensor": datasets.Array3D(shape=(3,224,224), dtype="float32")}) ds_a = datasets.Dataset.from_dict({"tensor": a}, features=features).to_iterable_dataset() ds_b = datasets.Dataset.from_dict({"tensor": b}, features=features).to_iterable_dataset() # Iterating through either dataset with torch formatting is really fast (2000it/s on my machine) for example in tqdm(ds_a.with_format('torch')): pass # Iterating through either dataset shuffled is also pretty fast (100it/s on my machine) for example in tqdm(ds_a.shuffle()): pass # Iterating through this interleaved dataset is pretty fast (200it/s on my machine) ds_fast = datasets.interleave_datasets([ds_a, ds_b]) for example in tqdm(ds_fast): pass # Iterating through either dataset with torch formatting *after shuffling* is really slow... (<2it/s on my machine) for example in tqdm(ds_a.shuffle().with_format('torch')): pass # Iterating through this torch formatted interleaved dataset is also really slow (<2it/s on my machine)... ds_slow = datasets.interleave_datasets([ds_a, ds_b]).with_format('torch') for example in tqdm(ds_slow): pass # Even doing this is way faster!! (70it/s on my machine) for example in tqdm(ds_fast): test = torch.tensor(example['tensor']) ``` ### Expected behavior Applying torch formatting to the interleaved dataset shouldn't increase the time taken to iterate through the dataset by very much, since even explicitly converting every example is over 70x faster than calling .with_format('torch'). ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.38 - Python version: 3.11.6 - `huggingface_hub` version: 0.20.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
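Until `BufferShuffledExamplesIterable.iter_arrow()` is implemented (see the comment above), a workaround sketch based on the report's own fastest timing is to keep the shuffled or interleaved stream un-formatted and convert each example manually:

```python
import torch
from tqdm import tqdm

# ds_fast is the interleaved-but-unformatted dataset from the report's
# snippet; manual conversion was measured there at ~70 it/s, versus <2 it/s
# when applying .with_format('torch') to the interleaved stream.
for example in tqdm(ds_fast):
    tensor = torch.tensor(example["tensor"])
```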
{ "avatar_url": "https://avatars.githubusercontent.com/u/22883190?v=4", "events_url": "https://api.github.com/users/tobycrisford/events{/privacy}", "followers_url": "https://api.github.com/users/tobycrisford/followers", "following_url": "https://api.github.com/users/tobycrisford/following{/other_user}", "gists_url": "https://api.github.com/users/tobycrisford/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tobycrisford", "id": 22883190, "login": "tobycrisford", "node_id": "MDQ6VXNlcjIyODgzMTkw", "organizations_url": "https://api.github.com/users/tobycrisford/orgs", "received_events_url": "https://api.github.com/users/tobycrisford/received_events", "repos_url": "https://api.github.com/users/tobycrisford/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tobycrisford/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tobycrisford/subscriptions", "type": "User", "url": "https://api.github.com/users/tobycrisford" }
https://api.github.com/repos/huggingface/datasets/issues/6637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6637/timeline
open
false
6,637
null
null
null
false
2,110,781,097
https://api.github.com/repos/huggingface/datasets/issues/6636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6636/events
[]
null
2024-02-07T19:39:00Z
[]
https://github.com/huggingface/datasets/pull/6636
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6636). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks @mariosasko, I made the changes. However, I did some tests with `map` and I still saw that it took ~3.5 minutes per batch on 6000 features when using `dataset.map(lambda x: x, batched=True)`. From the profile, the culprits were mainly with `ArrowWriter.write_batch` and `ArrowWriter._build_writer`. The slow down from `_build_writer` is due to updating existing features with the inferred ones. I don't think this can be optimized any further, but fortunately, I can avoid this by setting the `features` in `map`. On the other hand, `write_batch` selects cols based on intersection and difference between schema names and example keys using two for loops. The same exists in `ArrowWriter.write_examples_on_file`. Optimizing the column selection using set operations effectively brings it from 3.5 minutes per batch down to 6 seconds per batch. Can we add these changes along with this PR?\r\n\r\nEdit: Ah just realized you can avoid the issue with inferring features altogether when you set the format to arrow (or pandas).", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004990 / 0.011353 (-0.006363) | 0.003138 / 0.011008 (-0.007870) | 0.062368 / 0.038508 (0.023860) | 0.028634 / 0.023109 (0.005524) | 0.241297 / 0.275898 (-0.034601) | 0.264433 / 0.323480 (-0.059047) | 0.003133 / 0.007986 (-0.004852) | 0.003444 / 0.004328 (-0.000885) | 0.048522 / 0.004250 (0.044271) | 0.043700 / 0.037052 (0.006648) | 0.257054 / 0.258489 (-0.001435) | 0.277551 / 0.293841 (-0.016290) | 0.027132 / 0.128546 (-0.101414) | 0.010395 / 0.075646 (-0.065251) | 0.208003 / 0.419271 (-0.211269) | 0.035814 / 0.043533 (-0.007719) | 0.250098 / 0.255139 (-0.005041) | 0.266726 / 0.283200 (-0.016474) | 0.018424 / 0.141683 (-0.123259) | 1.129242 / 1.452155 (-0.322912) | 1.167674 / 1.492716 (-0.325042) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091808 / 0.018006 (0.073802) | 0.298726 / 0.000490 (0.298236) | 0.000219 / 
0.000200 (0.000019) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019119 / 0.037411 (-0.018292) | 0.061969 / 0.014526 (0.047443) | 0.073392 / 0.176557 (-0.103165) | 0.119460 / 0.737135 (-0.617675) | 0.074072 / 0.296338 (-0.222266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281435 / 0.215209 (0.066226) | 2.702094 / 2.077655 (0.624439) | 1.411541 / 1.504120 (-0.092579) | 1.284084 / 1.541195 (-0.257111) | 1.302638 / 1.468490 (-0.165852) | 0.562420 / 4.584777 (-4.022357) | 2.364890 / 3.745712 (-1.380822) | 2.744033 / 5.269862 (-2.525828) | 1.699000 / 4.565676 (-2.866677) | 0.062315 / 0.424275 (-0.361961) | 0.004982 / 0.007607 (-0.002625) | 0.334385 / 0.226044 (0.108341) | 3.203268 / 2.268929 (0.934339) | 1.766998 / 55.444624 (-53.677627) | 1.497164 / 6.876477 (-5.379313) | 1.509996 / 2.142072 (-0.632077) | 0.633014 / 4.805227 (-4.172213) | 0.115317 / 6.500664 (-6.385347) | 0.041120 / 0.075469 (-0.034349) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965877 / 1.841788 (-0.875911) | 11.219909 / 8.074308 (3.145601) | 9.333822 / 10.191392 (-0.857570) | 0.136482 / 0.680424 (-0.543941) | 0.013632 / 0.534201 (-0.520569) | 0.287251 / 0.579283 (-0.292032) | 0.262786 / 0.434364 (-0.171578) | 0.322893 / 0.540337 (-0.217444) | 0.418180 / 1.386936 (-0.968756) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005444 / 0.011353 (-0.005909) | 0.003147 / 0.011008 (-0.007862) | 0.049242 / 0.038508 (0.010734) | 0.030944 / 0.023109 (0.007834) | 0.281901 / 0.275898 (0.006003) | 0.303820 / 0.323480 (-0.019660) | 0.004326 / 0.007986 (-0.003659) | 0.002696 / 0.004328 (-0.001632) | 0.048306 / 0.004250 (0.044055) | 0.044145 / 0.037052 (0.007093) | 0.297253 / 0.258489 (0.038764) | 0.324062 / 0.293841 (0.030221) | 0.046724 / 0.128546 (-0.081823) | 0.010079 / 0.075646 (-0.065567) | 0.057635 / 0.419271 (-0.361636) | 0.033621 / 0.043533 (-0.009912) | 0.282303 / 0.255139 (0.027164) | 0.300761 / 0.283200 (0.017561) | 0.017116 / 0.141683 (-0.124567) | 1.156519 / 1.452155 (-0.295636) | 1.216087 / 1.492716 (-0.276630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093011 / 0.018006 (0.075005) | 0.301310 / 0.000490 (0.300820) | 0.000223 / 0.000200 (0.000023) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023112 / 0.037411 (-0.014299) | 0.075192 / 0.014526 (0.060666) | 0.086213 / 0.176557 (-0.090343) | 0.125853 / 0.737135 (-0.611282) | 0.087754 / 0.296338 (-0.208585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301095 / 0.215209 (0.085886) | 2.911769 / 2.077655 (0.834114) | 1.614708 / 1.504120 (0.110588) | 1.494497 / 1.541195 (-0.046698) | 1.506978 / 1.468490 (0.038488) | 0.572743 / 4.584777 (-4.012034) | 2.417142 / 3.745712 (-1.328570) | 2.755338 / 5.269862 (-2.514523) | 1.711026 / 4.565676 (-2.854650) | 0.062732 / 0.424275 (-0.361543) | 0.005031 / 0.007607 (-0.002576) | 0.352343 / 0.226044 (0.126298) | 3.465183 / 2.268929 (1.196255) | 1.958795 / 55.444624 (-53.485829) | 1.682239 / 6.876477 (-5.194238) | 1.688897 / 2.142072 (-0.453176) | 0.643311 / 4.805227 (-4.161916) | 0.115426 / 6.500664 (-6.385238) | 0.040338 / 0.075469 (-0.035131) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005322 / 1.841788 (-0.836466) | 11.779380 / 8.074308 (3.705072) | 10.041574 / 10.191392 (-0.149818) | 0.127617 / 0.680424 (-0.552807) | 0.015840 / 0.534201 (-0.518361) | 0.286905 / 0.579283 (-0.292378) | 0.275180 / 0.434364 (-0.159183) | 0.332498 / 0.540337 (-0.207840) | 0.410719 / 1.386936 (-0.976217) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#32b206d47f582380f9c64578dcfa6c48252db3b8 \"CML watermark\")\n" ]
Faster column validation and reordering
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6636/reactions" }
PR_kwDODunzps5lm4zI
{ "diff_url": "https://github.com/huggingface/datasets/pull/6636.diff", "html_url": "https://github.com/huggingface/datasets/pull/6636", "merged_at": "2024-02-06T23:03:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/6636.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6636" }
2024-01-31T19:08:28Z
https://api.github.com/repos/huggingface/datasets/issues/6636/comments
I work with bioinformatics data, and these tables often have thousands or even tens of thousands of features. These tables are also accompanied by metadata that I do not want to pass to the model. When I perform `set_format('pt', columns=large_column_list)`, it can take several minutes before it finishes. The culprit is when the following check is performed: `any(col not in self._data.column_names for col in columns)`. Replacing this with `set(columns) - set(self._data.column_names)` is more efficient.
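A minimal sketch of the set-based check described above (the column lists are illustrative; the real code operates on `self._data.column_names`):

```python
# The any(...) scan is O(len(columns) * len(column_names)); a set difference
# is roughly O(len(columns) + len(column_names)) and also names the missing
# columns for a clearer error message.
columns = [f"feature_{i}" for i in range(10_000)] + ["metadata"]  # requested
column_names = [f"feature_{i}" for i in range(10_000)]            # in the table

missing_columns = set(columns) - set(column_names)
if missing_columns:
    raise ValueError(f"Columns {sorted(missing_columns)} not in the dataset")
```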
{ "avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4", "events_url": "https://api.github.com/users/psmyth94/events{/privacy}", "followers_url": "https://api.github.com/users/psmyth94/followers", "following_url": "https://api.github.com/users/psmyth94/following{/other_user}", "gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/psmyth94", "id": 11325244, "login": "psmyth94", "node_id": "MDQ6VXNlcjExMzI1MjQ0", "organizations_url": "https://api.github.com/users/psmyth94/orgs", "received_events_url": "https://api.github.com/users/psmyth94/received_events", "repos_url": "https://api.github.com/users/psmyth94/repos", "site_admin": false, "starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions", "type": "User", "url": "https://api.github.com/users/psmyth94" }
https://api.github.com/repos/huggingface/datasets/issues/6636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6636/timeline
closed
false
6,636
null
2024-02-06T23:03:38Z
null
true
2,110,659,519
https://api.github.com/repos/huggingface/datasets/issues/6635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6635/events
[]
null
2024-02-07T16:48:55Z
[]
https://github.com/huggingface/datasets/pull/6635
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6635). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005577 / 0.011353 (-0.005776) | 0.004452 / 0.011008 (-0.006556) | 0.067849 / 0.038508 (0.029341) | 0.032328 / 0.023109 (0.009219) | 0.256924 / 0.275898 (-0.018974) | 0.273410 / 0.323480 (-0.050070) | 0.004359 / 0.007986 (-0.003626) | 0.003484 / 0.004328 (-0.000845) | 0.053880 / 0.004250 (0.049630) | 0.058142 / 0.037052 (0.021089) | 0.268863 / 0.258489 (0.010374) | 0.307977 / 0.293841 (0.014136) | 0.028840 / 0.128546 (-0.099707) | 0.011808 / 0.075646 (-0.063839) | 0.216277 / 0.419271 (-0.202995) | 0.039245 / 0.043533 (-0.004288) | 0.250420 / 0.255139 (-0.004719) | 0.273642 / 0.283200 (-0.009557) | 0.019340 / 0.141683 (-0.122342) | 1.176734 / 1.452155 (-0.275421) | 1.250643 / 1.492716 (-0.242074) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181210 / 0.018006 (0.163204) | 1.070750 / 0.000490 (1.070261) | 0.000315 / 0.000200 (0.000115) | 0.000045 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022905 / 0.037411 (-0.014507) | 0.064549 / 0.014526 (0.050023) | 0.077113 / 0.176557 (-0.099443) | 0.131976 / 0.737135 (-0.605159) | 0.081266 / 0.296338 (-0.215072) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291363 / 0.215209 (0.076154) | 2.851691 / 2.077655 (0.774036) | 1.592815 / 1.504120 (0.088695) | 1.494550 / 1.541195 (-0.046645) | 1.516464 / 1.468490 (0.047974) | 0.583244 / 4.584777 (-4.001532) | 2.504907 / 3.745712 (-1.240805) | 3.183490 / 5.269862 (-2.086371) | 1.932854 / 4.565676 (-2.632823) | 0.067564 / 0.424275 (-0.356711) | 0.006587 / 0.007607 (-0.001020) | 0.346368 / 0.226044 (0.120324) | 3.428256 / 2.268929 (1.159327) | 1.994176 / 55.444624 (-53.450448) | 1.688116 / 6.876477 (-5.188360) | 1.767653 / 2.142072 (-0.374420) | 0.673867 / 4.805227 (-4.131360) | 0.125582 / 6.500664 (-6.375082) | 0.047198 / 0.075469 (-0.028271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.002895 / 1.841788 (-0.838893) | 16.332893 / 8.074308 (8.258585) | 10.781993 / 10.191392 (0.590601) | 0.153919 / 0.680424 (-0.526505) | 0.015528 / 0.534201 (-0.518673) | 0.306182 / 0.579283 (-0.273101) | 0.296380 / 0.434364 (-0.137984) | 0.341432 / 0.540337 (-0.198905) | 0.455900 / 1.386936 (-0.931036) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006442 / 0.011353 (-0.004911) | 0.004433 / 0.011008 (-0.006576) | 0.053327 / 0.038508 (0.014819) | 0.035966 / 0.023109 (0.012856) | 0.280913 / 0.275898 (0.005015) | 0.308419 / 0.323480 (-0.015061) | 0.005842 / 0.007986 (-0.002144) | 0.003789 / 0.004328 (-0.000539) | 0.053983 / 0.004250 (0.049732) | 0.069052 / 0.037052 (0.032000) | 0.299225 / 0.258489 (0.040736) | 0.336470 / 0.293841 (0.042629) | 0.068170 / 0.128546 (-0.060377) | 0.012259 / 0.075646 (-0.063388) | 0.064166 / 0.419271 (-0.355106) | 0.037291 / 0.043533 (-0.006241) | 0.281318 / 0.255139 (0.026179) | 0.297093 / 0.283200 (0.013893) | 0.021358 / 0.141683 (-0.120324) | 1.189584 / 1.452155 (-0.262571) | 1.256985 / 1.492716 (-0.235731) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new 
/ old (diff) | 0.216726 / 0.018006 (0.198720) | 2.496957 / 0.000490 (2.496467) | 0.000336 / 0.000200 (0.000136) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026604 / 0.037411 (-0.010807) | 0.080398 / 0.014526 (0.065873) | 0.094475 / 0.176557 (-0.082082) | 0.136263 / 0.737135 (-0.600873) | 0.097898 / 0.296338 (-0.198440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295171 / 0.215209 (0.079962) | 2.947530 / 2.077655 (0.869875) | 1.607531 / 1.504120 (0.103411) | 1.485045 / 1.541195 (-0.056150) | 1.524899 / 1.468490 (0.056409) | 0.572934 / 4.584777 (-4.011843) | 2.544320 / 3.745712 (-1.201393) | 3.292630 / 5.269862 (-1.977232) | 1.927138 / 4.565676 (-2.638539) | 0.068560 / 0.424275 (-0.355715) | 0.005982 / 0.007607 (-0.001625) | 0.345833 / 0.226044 (0.119789) | 3.424253 / 2.268929 (1.155324) | 2.195017 / 55.444624 (-53.249608) | 1.712037 / 6.876477 (-5.164440) | 1.763899 / 2.142072 (-0.378174) | 0.653776 / 4.805227 (-4.151451) | 0.123056 / 6.500664 (-6.377609) | 0.044572 / 0.075469 (-0.030897) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.033400 / 1.841788 (-0.808388) | 15.409887 / 8.074308 (7.335579) | 11.220990 / 10.191392 (1.029597) | 0.153603 / 0.680424 (-0.526821) | 0.016866 / 0.534201 (-0.517335) | 0.311945 / 0.579283 (-0.267338) | 0.307048 / 0.434364 (-0.127316) | 0.350422 / 0.540337 (-0.189915) | 0.447308 / 1.386936 (-0.939628) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#14d9afbb7ae1b787c450261ca0ff374551993031 \"CML watermark\")\n" ]
Fix missing info when loading some datasets from Parquet export
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6635/reactions" }
PR_kwDODunzps5lmeNO
{ "diff_url": "https://github.com/huggingface/datasets/pull/6635.diff", "html_url": "https://github.com/huggingface/datasets/pull/6635", "merged_at": "2024-02-07T16:41:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/6635.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6635" }
2024-01-31T17:55:21Z
https://api.github.com/repos/huggingface/datasets/issues/6635/comments
Fix getting the info for script-based datasets with Parquet export with a single config not named "default". E.g. ```python from datasets import load_dataset_builder b = load_dataset_builder("bookcorpus") print(b.info.features) # should print {'text': Value(dtype='string', id=None)} ``` I fixed this by setting the default config name when there is only one config.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6635/timeline
closed
false
6,635
null
2024-02-07T16:41:04Z
null
true
2,110,242,376
https://api.github.com/repos/huggingface/datasets/issues/6634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6634/events
[]
null
2024-02-05T10:32:49Z
[]
https://github.com/huggingface/datasets/pull/6634
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@huggingface/datasets, feel free to review this PR so that it can be included in the next release.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005125 / 0.011353 (-0.006228) | 0.003772 / 0.011008 (-0.007236) | 0.063258 / 0.038508 (0.024750) | 0.029479 / 0.023109 (0.006370) | 0.245554 / 0.275898 (-0.030344) | 0.266395 / 0.323480 (-0.057085) | 0.003063 / 0.007986 (-0.004922) | 0.003298 / 0.004328 (-0.001031) | 0.049242 / 0.004250 (0.044991) | 0.042390 / 0.037052 (0.005338) | 0.258176 / 0.258489 (-0.000313) | 0.279935 / 0.293841 (-0.013906) | 0.027910 / 0.128546 (-0.100637) | 0.011033 / 0.075646 (-0.064613) | 0.207763 / 0.419271 (-0.211509) | 0.036127 / 0.043533 (-0.007405) | 0.247363 / 0.255139 (-0.007776) | 0.261309 / 0.283200 (-0.021890) | 0.020259 / 0.141683 (-0.121424) | 1.152760 / 1.452155 (-0.299395) | 1.194853 / 1.492716 (-0.297863) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088936 / 0.018006 (0.070930) | 0.298396 / 0.000490 (0.297906) | 0.000211 / 0.000200 (0.000011) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018434 / 0.037411 (-0.018977) | 0.061991 / 0.014526 (0.047466) | 0.072786 / 0.176557 (-0.103771) | 0.120437 / 0.737135 (-0.616698) | 0.078375 / 0.296338 (-0.217964) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled 
read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275821 / 0.215209 (0.060612) | 2.703358 / 2.077655 (0.625703) | 1.446783 / 1.504120 (-0.057337) | 1.333556 / 1.541195 (-0.207639) | 1.325753 / 1.468490 (-0.142737) | 0.565196 / 4.584777 (-4.019581) | 2.411193 / 3.745712 (-1.334520) | 2.702764 / 5.269862 (-2.567098) | 1.727425 / 4.565676 (-2.838252) | 0.062966 / 0.424275 (-0.361309) | 0.004985 / 0.007607 (-0.002622) | 0.333473 / 0.226044 (0.107428) | 3.270615 / 2.268929 (1.001687) | 1.822213 / 55.444624 (-53.622411) | 1.546572 / 6.876477 (-5.329905) | 1.568767 / 2.142072 (-0.573305) | 0.655907 / 4.805227 (-4.149321) | 0.117173 / 6.500664 (-6.383491) | 0.042415 / 0.075469 (-0.033054) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.987966 / 1.841788 (-0.853822) | 11.851206 / 8.074308 (3.776898) | 10.327751 / 10.191392 (0.136359) | 0.127929 / 0.680424 (-0.552494) | 0.013781 / 0.534201 (-0.520420) | 0.286910 / 0.579283 (-0.292373) | 0.273615 / 0.434364 (-0.160749) | 0.323373 / 0.540337 (-0.216965) | 0.426407 / 1.386936 (-0.960529) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005412 / 0.011353 (-0.005941) | 0.003619 / 0.011008 (-0.007389) | 0.049603 / 0.038508 (0.011095) | 0.031246 / 0.023109 (0.008136) | 0.279723 / 0.275898 (0.003825) | 0.298557 / 0.323480 (-0.024923) | 0.004253 / 0.007986 (-0.003733) | 0.002758 / 0.004328 (-0.001570) | 0.048931 / 0.004250 (0.044680) | 0.044245 / 0.037052 (0.007193) | 0.295876 / 0.258489 (0.037387) | 0.322720 / 0.293841 (0.028879) | 0.046746 / 0.128546 (-0.081800) | 0.010841 / 0.075646 (-0.064805) | 0.058528 / 0.419271 (-0.360744) | 0.034224 / 0.043533 (-0.009308) | 0.279192 / 0.255139 (0.024053) | 0.299775 / 0.283200 (0.016576) | 0.017862 / 0.141683 (-0.123820) | 1.154478 / 1.452155 (-0.297677) | 1.190483 / 1.492716 (-0.302234) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088717 / 0.018006 (0.070710) | 0.297905 / 0.000490 (0.297415) | 0.000209 / 0.000200 (0.000009) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021458 / 0.037411 (-0.015953) | 0.075616 / 0.014526 (0.061090) | 0.087080 / 0.176557 (-0.089476) | 0.125315 / 0.737135 (-0.611821) | 0.088958 / 0.296338 (-0.207381) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287085 / 0.215209 (0.071876) | 2.807798 / 2.077655 (0.730143) | 1.552201 / 1.504120 (0.048081) | 1.422374 / 1.541195 (-0.118820) | 1.437908 / 1.468490 (-0.030582) | 0.569738 / 4.584777 (-4.015039) | 2.493921 / 3.745712 (-1.251791) | 2.648376 / 5.269862 (-2.621486) | 1.741721 / 4.565676 (-2.823955) | 0.063023 / 0.424275 (-0.361253) | 0.005166 / 0.007607 (-0.002441) | 0.336927 / 0.226044 (0.110882) | 3.384517 / 2.268929 (1.115588) | 1.909888 / 55.444624 (-53.534736) | 1.641879 / 6.876477 (-5.234597) | 1.727734 / 2.142072 (-0.414338) | 0.647127 / 4.805227 (-4.158100) | 0.115831 / 6.500664 (-6.384833) | 0.041161 / 0.075469 (-0.034309) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.016310 / 1.841788 (-0.825477) | 12.088500 / 8.074308 (4.014192) | 10.799730 / 10.191392 (0.608338) | 0.129049 / 0.680424 (-0.551375) | 0.015379 / 0.534201 (-0.518822) | 0.291352 / 0.579283 (-0.287931) | 0.284579 / 0.434364 (-0.149785) | 0.331214 / 0.540337 (-0.209124) | 0.422902 / 1.386936 (-0.964034) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#991169ed4901d129d0e0ab8d7fccd6a0728da4b8 \"CML watermark\")\n" ]
Support data_dir parameter in push_to_hub
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6634/reactions" }
PR_kwDODunzps5llB9a
{ "diff_url": "https://github.com/huggingface/datasets/pull/6634.diff", "html_url": "https://github.com/huggingface/datasets/pull/6634", "merged_at": "2024-02-05T10:26:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/6634.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6634" }
2024-01-31T14:37:36Z
https://api.github.com/repos/huggingface/datasets/issues/6634/comments
Support `data_dir` parameter in `push_to_hub`. This allows users to organize the data files according to their specific needs. For example, "wikimedia/wikipedia" files could be organized by year and/or date, e.g. "2024/20240101/20240101.en".
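A short usage sketch of the new parameter (the repo id below is a placeholder, and this assumes a `datasets` version that includes this PR):

```python
from datasets import load_dataset

# Push a split under a custom directory inside the repo, organized by
# year and date as in the example above.
ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
ds.push_to_hub(
    "username/my-wikipedia-copy",  # placeholder repo id
    data_dir="2024/20240101/20240101.en",
)
```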
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6634/timeline
closed
false
6,634
null
2024-02-05T10:26:40Z
null
true
2,110,124,475
https://api.github.com/repos/huggingface/datasets/issues/6633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6633/events
[]
null
2024-01-31T14:05:04Z
[]
https://github.com/huggingface/datasets/pull/6633
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6633). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005172 / 0.011353 (-0.006181) | 0.003694 / 0.011008 (-0.007314) | 0.063098 / 0.038508 (0.024590) | 0.028161 / 0.023109 (0.005052) | 0.262288 / 0.275898 (-0.013610) | 0.281867 / 0.323480 (-0.041613) | 0.004088 / 0.007986 (-0.003898) | 0.002745 / 0.004328 (-0.001583) | 0.049071 / 0.004250 (0.044820) | 0.040629 / 0.037052 (0.003577) | 0.282766 / 0.258489 (0.024277) | 0.297998 / 0.293841 (0.004157) | 0.028057 / 0.128546 (-0.100489) | 0.010878 / 0.075646 (-0.064768) | 0.207410 / 0.419271 (-0.211861) | 0.035600 / 0.043533 (-0.007933) | 0.260157 / 0.255139 (0.005018) | 0.273252 / 0.283200 (-0.009948) | 0.017403 / 0.141683 (-0.124280) | 1.150798 / 1.452155 (-0.301356) | 1.200485 / 1.492716 (-0.292231) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093783 / 0.018006 (0.075777) | 0.302112 / 0.000490 (0.301622) | 0.000225 / 0.000200 (0.000025) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018254 / 0.037411 (-0.019158) | 0.061083 / 0.014526 (0.046557) | 0.074899 / 0.176557 (-0.101657) | 0.119616 / 0.737135 (-0.617520) | 0.075269 / 0.296338 (-0.221069) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275878 / 0.215209 (0.060669) | 2.694778 / 2.077655 (0.617123) | 1.423810 / 1.504120 (-0.080310) | 1.309444 / 1.541195 (-0.231750) | 1.327898 / 1.468490 (-0.140592) | 0.568621 / 4.584777 (-4.016155) | 2.345849 / 3.745712 (-1.399863) | 2.901281 / 5.269862 (-2.368580) | 1.777959 / 4.565676 (-2.787717) | 0.063539 / 0.424275 (-0.360736) | 0.005011 / 0.007607 (-0.002596) | 0.331212 / 0.226044 (0.105168) | 3.200379 / 2.268929 (0.931451) | 1.780766 / 55.444624 (-53.663859) | 1.517178 / 6.876477 (-5.359299) | 1.587307 / 2.142072 (-0.554765) | 0.651939 / 4.805227 (-4.153288) | 0.116646 / 6.500664 (-6.384018) | 0.043325 / 0.075469 (-0.032144) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996894 / 1.841788 (-0.844894) | 11.495397 / 8.074308 (3.421089) | 10.255784 / 10.191392 (0.064392) | 0.129006 / 0.680424 (-0.551418) | 0.013967 / 0.534201 (-0.520234) | 0.284847 / 0.579283 (-0.294436) | 0.265610 / 0.434364 (-0.168754) | 0.320176 / 0.540337 (-0.220162) | 0.429526 / 1.386936 (-0.957410) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005582 / 0.011353 (-0.005771) | 0.003867 / 0.011008 (-0.007142) | 0.050416 / 0.038508 (0.011908) | 0.030996 / 0.023109 (0.007887) | 0.275987 / 0.275898 (0.000089) | 0.289487 / 0.323480 (-0.033993) | 0.005149 / 0.007986 (-0.002837) | 0.002806 / 0.004328 (-0.001522) | 0.049617 / 0.004250 (0.045366) | 0.046949 / 0.037052 (0.009897) | 0.281596 / 0.258489 (0.023107) | 0.330948 / 0.293841 (0.037108) | 0.049645 / 0.128546 (-0.078901) | 0.010953 / 0.075646 (-0.064693) | 0.058546 / 0.419271 (-0.360725) | 0.034010 / 0.043533 (-0.009523) | 0.270525 / 0.255139 (0.015386) | 0.289749 / 0.283200 (0.006550) | 0.018755 / 0.141683 (-0.122927) | 1.163072 / 1.452155 (-0.289082) | 1.213400 / 1.492716 (-0.279316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092397 / 0.018006 (0.074390) | 0.299376 / 0.000490 (0.298886) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022496 / 0.037411 (-0.014916) | 0.076886 / 0.014526 (0.062361) | 0.087186 / 0.176557 (-0.089371) | 0.126092 / 0.737135 (-0.611044) | 0.088832 / 0.296338 (-0.207507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288885 / 0.215209 (0.073676) | 2.839851 / 2.077655 (0.762196) | 1.587556 / 1.504120 (0.083436) | 1.470249 / 1.541195 (-0.070945) | 1.518080 / 1.468490 (0.049590) | 0.569646 / 4.584777 (-4.015131) | 2.417574 / 3.745712 (-1.328138) | 2.737368 / 5.269862 (-2.532494) | 1.784419 / 4.565676 (-2.781257) | 0.064104 / 0.424275 (-0.360171) | 0.005138 / 0.007607 (-0.002469) | 0.346214 / 0.226044 (0.120169) | 3.439541 / 2.268929 (1.170612) | 1.944792 / 55.444624 (-53.499832) | 1.675762 / 6.876477 (-5.200714) | 1.851871 / 2.142072 (-0.290201) | 0.652932 / 4.805227 (-4.152295) | 0.118953 / 6.500664 (-6.381711) | 0.041011 / 0.075469 (-0.034459) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017690 / 1.841788 (-0.824098) | 12.610531 / 8.074308 (4.536223) | 11.223165 / 10.191392 (1.031773) | 0.131637 / 0.680424 (-0.548786) | 0.016733 / 0.534201 (-0.517468) | 0.288491 / 0.579283 (-0.290792) | 0.275899 / 0.434364 (-0.158465) | 0.331837 / 0.540337 (-0.208500) | 0.421695 / 1.386936 (-0.965241) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5d9dfa9a8c077c783729a279623926faa9e2f3f1 \"CML watermark\")\n" ]
dataset viewer requires no-script
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6633/reactions" }
PR_kwDODunzps5lknz9
{ "diff_url": "https://github.com/huggingface/datasets/pull/6633.diff", "html_url": "https://github.com/huggingface/datasets/pull/6633", "merged_at": "2024-01-31T13:59:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/6633.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6633" }
2024-01-31T13:41:54Z
https://api.github.com/repos/huggingface/datasets/issues/6633/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
https://api.github.com/repos/huggingface/datasets/issues/6633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6633/timeline
closed
false
6,633
null
2024-01-31T13:59:01Z
null
true
2,108,541,678
https://api.github.com/repos/huggingface/datasets/issues/6632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6632/events
[]
null
2024-02-06T17:27:35Z
[]
https://github.com/huggingface/datasets/pull/6632
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6632). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004913 / 0.011353 (-0.006440) | 0.003595 / 0.011008 (-0.007413) | 0.068385 / 0.038508 (0.029876) | 0.028612 / 0.023109 (0.005503) | 0.236590 / 0.275898 (-0.039308) | 0.261890 / 0.323480 (-0.061590) | 0.003027 / 0.007986 (-0.004958) | 0.002674 / 0.004328 (-0.001654) | 0.049255 / 0.004250 (0.045004) | 0.040500 / 0.037052 (0.003447) | 0.248759 / 0.258489 (-0.009730) | 0.280299 / 0.293841 (-0.013542) | 0.027300 / 0.128546 (-0.101247) | 0.010475 / 0.075646 (-0.065171) | 0.208744 / 0.419271 (-0.210527) | 0.035214 / 0.043533 (-0.008319) | 0.251922 / 0.255139 (-0.003217) | 0.263582 / 0.283200 (-0.019618) | 0.018738 / 0.141683 (-0.122945) | 1.150940 / 1.452155 (-0.301215) | 1.187240 / 1.492716 (-0.305476) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093505 / 0.018006 (0.075499) | 0.301101 / 0.000490 (0.300611) | 0.000232 / 0.000200 (0.000032) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017779 / 0.037411 (-0.019632) | 0.061412 / 0.014526 (0.046886) | 0.074353 / 0.176557 (-0.102203) | 0.118717 / 0.737135 (-0.618418) | 0.074214 / 0.296338 (-0.222125) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281722 / 0.215209 (0.066513) | 2.716867 / 2.077655 (0.639212) | 1.423379 / 1.504120 (-0.080741) | 1.315379 / 1.541195 (-0.225816) | 1.294638 / 1.468490 (-0.173852) | 0.549658 / 4.584777 (-4.035119) | 2.349889 / 3.745712 (-1.395823) | 2.722354 / 5.269862 (-2.547507) | 1.700271 / 4.565676 (-2.865406) | 0.061099 / 0.424275 (-0.363176) | 0.004931 / 0.007607 (-0.002677) | 0.339181 / 0.226044 (0.113136) | 3.242467 / 2.268929 (0.973538) | 1.777929 / 55.444624 (-53.666696) | 1.498380 / 6.876477 (-5.378097) | 1.511482 / 2.142072 (-0.630590) | 0.627076 / 4.805227 (-4.178151) | 0.115936 / 6.500664 (-6.384729) | 0.041791 / 0.075469 (-0.033678) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983132 / 1.841788 (-0.858656) | 11.431810 / 8.074308 (3.357502) | 10.298918 / 10.191392 (0.107526) | 0.139754 / 0.680424 (-0.540670) | 0.013984 / 0.534201 (-0.520217) | 0.283627 / 0.579283 (-0.295656) | 0.264970 / 0.434364 (-0.169393) | 0.323896 / 0.540337 (-0.216441) | 0.420132 / 1.386936 (-0.966804) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005323 / 0.011353 (-0.006030) | 0.003725 / 0.011008 (-0.007283) | 0.050191 / 0.038508 (0.011683) | 0.032196 / 0.023109 (0.009087) | 0.265037 / 0.275898 (-0.010861) | 0.289573 / 0.323480 (-0.033907) | 0.004345 / 0.007986 (-0.003640) | 0.002794 / 0.004328 (-0.001534) | 0.048955 / 0.004250 (0.044705) | 0.045421 / 0.037052 (0.008369) | 0.279792 / 0.258489 (0.021303) | 0.307374 / 0.293841 (0.013533) | 0.046997 / 0.128546 (-0.081549) | 0.010531 / 0.075646 (-0.065115) | 0.058921 / 0.419271 (-0.360351) | 0.033620 / 0.043533 (-0.009912) | 0.268138 / 0.255139 (0.012999) | 0.285941 / 0.283200 (0.002742) | 0.018396 / 0.141683 (-0.123287) | 1.151089 / 1.452155 (-0.301066) | 1.209351 / 1.492716 (-0.283365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092258 / 0.018006 (0.074252) | 0.300893 / 0.000490 (0.300403) | 0.000212 / 0.000200 (0.000013) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022233 / 0.037411 (-0.015178) | 0.075220 / 0.014526 (0.060694) | 0.085901 / 0.176557 (-0.090656) | 0.125080 / 0.737135 (-0.612056) | 0.086978 / 0.296338 (-0.209361) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292877 / 0.215209 (0.077667) | 2.841005 / 2.077655 (0.763350) | 1.555168 / 1.504120 (0.051048) | 1.420801 / 1.541195 (-0.120394) | 1.431475 / 1.468490 (-0.037015) | 0.569803 / 4.584777 (-4.014974) | 2.451731 / 3.745712 (-1.293981) | 2.662825 / 5.269862 (-2.607036) | 1.732260 / 4.565676 (-2.833416) | 0.063030 / 0.424275 (-0.361245) | 0.004971 / 0.007607 (-0.002637) | 0.345250 / 0.226044 (0.119206) | 3.390909 / 2.268929 (1.121980) | 1.908666 / 55.444624 (-53.535959) | 1.628976 / 6.876477 (-5.247501) | 1.719270 / 2.142072 (-0.422803) | 0.653712 / 4.805227 (-4.151515) | 0.116423 / 6.500664 (-6.384241) | 0.040835 / 0.075469 (-0.034634) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005538 / 1.841788 (-0.836250) | 12.105381 / 8.074308 (4.031073) | 10.656295 / 10.191392 (0.464903) | 0.131850 / 0.680424 (-0.548574) | 0.016297 / 0.534201 (-0.517904) | 0.285566 / 0.579283 (-0.293717) | 0.276086 / 0.434364 (-0.158278) | 0.326663 / 0.540337 (-0.213675) | 0.410639 / 1.386936 (-0.976297) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1dc3f04586ee65c890b74649afc42316121af689 \"CML watermark\")\n" ]
Fix reload cache with data dir
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6632/reactions" }
PR_kwDODunzps5lfPuk
{ "diff_url": "https://github.com/huggingface/datasets/pull/6632.diff", "html_url": "https://github.com/huggingface/datasets/pull/6632", "merged_at": "2024-02-06T17:21:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/6632.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6632" }
2024-01-30T18:52:23Z
https://api.github.com/repos/huggingface/datasets/issues/6632/comments
The cache used to only check for the latest cache directory with a given config_name, but the directory name it looked for was wrong (e.g. `default-data_dir=data%2Ffortran-data_dir=data%2Ffortran` instead of `default-data_dir=data%2Ffortran`).

I fixed this by not passing the `config_kwargs` to the parent Builder `__init__`, and passing the `config_id` forged from the `config_kwargs` directly.

Closes https://github.com/huggingface/datasets/issues/6609
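For illustration, a rough sketch of how a `config_id` could be forged from the `config_kwargs` — the real `BuilderConfig` logic is more involved, so treat this helper as an assumption:

```python
from urllib.parse import quote

def forge_config_id(config_name: str, config_kwargs: dict) -> str:
    # Append url-encoded kwargs to the config name; applying this twice is
    # what produced the duplicated suffix shown above.
    suffix = "-".join(
        f"{key}={quote(str(value), safe='')}"
        for key, value in sorted(config_kwargs.items())
    )
    return f"{config_name}-{suffix}" if suffix else config_name

print(forge_config_id("default", {"data_dir": "data/fortran"}))
# default-data_dir=data%2Ffortran
```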
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6632/timeline
closed
false
6,632
null
2024-02-06T17:21:24Z
null
true
2,107,802,473
https://api.github.com/repos/huggingface/datasets/issues/6631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6631/events
[]
null
2024-01-30T15:34:49Z
[]
https://github.com/huggingface/datasets/pull/6631
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6631). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005076 / 0.011353 (-0.006277) | 0.003665 / 0.011008 (-0.007343) | 0.063602 / 0.038508 (0.025094) | 0.029103 / 0.023109 (0.005993) | 0.233133 / 0.275898 (-0.042765) | 0.257000 / 0.323480 (-0.066480) | 0.003059 / 0.007986 (-0.004926) | 0.004007 / 0.004328 (-0.000321) | 0.049804 / 0.004250 (0.045553) | 0.039946 / 0.037052 (0.002893) | 0.248003 / 0.258489 (-0.010486) | 0.272729 / 0.293841 (-0.021112) | 0.027542 / 0.128546 (-0.101004) | 0.010745 / 0.075646 (-0.064901) | 0.207686 / 0.419271 (-0.211586) | 0.035438 / 0.043533 (-0.008095) | 0.236864 / 0.255139 (-0.018275) | 0.258610 / 0.283200 (-0.024590) | 0.017225 / 0.141683 (-0.124458) | 1.130894 / 1.452155 (-0.321261) | 1.171266 / 1.492716 (-0.321450) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092532 / 0.018006 (0.074525) | 0.301650 / 0.000490 (0.301161) | 0.000216 / 0.000200 (0.000016) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018175 / 0.037411 (-0.019237) | 0.061538 / 0.014526 (0.047012) | 0.073673 / 0.176557 (-0.102884) | 0.120676 / 0.737135 (-0.616460) | 0.074753 / 0.296338 (-0.221586) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283625 / 0.215209 (0.068416) | 2.794903 / 2.077655 (0.717248) | 1.485149 / 1.504120 (-0.018970) | 1.361154 / 1.541195 (-0.180041) | 1.371436 / 1.468490 (-0.097054) | 0.580401 / 4.584777 (-4.004376) | 2.457068 / 3.745712 (-1.288644) | 2.760878 / 5.269862 (-2.508984) | 1.725507 / 4.565676 (-2.840169) | 0.063632 / 0.424275 (-0.360644) | 0.005036 / 0.007607 (-0.002572) | 0.337167 / 0.226044 (0.111122) | 3.314508 / 2.268929 (1.045579) | 1.863412 / 55.444624 (-53.581213) | 1.621966 / 6.876477 (-5.254511) | 1.600422 / 2.142072 (-0.541651) | 0.647753 / 4.805227 (-4.157475) | 0.117169 / 6.500664 (-6.383495) | 0.042338 / 0.075469 (-0.033131) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.981818 / 1.841788 (-0.859969) | 12.044657 / 8.074308 (3.970349) | 10.654091 / 10.191392 (0.462699) | 0.130693 / 0.680424 (-0.549731) | 0.014733 / 0.534201 (-0.519468) | 0.317432 / 0.579283 (-0.261851) | 0.267196 / 0.434364 (-0.167168) | 0.329310 / 0.540337 (-0.211028) | 0.433379 / 1.386936 (-0.953557) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005502 / 0.011353 (-0.005851) | 0.003951 / 0.011008 (-0.007057) | 0.050651 / 0.038508 (0.012143) | 0.031802 / 0.023109 (0.008693) | 0.281384 / 0.275898 (0.005485) | 0.303900 / 0.323480 (-0.019580) | 0.004451 / 0.007986 (-0.003534) | 0.002801 / 0.004328 (-0.001527) | 0.048688 / 0.004250 (0.044438) | 0.044717 / 0.037052 (0.007664) | 0.295017 / 0.258489 (0.036528) | 0.328003 / 0.293841 (0.034162) | 0.048421 / 0.128546 (-0.080125) | 0.011254 / 0.075646 (-0.064392) | 0.058223 / 0.419271 (-0.361048) | 0.033915 / 0.043533 (-0.009618) | 0.279893 / 0.255139 (0.024754) | 0.297605 / 0.283200 (0.014405) | 0.017115 / 0.141683 (-0.124568) | 1.146966 / 1.452155 (-0.305189) | 1.191650 / 1.492716 (-0.301066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092524 / 0.018006 (0.074518) | 0.309332 / 0.000490 (0.308842) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022265 / 0.037411 (-0.015146) | 0.075732 / 0.014526 (0.061206) | 0.087340 / 0.176557 (-0.089217) | 0.126079 / 0.737135 (-0.611056) | 0.090349 / 0.296338 (-0.205990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288882 / 0.215209 (0.073673) | 2.833046 / 2.077655 (0.755392) | 1.602905 / 1.504120 (0.098785) | 1.473110 / 1.541195 (-0.068085) | 1.491300 / 1.468490 (0.022810) | 0.557799 / 4.584777 (-4.026978) | 2.439526 / 3.745712 (-1.306186) | 2.669336 / 5.269862 (-2.600526) | 1.719472 / 4.565676 (-2.846204) | 0.062456 / 0.424275 (-0.361819) | 0.005058 / 0.007607 (-0.002549) | 0.343706 / 0.226044 (0.117662) | 3.422397 / 2.268929 (1.153469) | 1.983679 / 55.444624 (-53.460946) | 1.673784 / 6.876477 (-5.202693) | 1.785144 / 2.142072 (-0.356928) | 0.643127 / 4.805227 (-4.162100) | 0.115254 / 6.500664 (-6.385410) | 0.041235 / 0.075469 (-0.034235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005448 / 1.841788 (-0.836340) | 12.240100 / 8.074308 (4.165792) | 11.051965 / 10.191392 (0.860573) | 0.130438 / 0.680424 (-0.549986) | 0.015918 / 0.534201 (-0.518283) | 0.287468 / 0.579283 (-0.291815) | 0.287699 / 0.434364 (-0.146665) | 0.324561 / 0.540337 (-0.215777) | 0.418820 / 1.386936 (-0.968116) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#237a2a688155e23cfbcdfadd2d491ce1667fa494 \"CML watermark\")\n" ]
Fix filelock: use current umask for filelock >= 3.10
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6631/reactions" }
PR_kwDODunzps5lcu9A
{ "diff_url": "https://github.com/huggingface/datasets/pull/6631.diff", "html_url": "https://github.com/huggingface/datasets/pull/6631", "merged_at": "2024-01-30T15:28:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/6631.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6631" }
2024-01-30T12:56:01Z
https://api.github.com/repos/huggingface/datasets/issues/6631/comments
Reported in https://github.com/huggingface/evaluate/issues/542

cc @stas00 @williamberrios

Closes https://github.com/huggingface/datasets/issues/6589
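A hedged illustration of the behavior the title refers to — the exact fix in `datasets` may differ, but recent `filelock` versions accept a `mode` argument that can be combined with the process umask like this:

```python
import os
from filelock import FileLock

# os.umask() sets and returns the previous mask, so read it and restore it.
umask = os.umask(0)
os.umask(umask)

# Create the lock file with permissions that respect the user's umask
# instead of a hard-coded mode.
lock = FileLock("cache.lock", mode=0o666 & ~umask)
with lock:
    pass  # do work while holding the lock
```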
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6631/timeline
closed
false
6,631
null
2024-01-30T15:28:37Z
null
true
2,106,478,275
https://api.github.com/repos/huggingface/datasets/issues/6630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6630/events
[]
null
2024-01-30T16:19:45Z
[]
https://github.com/huggingface/datasets/pull/6630
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6630). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hmm these errors look pretty weird... can they be retried?", "Hi, thanks for working on this! To fix the errors, you also need to update [this file](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/_dill.py) (by adding `version.parse(\"0.3.8\").release` to the lists)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005068 / 0.011353 (-0.006285) | 0.003657 / 0.011008 (-0.007351) | 0.062914 / 0.038508 (0.024406) | 0.027965 / 0.023109 (0.004855) | 0.241804 / 0.275898 (-0.034094) | 0.268069 / 0.323480 (-0.055411) | 0.004066 / 0.007986 (-0.003920) | 0.002704 / 0.004328 (-0.001624) | 0.048745 / 0.004250 (0.044495) | 0.042158 / 0.037052 (0.005106) | 0.257670 / 0.258489 (-0.000819) | 0.279419 / 0.293841 (-0.014422) | 0.027193 / 0.128546 (-0.101353) | 0.010379 / 0.075646 (-0.065267) | 0.207009 / 0.419271 (-0.212262) | 0.035494 / 0.043533 (-0.008039) | 0.246025 / 0.255139 (-0.009114) | 0.265906 / 0.283200 (-0.017294) | 0.017335 / 0.141683 (-0.124348) | 1.134052 / 1.452155 (-0.318103) | 1.184668 / 1.492716 (-0.308049) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093137 / 0.018006 (0.075130) | 0.302279 / 0.000490 (0.301789) | 0.000210 / 0.000200 (0.000010) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018190 / 0.037411 (-0.019221) | 0.061436 / 0.014526 (0.046910) | 0.073102 / 0.176557 (-0.103454) | 0.119782 / 0.737135 (-0.617354) | 0.074292 / 0.296338 (-0.222046) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | 
shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285905 / 0.215209 (0.070696) | 2.809051 / 2.077655 (0.731397) | 1.470305 / 1.504120 (-0.033814) | 1.350457 / 1.541195 (-0.190738) | 1.349111 / 1.468490 (-0.119379) | 0.568277 / 4.584777 (-4.016500) | 2.353046 / 3.745712 (-1.392666) | 2.805862 / 5.269862 (-2.463999) | 1.750275 / 4.565676 (-2.815401) | 0.062370 / 0.424275 (-0.361905) | 0.004954 / 0.007607 (-0.002653) | 0.335609 / 0.226044 (0.109564) | 3.367200 / 2.268929 (1.098271) | 1.829431 / 55.444624 (-53.615193) | 1.545093 / 6.876477 (-5.331384) | 1.571107 / 2.142072 (-0.570966) | 0.640279 / 4.805227 (-4.164949) | 0.116209 / 6.500664 (-6.384455) | 0.042308 / 0.075469 (-0.033161) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982972 / 1.841788 (-0.858816) | 11.424370 / 8.074308 (3.350062) | 10.427111 / 10.191392 (0.235719) | 0.129477 / 0.680424 (-0.550946) | 0.014166 / 0.534201 (-0.520035) | 0.287597 / 0.579283 (-0.291686) | 0.265588 / 0.434364 (-0.168776) | 0.324007 / 0.540337 (-0.216330) | 0.430766 / 1.386936 (-0.956170) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005347 / 0.011353 (-0.006005) | 0.003733 / 0.011008 (-0.007275) | 0.049520 / 0.038508 (0.011011) | 0.031177 / 0.023109 (0.008068) | 0.281854 / 0.275898 (0.005956) | 0.300937 / 0.323480 (-0.022543) | 0.004385 / 0.007986 (-0.003601) | 0.002841 / 0.004328 (-0.001488) | 0.048661 / 0.004250 (0.044411) | 0.044258 / 0.037052 (0.007205) | 0.295651 / 0.258489 (0.037162) | 0.322872 / 0.293841 (0.029031) | 0.048924 / 0.128546 (-0.079622) | 0.010742 / 0.075646 (-0.064905) | 0.059327 / 0.419271 (-0.359944) | 0.033938 / 0.043533 (-0.009595) | 0.282235 / 0.255139 (0.027096) | 0.297432 / 0.283200 (0.014233) | 0.018295 / 0.141683 
(-0.123388) | 1.164459 / 1.452155 (-0.287696) | 1.214511 / 1.492716 (-0.278205) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091441 / 0.018006 (0.073435) | 0.303023 / 0.000490 (0.302533) | 0.000211 / 0.000200 (0.000011) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022024 / 0.037411 (-0.015388) | 0.075570 / 0.014526 (0.061044) | 0.086761 / 0.176557 (-0.089796) | 0.126437 / 0.737135 (-0.610698) | 0.088354 / 0.296338 (-0.207984) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.289360 / 0.215209 (0.074151) | 2.816433 / 2.077655 (0.738779) | 1.561442 / 1.504120 (0.057322) | 1.438168 / 1.541195 (-0.103027) | 1.453398 / 1.468490 (-0.015092) | 0.579474 / 4.584777 (-4.005303) | 2.458640 / 3.745712 (-1.287072) | 2.638572 / 5.269862 (-2.631290) | 1.725218 / 4.565676 (-2.840458) | 0.063550 / 0.424275 (-0.360725) | 0.005220 / 0.007607 (-0.002387) | 0.338883 / 0.226044 (0.112838) | 3.353585 / 2.268929 (1.084656) | 1.913186 / 55.444624 (-53.531438) | 1.667445 / 6.876477 (-5.209032) | 1.740085 / 2.142072 (-0.401987) | 0.646369 / 4.805227 (-4.158859) | 0.116737 / 6.500664 (-6.383927) | 0.041052 / 0.075469 (-0.034417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.023180 / 1.841788 (-0.818608) | 12.078398 / 8.074308 (4.004090) | 10.952012 / 10.191392 (0.760620) | 0.131335 / 0.680424 (-0.549089) | 0.015701 / 0.534201 (-0.518499) | 0.289709 / 0.579283 (-0.289574) | 0.270495 / 0.434364 (-0.163869) | 0.331773 / 0.540337 (-0.208565) | 0.417660 / 1.386936 (-0.969276) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b21d74f5c0ab8a85838af04de8ad85e71b0ac4f \"CML watermark\")\n" ]
Bump max range of dill to 0.3.8
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6630/reactions" }
PR_kwDODunzps5lYPi3
{ "diff_url": "https://github.com/huggingface/datasets/pull/6630.diff", "html_url": "https://github.com/huggingface/datasets/pull/6630", "merged_at": "2024-01-30T15:12:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/6630.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6630" }
2024-01-29T21:35:55Z
https://api.github.com/repos/huggingface/datasets/issues/6630/comments
Release on Jan 27, 2024: https://pypi.org/project/dill/0.3.8/#history
{ "avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4", "events_url": "https://api.github.com/users/ringohoffman/events{/privacy}", "followers_url": "https://api.github.com/users/ringohoffman/followers", "following_url": "https://api.github.com/users/ringohoffman/following{/other_user}", "gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ringohoffman", "id": 27844407, "login": "ringohoffman", "node_id": "MDQ6VXNlcjI3ODQ0NDA3", "organizations_url": "https://api.github.com/users/ringohoffman/orgs", "received_events_url": "https://api.github.com/users/ringohoffman/received_events", "repos_url": "https://api.github.com/users/ringohoffman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions", "type": "User", "url": "https://api.github.com/users/ringohoffman" }
https://api.github.com/repos/huggingface/datasets/issues/6630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6630/timeline
closed
false
6,630
null
2024-01-30T15:12:25Z
null
true
2,105,774,482
https://api.github.com/repos/huggingface/datasets/issues/6629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6629/events
[]
null
2024-02-05T12:35:43Z
[]
https://github.com/huggingface/datasets/pull/6629
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6629). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@huggingface/datasets, feel free to review this PR so that it can be included in the next release.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005222 / 0.011353 (-0.006131) | 0.003621 / 0.011008 (-0.007387) | 0.063091 / 0.038508 (0.024583) | 0.029395 / 0.023109 (0.006285) | 0.231445 / 0.275898 (-0.044453) | 0.256716 / 0.323480 (-0.066764) | 0.004905 / 0.007986 (-0.003081) | 0.002703 / 0.004328 (-0.001625) | 0.048526 / 0.004250 (0.044276) | 0.041382 / 0.037052 (0.004330) | 0.247468 / 0.258489 (-0.011021) | 0.270670 / 0.293841 (-0.023171) | 0.028088 / 0.128546 (-0.100458) | 0.010661 / 0.075646 (-0.064985) | 0.205812 / 0.419271 (-0.213459) | 0.035880 / 0.043533 (-0.007653) | 0.237310 / 0.255139 (-0.017829) | 0.255440 / 0.283200 (-0.027760) | 0.018334 / 0.141683 (-0.123349) | 1.128815 / 1.452155 (-0.323340) | 1.204771 / 1.492716 (-0.287945) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089175 / 0.018006 (0.071169) | 0.298584 / 0.000490 (0.298095) | 0.000206 / 0.000200 (0.000006) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018532 / 0.037411 (-0.018880) | 0.061158 / 0.014526 (0.046632) | 0.074177 / 0.176557 (-0.102380) | 0.119408 / 0.737135 (-0.617728) | 0.073821 / 0.296338 (-0.222518) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled 
read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.277630 / 0.215209 (0.062420) | 2.735038 / 2.077655 (0.657383) | 1.437251 / 1.504120 (-0.066868) | 1.304596 / 1.541195 (-0.236598) | 1.316830 / 1.468490 (-0.151661) | 0.551057 / 4.584777 (-4.033720) | 2.337247 / 3.745712 (-1.408465) | 2.761501 / 5.269862 (-2.508361) | 1.729000 / 4.565676 (-2.836677) | 0.069398 / 0.424275 (-0.354877) | 0.005059 / 0.007607 (-0.002548) | 0.359594 / 0.226044 (0.133550) | 3.283325 / 2.268929 (1.014397) | 1.777410 / 55.444624 (-53.667214) | 1.518522 / 6.876477 (-5.357954) | 1.546712 / 2.142072 (-0.595361) | 0.627047 / 4.805227 (-4.178180) | 0.117058 / 6.500664 (-6.383606) | 0.043437 / 0.075469 (-0.032032) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.056303 / 1.841788 (-0.785484) | 11.552295 / 8.074308 (3.477987) | 10.184582 / 10.191392 (-0.006810) | 0.129061 / 0.680424 (-0.551363) | 0.014093 / 0.534201 (-0.520108) | 0.292268 / 0.579283 (-0.287015) | 0.264750 / 0.434364 (-0.169614) | 0.334770 / 0.540337 (-0.205567) | 0.436749 / 1.386936 (-0.950187) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005408 / 0.011353 (-0.005945) | 0.003650 / 0.011008 (-0.007358) | 0.054263 / 0.038508 (0.015755) | 0.031112 / 0.023109 (0.008003) | 0.270582 / 0.275898 (-0.005316) | 0.303506 / 0.323480 (-0.019974) | 0.004351 / 0.007986 (-0.003635) | 0.002654 / 0.004328 (-0.001674) | 0.049631 / 0.004250 (0.045381) | 0.045209 / 0.037052 (0.008156) | 0.284992 / 0.258489 (0.026503) | 0.316653 / 0.293841 (0.022812) | 0.049526 / 0.128546 (-0.079020) | 0.010696 / 0.075646 (-0.064951) | 0.057859 / 0.419271 (-0.361413) | 0.034227 / 0.043533 (-0.009306) | 0.269656 / 0.255139 (0.014517) | 0.288766 / 0.283200 (0.005567) | 0.017892 / 0.141683 (-0.123791) | 1.167492 / 1.452155 (-0.284662) | 1.217263 / 1.492716 (-0.275454) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089306 / 0.018006 (0.071299) | 0.300774 / 0.000490 (0.300284) | 0.000198 / 0.000200 (-0.000002) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022050 / 0.037411 (-0.015361) | 0.076781 / 0.014526 (0.062255) | 0.086597 / 0.176557 (-0.089959) | 0.125094 / 0.737135 (-0.612042) | 0.089412 / 0.296338 (-0.206927) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287444 / 0.215209 (0.072235) | 2.830047 / 2.077655 (0.752392) | 1.567492 / 1.504120 (0.063372) | 1.439875 / 1.541195 (-0.101320) | 1.461699 / 1.468490 (-0.006791) | 0.569595 / 4.584777 (-4.015182) | 2.454391 / 3.745712 (-1.291322) | 2.655829 / 5.269862 (-2.614032) | 1.756122 / 4.565676 (-2.809554) | 0.063333 / 0.424275 (-0.360942) | 0.005086 / 0.007607 (-0.002521) | 0.351210 / 0.226044 (0.125166) | 3.375545 / 2.268929 (1.106617) | 1.945367 / 55.444624 (-53.499258) | 1.662635 / 6.876477 (-5.213841) | 1.762859 / 2.142072 (-0.379213) | 0.651889 / 4.805227 (-4.153339) | 0.118341 / 6.500664 (-6.382323) | 0.040897 / 0.075469 (-0.034572) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005270 / 1.841788 (-0.836518) | 12.247847 / 8.074308 (4.173539) | 10.828131 / 10.191392 (0.636739) | 0.129741 / 0.680424 (-0.550683) | 0.015184 / 0.534201 (-0.519017) | 0.295440 / 0.579283 (-0.283843) | 0.276759 / 0.434364 (-0.157605) | 0.329046 / 0.540337 (-0.211291) | 0.421750 / 1.386936 (-0.965186) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ea261ddc295527d0c1cd9f90fb61668f14135608 \"CML watermark\")\n" ]
Support push_to_hub without org/user to default to logged-in user
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6629/reactions" }
PR_kwDODunzps5lV0aF
{ "diff_url": "https://github.com/huggingface/datasets/pull/6629.diff", "html_url": "https://github.com/huggingface/datasets/pull/6629", "merged_at": "2024-02-05T12:29:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/6629.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6629" }
2024-01-29T15:36:52Z
https://api.github.com/repos/huggingface/datasets/issues/6629/comments
This behavior is aligned with: - the behavior of `datasets` before merging #6519 - the behavior described in the corresponding docstring - the behavior of `huggingface_hub.create_repo` Revert "Support push_to_hub canonical datasets (#6519)" - This reverts commit a887ee78835573f5d80f9e414e8443b4caff3541. Fix #6597.
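A minimal sketch of the restored behavior (the repo names below are placeholders, not taken from the PR):

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

# Without a "user/" or "org/" prefix, the dataset is pushed under the namespace
# of the currently logged-in user, mirroring huggingface_hub.create_repo.
ds.push_to_hub("my_dataset")          # -> "<your-username>/my_dataset"

# An explicit namespace still works as before.
ds.push_to_hub("my-org/my_dataset")
```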
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6629/timeline
closed
false
6,629
null
2024-02-05T12:29:36Z
null
true
2,105,760,502
https://api.github.com/repos/huggingface/datasets/issues/6628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6628/events
[]
null
2024-02-05T10:29:20Z
[]
https://github.com/huggingface/datasets/pull/6628
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6628). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@huggingface/datasets, feel free to review this PR so that it can be included in the next release.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004907 / 0.011353 (-0.006446) | 0.003200 / 0.011008 (-0.007808) | 0.062601 / 0.038508 (0.024093) | 0.028607 / 0.023109 (0.005498) | 0.242688 / 0.275898 (-0.033210) | 0.263754 / 0.323480 (-0.059726) | 0.003084 / 0.007986 (-0.004901) | 0.002744 / 0.004328 (-0.001585) | 0.048686 / 0.004250 (0.044436) | 0.040734 / 0.037052 (0.003682) | 0.262585 / 0.258489 (0.004096) | 0.282822 / 0.293841 (-0.011019) | 0.027470 / 0.128546 (-0.101076) | 0.010356 / 0.075646 (-0.065290) | 0.206397 / 0.419271 (-0.212874) | 0.035440 / 0.043533 (-0.008093) | 0.248599 / 0.255139 (-0.006540) | 0.268869 / 0.283200 (-0.014331) | 0.018542 / 0.141683 (-0.123141) | 1.128139 / 1.452155 (-0.324016) | 1.172115 / 1.492716 (-0.320602) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.107939 / 0.018006 (0.089933) | 0.301801 / 0.000490 (0.301311) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018505 / 0.037411 (-0.018906) | 0.061350 / 0.014526 (0.046824) | 0.072645 / 0.176557 (-0.103912) | 0.119459 / 0.737135 (-0.617676) | 0.074711 / 0.296338 (-0.221628) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled 
read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275132 / 0.215209 (0.059922) | 2.714936 / 2.077655 (0.637281) | 1.434204 / 1.504120 (-0.069916) | 1.328358 / 1.541195 (-0.212837) | 1.320706 / 1.468490 (-0.147784) | 0.555723 / 4.584777 (-4.029054) | 2.401335 / 3.745712 (-1.344378) | 2.765609 / 5.269862 (-2.504253) | 1.715207 / 4.565676 (-2.850470) | 0.074990 / 0.424275 (-0.349285) | 0.004999 / 0.007607 (-0.002608) | 0.328435 / 0.226044 (0.102390) | 3.254945 / 2.268929 (0.986017) | 1.781105 / 55.444624 (-53.663519) | 1.509491 / 6.876477 (-5.366985) | 1.520670 / 2.142072 (-0.621402) | 0.636411 / 4.805227 (-4.168817) | 0.115616 / 6.500664 (-6.385048) | 0.041633 / 0.075469 (-0.033836) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975462 / 1.841788 (-0.866326) | 11.480359 / 8.074308 (3.406051) | 10.528665 / 10.191392 (0.337273) | 0.141323 / 0.680424 (-0.539100) | 0.013510 / 0.534201 (-0.520691) | 0.293570 / 0.579283 (-0.285713) | 0.259956 / 0.434364 (-0.174408) | 0.331440 / 0.540337 (-0.208898) | 0.453487 / 1.386936 (-0.933449) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005278 / 0.011353 (-0.006075) | 0.003400 / 0.011008 (-0.007608) | 0.049442 / 0.038508 (0.010934) | 0.031738 / 0.023109 (0.008628) | 0.292334 / 0.275898 (0.016436) | 0.308931 / 0.323480 (-0.014549) | 0.004290 / 0.007986 (-0.003696) | 0.002738 / 0.004328 (-0.001591) | 0.048944 / 0.004250 (0.044694) | 0.044273 / 0.037052 (0.007221) | 0.301434 / 0.258489 (0.042945) | 0.333067 / 0.293841 (0.039226) | 0.048741 / 0.128546 (-0.079805) | 0.010357 / 0.075646 (-0.065289) | 0.057777 / 0.419271 (-0.361495) | 0.033892 / 0.043533 (-0.009641) | 0.286921 / 0.255139 (0.031782) | 0.306204 / 0.283200 (0.023005) | 0.018764 / 0.141683 (-0.122919) | 1.142000 / 1.452155 (-0.310155) | 1.206728 / 1.492716 (-0.285988) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094233 / 0.018006 (0.076227) | 0.302553 / 0.000490 (0.302063) | 0.000213 / 0.000200 (0.000013) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021814 / 0.037411 (-0.015598) | 0.075143 / 0.014526 (0.060617) | 0.087717 / 0.176557 (-0.088840) | 0.126079 / 0.737135 (-0.611056) | 0.089083 / 0.296338 (-0.207255) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293844 / 0.215209 (0.078635) | 2.859481 / 2.077655 (0.781827) | 1.580366 / 1.504120 (0.076246) | 1.462633 / 1.541195 (-0.078562) | 1.471052 / 1.468490 (0.002562) | 0.574755 / 4.584777 (-4.010022) | 2.408925 / 3.745712 (-1.336787) | 2.673618 / 5.269862 (-2.596243) | 1.746218 / 4.565676 (-2.819459) | 0.063435 / 0.424275 (-0.360840) | 0.005023 / 0.007607 (-0.002584) | 0.341990 / 0.226044 (0.115946) | 3.430862 / 2.268929 (1.161933) | 1.953869 / 55.444624 (-53.490755) | 1.661276 / 6.876477 (-5.215201) | 1.761575 / 2.142072 (-0.380498) | 0.656388 / 4.805227 (-4.148839) | 0.117774 / 6.500664 (-6.382890) | 0.040290 / 0.075469 (-0.035179) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.004315 / 1.841788 (-0.837473) | 12.249719 / 8.074308 (4.175411) | 10.942703 / 10.191392 (0.751311) | 0.128552 / 0.680424 (-0.551872) | 0.015958 / 0.534201 (-0.518242) | 0.287330 / 0.579283 (-0.291953) | 0.274336 / 0.434364 (-0.160028) | 0.326233 / 0.540337 (-0.214104) | 0.414548 / 1.386936 (-0.972388) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#db47d6d95c5346368710d3c852f20ffc1b0f1c1c \"CML watermark\")\n" ]
Make CLI test support multi-processing
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6628/reactions" }
PR_kwDODunzps5lVxXU
{ "diff_url": "https://github.com/huggingface/datasets/pull/6628.diff", "html_url": "https://github.com/huggingface/datasets/pull/6628", "merged_at": "2024-02-05T10:23:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/6628.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6628" }
2024-01-29T15:30:09Z
https://api.github.com/repos/huggingface/datasets/issues/6628/comments
Support passing `--num_proc` to the CLI `test` command. This was really useful recently for running the command on `pubmed`: https://huggingface.co/datasets/pubmed/discussions/11
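A sketch of how the new option might be invoked; apart from `--num_proc` itself, the other flags shown are typical companions and may vary by `datasets` version:

```python
import subprocess

# Run the CLI test command with 16 worker processes (dataset name and
# companion flags are illustrative, not taken from the PR).
subprocess.run(
    ["datasets-cli", "test", "pubmed", "--save_info", "--all_configs", "--num_proc", "16"],
    check=True,
)
```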
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6628/timeline
closed
false
6,628
null
2024-02-05T10:23:13Z
null
true
2,105,735,816
https://api.github.com/repos/huggingface/datasets/issues/6627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6627/events
[]
null
2024-01-29T15:47:34Z
[]
https://github.com/huggingface/datasets/pull/6627
COLLABORATOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6627). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004944 / 0.011353 (-0.006409) | 0.003279 / 0.011008 (-0.007729) | 0.063041 / 0.038508 (0.024533) | 0.029888 / 0.023109 (0.006779) | 0.259138 / 0.275898 (-0.016760) | 0.276907 / 0.323480 (-0.046573) | 0.004015 / 0.007986 (-0.003970) | 0.002647 / 0.004328 (-0.001682) | 0.048944 / 0.004250 (0.044693) | 0.039412 / 0.037052 (0.002360) | 0.278069 / 0.258489 (0.019580) | 0.299139 / 0.293841 (0.005298) | 0.027272 / 0.128546 (-0.101274) | 0.010445 / 0.075646 (-0.065202) | 0.206925 / 0.419271 (-0.212347) | 0.035589 / 0.043533 (-0.007944) | 0.256805 / 0.255139 (0.001666) | 0.275128 / 0.283200 (-0.008072) | 0.017888 / 0.141683 (-0.123795) | 1.136983 / 1.452155 (-0.315172) | 1.167495 / 1.492716 (-0.325222) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.088167 / 0.018006 (0.070161) | 0.297360 / 0.000490 (0.296871) | 0.000231 / 0.000200 (0.000031) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018114 / 0.037411 (-0.019297) | 0.061217 / 0.014526 (0.046691) | 0.072269 / 0.176557 (-0.104288) | 0.120607 / 0.737135 (-0.616528) | 0.073517 / 0.296338 (-0.222822) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282580 / 0.215209 (0.067371) | 2.758650 / 2.077655 (0.680995) | 1.425125 / 1.504120 (-0.078995) | 1.303182 / 1.541195 (-0.238013) | 1.341035 / 1.468490 (-0.127455) | 0.549485 / 4.584777 (-4.035292) | 2.346297 / 3.745712 (-1.399415) | 2.686457 / 5.269862 (-2.583405) | 1.684789 / 4.565676 (-2.880888) | 0.061279 / 0.424275 (-0.362996) | 0.004902 / 0.007607 (-0.002705) | 0.333089 / 0.226044 (0.107044) | 3.297016 / 2.268929 (1.028087) | 1.765614 / 55.444624 (-53.679010) | 1.499314 / 6.876477 (-5.377162) | 1.501275 / 2.142072 (-0.640797) | 0.619039 / 4.805227 (-4.186189) | 0.114284 / 6.500664 (-6.386380) | 0.041481 / 0.075469 (-0.033988) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.973924 / 1.841788 (-0.867863) | 11.268266 / 8.074308 (3.193958) | 10.304738 / 10.191392 (0.113346) | 0.129297 / 0.680424 (-0.551127) | 0.014894 / 0.534201 (-0.519307) | 0.287658 / 0.579283 (-0.291626) | 0.266476 / 0.434364 (-0.167888) | 0.322199 / 0.540337 (-0.218138) | 0.419568 / 1.386936 (-0.967368) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005220 / 0.011353 (-0.006133) | 0.003310 / 0.011008 (-0.007698) | 0.049707 / 0.038508 (0.011199) | 0.031148 / 0.023109 (0.008039) | 0.284644 / 0.275898 (0.008746) | 0.302767 / 0.323480 (-0.020712) | 0.004245 / 0.007986 (-0.003740) | 0.002677 / 0.004328 (-0.001651) | 0.049870 / 0.004250 (0.045620) | 0.043922 / 0.037052 (0.006870) | 0.294955 / 0.258489 (0.036466) | 0.322144 / 0.293841 (0.028303) | 0.047211 / 0.128546 (-0.081336) | 0.010492 / 0.075646 (-0.065155) | 0.058152 / 0.419271 (-0.361120) | 0.033508 / 0.043533 (-0.010025) | 0.281266 / 0.255139 (0.026127) | 0.300010 / 0.283200 (0.016810) | 0.017616 / 0.141683 (-0.124067) | 1.124658 / 1.452155 (-0.327496) | 1.167222 / 1.492716 (-0.325495) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.089085 / 0.018006 (0.071079) | 0.297912 / 0.000490 (0.297423) | 0.000211 / 0.000200 (0.000011) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021669 / 0.037411 (-0.015742) | 0.075648 / 0.014526 (0.061123) | 0.086054 / 0.176557 (-0.090503) | 0.125236 / 0.737135 (-0.611899) | 0.088146 / 0.296338 (-0.208192) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295238 / 0.215209 (0.080029) | 2.870002 / 2.077655 (0.792347) | 1.582534 / 1.504120 (0.078414) | 1.466710 / 1.541195 (-0.074485) | 1.475352 / 1.468490 (0.006861) | 0.554745 / 4.584777 (-4.030032) | 2.412533 / 3.745712 (-1.333179) | 2.583863 / 5.269862 (-2.685999) | 1.689124 / 4.565676 (-2.876552) | 0.061353 / 0.424275 (-0.362922) | 0.005015 / 0.007607 (-0.002592) | 0.338733 / 0.226044 (0.112688) | 3.356710 / 2.268929 (1.087781) | 1.932143 / 55.444624 (-53.512481) | 1.660081 / 6.876477 (-5.216396) | 1.764961 / 2.142072 (-0.377111) | 0.640002 / 4.805227 (-4.165225) | 0.115251 / 6.500664 (-6.385413) | 0.040627 / 0.075469 (-0.034842) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992296 / 1.841788 (-0.849492) | 11.821259 / 8.074308 (3.746951) | 10.715570 / 10.191392 (0.524178) | 0.142934 / 0.680424 (-0.537489) | 0.015680 / 0.534201 (-0.518521) | 0.287435 / 0.579283 (-0.291848) | 0.276817 / 0.434364 (-0.157547) | 0.327823 / 0.540337 (-0.212515) | 0.413404 / 1.386936 (-0.973532) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#82c78b614d34ee42180d35a882875a28d6281db0 \"CML watermark\")\n" ]
Disable `tqdm` bars in non-interactive environments
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6627/reactions" }
PR_kwDODunzps5lVr-t
{ "diff_url": "https://github.com/huggingface/datasets/pull/6627.diff", "html_url": "https://github.com/huggingface/datasets/pull/6627", "merged_at": "2024-01-29T15:41:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/6627.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6627" }
2024-01-29T15:18:21Z
https://api.github.com/repos/huggingface/datasets/issues/6627/comments
Replace `disable=False` with `disable=None` in the `tqdm` bars to disable them in non-interactive environments (by default). For more info, see a [similar PR](https://github.com/huggingface/huggingface_hub/pull/2000) in `huggingface_hub`.
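For context, `tqdm` treats `disable=None` as "disable on non-TTY", so a bar created this way stays visible in a terminal but is silenced when output goes to a file or CI log. A small sketch:

```python
import sys
from tqdm.auto import tqdm

# disable=None lets tqdm check whether the output stream is a TTY and
# silence itself otherwise; disable=False would always render the bar.
for _ in tqdm(range(1_000), disable=None):
    pass

print("stderr is a TTY:", sys.stderr.isatty())
```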
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6627/timeline
closed
false
6,627
null
2024-01-29T15:41:32Z
null
true
2,105,482,522
https://api.github.com/repos/huggingface/datasets/issues/6626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6626/events
[]
null
2024-01-29T15:18:25Z
[]
https://github.com/huggingface/datasets/pull/6626
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6626). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005085 / 0.011353 (-0.006268) | 0.003592 / 0.011008 (-0.007417) | 0.062591 / 0.038508 (0.024083) | 0.031063 / 0.023109 (0.007954) | 0.247029 / 0.275898 (-0.028869) | 0.273706 / 0.323480 (-0.049774) | 0.004034 / 0.007986 (-0.003951) | 0.002672 / 0.004328 (-0.001657) | 0.048407 / 0.004250 (0.044156) | 0.049229 / 0.037052 (0.012177) | 0.264316 / 0.258489 (0.005827) | 0.284953 / 0.293841 (-0.008888) | 0.027712 / 0.128546 (-0.100834) | 0.010619 / 0.075646 (-0.065027) | 0.210017 / 0.419271 (-0.209254) | 0.035636 / 0.043533 (-0.007897) | 0.252830 / 0.255139 (-0.002309) | 0.278772 / 0.283200 (-0.004428) | 0.017356 / 0.141683 (-0.124326) | 1.140202 / 1.452155 (-0.311953) | 1.204807 / 1.492716 (-0.287909) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.089130 / 0.018006 (0.071123) | 0.300115 / 0.000490 (0.299626) | 0.000213 / 0.000200 (0.000013) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018352 / 0.037411 (-0.019059) | 0.061431 / 0.014526 (0.046905) | 0.073911 / 0.176557 (-0.102646) | 0.121230 / 0.737135 (-0.615906) | 0.074867 / 0.296338 (-0.221471) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282272 / 0.215209 (0.067063) | 2.737413 / 2.077655 (0.659759) | 1.446651 / 1.504120 (-0.057469) | 1.319686 / 1.541195 (-0.221508) | 1.327479 / 1.468490 (-0.141011) | 0.558003 / 4.584777 (-4.026774) | 2.361623 / 3.745712 (-1.384089) | 2.770436 / 5.269862 (-2.499425) | 1.703450 / 4.565676 (-2.862227) | 0.062034 / 0.424275 (-0.362241) | 0.005070 / 0.007607 (-0.002537) | 0.337265 / 0.226044 (0.111221) | 3.299438 / 2.268929 (1.030509) | 1.781273 / 55.444624 (-53.663351) | 1.512743 / 6.876477 (-5.363734) | 1.530995 / 2.142072 (-0.611077) | 0.630210 / 4.805227 (-4.175017) | 0.116219 / 6.500664 (-6.384445) | 0.042220 / 0.075469 (-0.033249) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.946341 / 1.841788 (-0.895446) | 11.462179 / 8.074308 (3.387871) | 10.603314 / 10.191392 (0.411922) | 0.128826 / 0.680424 (-0.551598) | 0.013994 / 0.534201 (-0.520207) | 0.288142 / 0.579283 (-0.291141) | 0.266941 / 0.434364 (-0.167422) | 0.329392 / 0.540337 (-0.210946) | 0.431720 / 1.386936 (-0.955216) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005303 / 0.011353 (-0.006050) | 0.003587 / 0.011008 (-0.007422) | 0.049437 / 0.038508 (0.010929) | 0.031940 / 0.023109 (0.008831) | 0.276651 / 0.275898 (0.000752) | 0.297240 / 0.323480 (-0.026240) | 0.004202 / 0.007986 (-0.003784) | 0.002709 / 0.004328 (-0.001619) | 0.048647 / 0.004250 (0.044397) | 0.044147 / 0.037052 (0.007095) | 0.291171 / 0.258489 (0.032682) | 0.319297 / 0.293841 (0.025456) | 0.048167 / 0.128546 (-0.080379) | 0.010630 / 0.075646 (-0.065016) | 0.058402 / 0.419271 (-0.360869) | 0.033817 / 0.043533 (-0.009716) | 0.300546 / 0.255139 (0.045407) | 0.319396 / 0.283200 (0.036197) | 0.017736 / 0.141683 (-0.123946) | 1.159590 / 1.452155 (-0.292565) | 1.191778 / 1.492716 (-0.300939) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.088971 / 0.018006 (0.070965) | 0.299721 / 0.000490 (0.299231) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021895 / 0.037411 (-0.015516) | 0.075388 / 0.014526 (0.060862) | 0.087446 / 0.176557 (-0.089111) | 0.126339 / 0.737135 (-0.610796) | 0.089329 / 0.296338 (-0.207010) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296642 / 0.215209 (0.081433) | 2.916023 / 2.077655 (0.838368) | 1.593180 / 1.504120 (0.089060) | 1.470491 / 1.541195 (-0.070704) | 1.485713 / 1.468490 (0.017223) | 0.577204 / 4.584777 (-4.007573) | 2.436463 / 3.745712 (-1.309249) | 2.651004 / 5.269862 (-2.618858) | 1.754026 / 4.565676 (-2.811651) | 0.064943 / 0.424275 (-0.359332) | 0.005115 / 0.007607 (-0.002492) | 0.362082 / 0.226044 (0.136038) | 3.498198 / 2.268929 (1.229270) | 1.951936 / 55.444624 (-53.492688) | 1.682027 / 6.876477 (-5.194450) | 1.751768 / 2.142072 (-0.390304) | 0.668479 / 4.805227 (-4.136748) | 0.119934 / 6.500664 (-6.380730) | 0.041419 / 0.075469 (-0.034050) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.978145 / 1.841788 (-0.863643) | 11.984984 / 8.074308 (3.910676) | 10.732377 / 10.191392 (0.540985) | 0.141868 / 0.680424 (-0.538555) | 0.015256 / 0.534201 (-0.518945) | 0.288488 / 0.579283 (-0.290795) | 0.276091 / 0.434364 (-0.158273) | 0.330429 / 0.540337 (-0.209908) | 0.423964 / 1.386936 (-0.962972) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bb8497b9dec2a3807c887b8184f902d1d8d7c25a \"CML watermark\")\n" ]
Raise error on bad split name
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6626/reactions" }
PR_kwDODunzps5lU0I2
{ "diff_url": "https://github.com/huggingface/datasets/pull/6626.diff", "html_url": "https://github.com/huggingface/datasets/pull/6626", "merged_at": "2024-01-29T15:12:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/6626.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6626" }
2024-01-29T13:17:41Z
https://api.github.com/repos/huggingface/datasets/issues/6626/comments
e.g. dashes '-' are not allowed in split names. This should add an error message on datasets with unsupported split names, like https://huggingface.co/datasets/open-source-metrics/test cc @AndreaFrancis
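A hedged sketch of the kind of validation this implies; the regex below is an assumption (word characters, optionally dot-separated), not copied from the PR:

```python
import re

# Assumed split-name rule; the authoritative pattern lives in datasets/splits.py
# and may differ in detail.
_SPLIT_RE = re.compile(r"^\w+(\.\w+)*$")

def check_split_name(name: str) -> None:
    if not _SPLIT_RE.match(name):
        raise ValueError(f"Split name should match '{_SPLIT_RE.pattern}' but got '{name}'.")

check_split_name("train")  # ok

try:
    check_split_name("test-set")
except ValueError as e:
    print(e)  # dashes are rejected
```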
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6626/timeline
closed
false
6,626
null
2024-01-29T15:12:18Z
null
true
2,103,950,718
https://api.github.com/repos/huggingface/datasets/issues/6624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6624/events
[]
null
2024-02-06T09:43:31Z
[]
https://github.com/huggingface/datasets/issues/6624
NONE
not_planned
null
null
[ "Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it." ]
How to download the laion-coco dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6624/reactions" }
I_kwDODunzps59Z71-
null
2024-01-28T03:56:05Z
https://api.github.com/repos/huggingface/datasets/issues/6624/comments
The laion-coco dataset is not available now. How can I download it? https://huggingface.co/datasets/laion/laion-coco
{ "avatar_url": "https://avatars.githubusercontent.com/u/15981416?v=4", "events_url": "https://api.github.com/users/vanpersie32/events{/privacy}", "followers_url": "https://api.github.com/users/vanpersie32/followers", "following_url": "https://api.github.com/users/vanpersie32/following{/other_user}", "gists_url": "https://api.github.com/users/vanpersie32/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vanpersie32", "id": 15981416, "login": "vanpersie32", "node_id": "MDQ6VXNlcjE1OTgxNDE2", "organizations_url": "https://api.github.com/users/vanpersie32/orgs", "received_events_url": "https://api.github.com/users/vanpersie32/received_events", "repos_url": "https://api.github.com/users/vanpersie32/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vanpersie32/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vanpersie32/subscriptions", "type": "User", "url": "https://api.github.com/users/vanpersie32" }
https://api.github.com/repos/huggingface/datasets/issues/6624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6624/timeline
closed
false
6,624
null
2024-02-06T09:43:31Z
null
false
2,103,870,123
https://api.github.com/repos/huggingface/datasets/issues/6623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6623/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-03-08T14:27:08Z
[]
https://github.com/huggingface/datasets/issues/6623
NONE
null
null
null
[ "@mariosasko, @lhoestq, @albertvillanova\r\nhey guys! can anyone help? or can you guys suggest who can help with this?", "Hi ! \r\n\r\n1. When the dataset is running of of examples, the last batches received by the GPU can be incomplete or empty/missing. We haven't implemented yet a way to ignore the last batch. It might require the datasets to provide the number of examples per shard though, so that we can know when to stop.\r\n2. Samplers are not compatible with IterableDatasets in pytorch\r\n3. if `dataset.n_shards % world_size != 0` then all the nodes will read/stream the full dataset in order (possibly reading/streaming the same data multiple times), BUT will only yield one example out of `world_size` so that each example goes to one exactly one GPU.\r\n4. no, sharding should be down up-front and can take some time depending on the dataset size and format", "> if dataset.n_shards % world_size != 0 then all the nodes will read/stream the full dataset in order (possibly reading/streaming the same data multiple times), BUT will only yield one example out of world_size so that each example goes to one exactly one GPU.\r\n\r\nconsidering there's just 1 shard and 2 worker nodes, do you mean each worker node will load the whole dataset but still receive half of that shard while streaming?", "Yes both nodes will stream from the 1 shard, but each node will skip half of the examples. This way in total each example is seen once and exactly once during you distributed training.\r\n\r\nThough it terms of I/O, the dataset is effectively read/streamed twice.", "what if the number of samples in that shard % num_nodes != 0? it will break/get stuck? or is the data repeated in that case for gradient sync?", "In the case one at least one of the noes will get an empty/incomplete batch. The data is not repeated in that case. If the training loop doesn't take this into account it can lead to unexpected behaviors indeed.\r\n\r\nIn the future we'd like to add a feature that would allow the nodes to ignore the last batch, this way all the nodes would only have full batches.", "> In the case one at least one of the noes will get an empty/incomplete batch. The data is not repeated in that case. If the training loop doesn't take this into account it can lead to unexpected behaviors indeed.\r\n> \r\n> In the future we'd like to add a feature that would allow the nodes to ignore the last batch, this way all the nodes would only have full batches.\r\n\r\nIs there any method to modify one dataset's n_shard? modify the number of files is ok? one file == one shard?", "> modify the number of files is ok? one file == one shard?\r\n\r\nYep, one file == one shard :)" ]
streaming datasets don't work properly with multi-node
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6623/reactions" }
I_kwDODunzps59ZoKr
null
2024-01-27T23:46:13Z
https://api.github.com/repos/huggingface/datasets/issues/6623/comments
### Feature request Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and a batch size of 2. This dataset is an `IterableDataset` since I am streaming it. Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already split, I don’t have to use `DistributedSampler` (also they don't work with iterable datasets anyway)? But in this case I noticed the following: First iteration: first GPU will get → [1, 2] second GPU will get → [3, 4] Second iteration: first GPU will get → [5] second GPU will get → Nothing, which actually creates an issue since in the case of `DistributedSampler`, the samples are repeated internally to ensure none of the GPUs at any iteration is missing any data for gradient sync. So my questions are: 1. Here, since splitting happens beforehand, how to make sure each GPU gets a batch at each iteration to avoid gradient sync issues? 2. Do we need to use `DistributedSampler`? If yes, how? 3. In the docstring of `split_dataset_by_node`, this is mentioned: *"If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.n_shards % world_size == 0`), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples."* Can you explain the last part here? 4. If `dataset.n_shards % world_size != 0`, is it possible to shard the streaming dataset on the fly to avoid the case where data is missing? ### Motivation Streaming datasets should work with DDP since for big LLMs a lot of data is required, DDP/multi-node is mostly used to train such models, and streaming can actually help solve the data part of it. ### Your contribution Yes, I can help in submitting the PR once we reach a mutual understanding of how it should behave.
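For reference, the setup described in this feature request corresponds roughly to the following sketch; the rank/world-size wiring and dataset name are illustrative:

```python
import os
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader

rank = int(os.environ["RANK"])              # set by the launcher, e.g. torchrun
world_size = int(os.environ["WORLD_SIZE"])

ds = load_dataset("my_corpus", split="train", streaming=True)  # hypothetical dataset
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)

# No DistributedSampler: the split already assigns examples to nodes, but the
# final step can yield unequal batches across ranks, which is the issue raised here.
loader = DataLoader(ds, batch_size=2)
```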
{ "avatar_url": "https://avatars.githubusercontent.com/u/30778939?v=4", "events_url": "https://api.github.com/users/rohitgr7/events{/privacy}", "followers_url": "https://api.github.com/users/rohitgr7/followers", "following_url": "https://api.github.com/users/rohitgr7/following{/other_user}", "gists_url": "https://api.github.com/users/rohitgr7/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rohitgr7", "id": 30778939, "login": "rohitgr7", "node_id": "MDQ6VXNlcjMwNzc4OTM5", "organizations_url": "https://api.github.com/users/rohitgr7/orgs", "received_events_url": "https://api.github.com/users/rohitgr7/received_events", "repos_url": "https://api.github.com/users/rohitgr7/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rohitgr7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohitgr7/subscriptions", "type": "User", "url": "https://api.github.com/users/rohitgr7" }
https://api.github.com/repos/huggingface/datasets/issues/6623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6623/timeline
open
false
6,623
null
null
null
false
2,103,780,697
https://api.github.com/repos/huggingface/datasets/issues/6622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6622/events
[]
null
2024-02-08T11:18:21Z
[]
https://github.com/huggingface/datasets/issues/6622
NONE
completed
null
null
[ "This should now be fixed by https://github.com/huggingface/datasets/pull/6550 and updated with https://github.com/huggingface/datasets/pull/6646\r\n\r\nFeel free to re-open if you're still having issues :)" ]
multi-GPU map does not work
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6622/reactions" }
I_kwDODunzps59ZSVZ
null
2024-01-27T20:06:08Z
https://api.github.com/repos/huggingface/datasets/issues/6622/comments
### Describe the bug Here is the code for single-GPU processing: https://pastebin.com/bfmEeK2y Here is the code for multi-GPU processing: https://pastebin.com/gQ7i5AQy Here is the video showing that the multi-GPU mapping does not work as expected (there are so many things wrong here, it's better to watch the 3-minute video than explain here): https://youtu.be/RNbdPkSppc4 ### Steps to reproduce the bug - ### Expected behavior - ### Environment info x2 RTX A4000
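The fixes linked in the comment concern multi-process `map` with one GPU per worker. A hedged sketch of that pattern (not the reporter's exact code; the dataset name and the 2-GPU assumption are illustrative):

```python
from datasets import load_dataset

def process(batch, rank):
    device = f"cuda:{rank % 2}"  # assumption: 2 GPUs, as in the report
    # move your model to `device` here and run inference on `batch`
    return batch

ds = load_dataset("imdb", split="train")
# with_rank=True passes each worker's rank so it can pick its own GPU
ds = ds.map(process, batched=True, with_rank=True, num_proc=2)
```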
{ "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kopyl", "id": 17604849, "login": "kopyl", "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "organizations_url": "https://api.github.com/users/kopyl/orgs", "received_events_url": "https://api.github.com/users/kopyl/received_events", "repos_url": "https://api.github.com/users/kopyl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "type": "User", "url": "https://api.github.com/users/kopyl" }
https://api.github.com/repos/huggingface/datasets/issues/6622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6622/timeline
closed
false
6,622
null
2024-02-08T11:18:21Z
null
false
2,103,675,294
https://api.github.com/repos/huggingface/datasets/issues/6621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6621/events
[]
null
2024-01-27T17:14:43Z
[]
https://github.com/huggingface/datasets/issues/6621
NONE
completed
null
null
[]
deleted
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6621/reactions" }
I_kwDODunzps59Y4me
null
2024-01-27T16:59:58Z
https://api.github.com/repos/huggingface/datasets/issues/6621/comments
...
{ "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kopyl", "id": 17604849, "login": "kopyl", "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "organizations_url": "https://api.github.com/users/kopyl/orgs", "received_events_url": "https://api.github.com/users/kopyl/received_events", "repos_url": "https://api.github.com/users/kopyl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "type": "User", "url": "https://api.github.com/users/kopyl" }
https://api.github.com/repos/huggingface/datasets/issues/6621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6621/timeline
closed
false
6,621
null
2024-01-27T17:14:43Z
null
false
2,103,110,536
https://api.github.com/repos/huggingface/datasets/issues/6620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6620/events
[]
null
2024-02-06T09:40:19Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6620
NONE
not_planned
null
null
[ "Thanks for reporting, @kiehls90.\r\n\r\nAs this seems an issue with the specific \"wiki_dpr\" dataset, I am transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/wiki_dpr/discussions/13" ]
wiki_dpr.py error (ID mismatch between lines {id} and vector {vec_id})
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6620/reactions" }
I_kwDODunzps59WuuI
null
2024-01-27T01:00:09Z
https://api.github.com/repos/huggingface/datasets/issues/6620/comments
### Describe the bug I'm trying to run a RAG example, and the dataset is wiki_dpr. The wiki_dpr download and extraction completed successfully. However, at the "generating train split" stage, an error from wiki_dpr.py keeps popping up, specifically in "_generate_examples": 1. The following error occurs at the line **id, text, title = line.strip().split("\t")**: ValueError: not enough values to unpack (expected 3, got 2) -> I handle exceptions for this part so that even if an error occurs, it passes. 2. **ID mismatch between lines {id} and vector {vec_id}** This error seems to occur at the line **assert int(id) == int(vec_id)**. After I handled the exception for the split error, generating the train split progressed to 80%, but an ID mismatch error occurred at about the 16200000th vector id. Debugging is even more difficult because it takes a long time to download and split wiki_dpr. I need help. Thank you in advance!! ### Steps to reproduce the bug Occurs at the "generating train split" step when running the RAG example in the transformers repository. Specifically, it is an error in wiki_dpr.py. ### Expected behavior . ### Environment info python 3.8
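A hypothetical sketch of the exception handling the reporter describes: skip malformed TSV rows instead of crashing, and surface ID mismatches explicitly. The function name and signature are illustrative, not wiki_dpr.py's actual code:

```python
def parse_rows(lines, vec_ids):
    """Yield (id, text, title) rows, skipping malformed TSV lines."""
    for line, vec_id in zip(lines, vec_ids):
        parts = line.strip().split("\t")
        if len(parts) != 3:
            continue  # malformed row: skip instead of crashing
        id_, text, title = parts
        if int(id_) != int(vec_id):
            raise ValueError(f"ID mismatch between line {id_} and vector {vec_id}")
        yield id_, text, title
```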
{ "avatar_url": "https://avatars.githubusercontent.com/u/101498700?v=4", "events_url": "https://api.github.com/users/kiehls90/events{/privacy}", "followers_url": "https://api.github.com/users/kiehls90/followers", "following_url": "https://api.github.com/users/kiehls90/following{/other_user}", "gists_url": "https://api.github.com/users/kiehls90/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kiehls90", "id": 101498700, "login": "kiehls90", "node_id": "U_kgDOBgy_TA", "organizations_url": "https://api.github.com/users/kiehls90/orgs", "received_events_url": "https://api.github.com/users/kiehls90/received_events", "repos_url": "https://api.github.com/users/kiehls90/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kiehls90/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kiehls90/subscriptions", "type": "User", "url": "https://api.github.com/users/kiehls90" }
https://api.github.com/repos/huggingface/datasets/issues/6620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6620/timeline
closed
false
6,620
null
2024-02-06T09:40:19Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
2,102,407,478
https://api.github.com/repos/huggingface/datasets/issues/6619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6619/events
[]
null
2024-01-26T15:53:40Z
[]
https://github.com/huggingface/datasets/pull/6619
COLLABORATOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6619). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005066 / 0.011353 (-0.006287) | 0.003678 / 0.011008 (-0.007330) | 0.063057 / 0.038508 (0.024549) | 0.031250 / 0.023109 (0.008140) | 0.248856 / 0.275898 (-0.027042) | 0.266932 / 0.323480 (-0.056548) | 0.003814 / 0.007986 (-0.004172) | 0.002843 / 0.004328 (-0.001485) | 0.049210 / 0.004250 (0.044959) | 0.041514 / 0.037052 (0.004462) | 0.264874 / 0.258489 (0.006385) | 0.288834 / 0.293841 (-0.005007) | 0.027457 / 0.128546 (-0.101089) | 0.011071 / 0.075646 (-0.064575) | 0.206433 / 0.419271 (-0.212839) | 0.035381 / 0.043533 (-0.008152) | 0.246829 / 0.255139 (-0.008310) | 0.271094 / 0.283200 (-0.012106) | 0.017790 / 0.141683 (-0.123893) | 1.134618 / 1.452155 (-0.317536) | 1.182600 / 1.492716 (-0.310116) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094970 / 0.018006 (0.076964) | 0.306438 / 0.000490 (0.305949) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017786 / 0.037411 (-0.019625) | 0.060652 / 0.014526 (0.046127) | 0.072619 / 0.176557 (-0.103937) | 0.119460 / 0.737135 (-0.617676) | 0.073580 / 0.296338 (-0.222759) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279304 / 0.215209 (0.064095) | 2.747179 / 2.077655 (0.669524) | 1.438291 / 1.504120 (-0.065829) | 1.313405 / 1.541195 (-0.227789) | 1.354569 / 1.468490 (-0.113921) | 0.578375 / 4.584777 (-4.006402) | 2.424576 / 3.745712 (-1.321136) | 2.831513 / 5.269862 (-2.438348) | 1.756062 / 4.565676 (-2.809614) | 0.064460 / 0.424275 (-0.359815) | 0.005065 / 0.007607 (-0.002542) | 0.335003 / 0.226044 (0.108958) | 3.310500 / 2.268929 (1.041571) | 1.778017 / 55.444624 (-53.666607) | 1.504743 / 6.876477 (-5.371734) | 1.532843 / 2.142072 (-0.609229) | 0.662110 / 4.805227 (-4.143118) | 0.118239 / 6.500664 (-6.382425) | 0.042135 / 0.075469 (-0.033335) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945650 / 1.841788 (-0.896137) | 11.623179 / 8.074308 (3.548871) | 10.927315 / 10.191392 (0.735923) | 0.131050 / 0.680424 (-0.549374) | 0.014725 / 0.534201 (-0.519476) | 0.290716 / 0.579283 (-0.288567) | 0.272357 / 0.434364 (-0.162007) | 0.323274 / 0.540337 (-0.217064) | 0.426692 / 1.386936 (-0.960244) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005478 / 0.011353 (-0.005875) | 0.003618 / 0.011008 (-0.007390) | 0.049599 / 0.038508 (0.011091) | 0.030814 / 0.023109 (0.007705) | 0.273663 / 0.275898 (-0.002235) | 0.292099 / 0.323480 (-0.031381) | 0.004196 / 0.007986 (-0.003790) | 0.002779 / 0.004328 (-0.001550) | 0.047812 / 0.004250 (0.043562) | 0.045095 / 0.037052 (0.008043) | 0.286288 / 0.258489 (0.027799) | 0.314125 / 0.293841 (0.020284) | 0.047940 / 0.128546 (-0.080606) | 0.010714 / 0.075646 (-0.064932) | 0.057453 / 0.419271 (-0.361819) | 0.033482 / 0.043533 (-0.010051) | 0.273391 / 0.255139 (0.018252) | 0.284936 / 0.283200 (0.001736) | 0.017805 / 0.141683 (-0.123878) | 1.148303 / 1.452155 (-0.303852) | 1.185268 / 1.492716 (-0.307448) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092442 / 0.018006 (0.074436) | 0.309908 / 0.000490 (0.309418) | 0.000213 / 0.000200 (0.000013) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022874 / 0.037411 (-0.014537) | 0.078238 / 0.014526 (0.063712) | 0.088844 / 0.176557 (-0.087713) | 0.127054 / 0.737135 (-0.610081) | 0.089809 / 0.296338 (-0.206530) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292360 / 0.215209 (0.077151) | 2.842700 / 2.077655 (0.765045) | 1.571071 / 1.504120 (0.066951) | 1.450773 / 1.541195 (-0.090422) | 1.467090 / 1.468490 (-0.001400) | 0.583529 / 4.584777 (-4.001248) | 2.469284 / 3.745712 (-1.276428) | 2.844426 / 5.269862 (-2.425435) | 1.773336 / 4.565676 (-2.792341) | 0.064585 / 0.424275 (-0.359690) | 0.005098 / 0.007607 (-0.002509) | 0.342816 / 0.226044 (0.116771) | 3.363309 / 2.268929 (1.094381) | 1.922834 / 55.444624 (-53.521790) | 1.649702 / 6.876477 (-5.226774) | 1.672727 / 2.142072 (-0.469345) | 0.665015 / 4.805227 (-4.140212) | 0.124764 / 6.500664 (-6.375900) | 0.041564 / 0.075469 (-0.033905) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.988970 / 1.841788 (-0.852818) | 12.148983 / 8.074308 (4.074675) | 11.132697 / 10.191392 (0.941305) | 0.131596 / 0.680424 (-0.548828) | 0.015700 / 0.534201 (-0.518501) | 0.288819 / 0.579283 (-0.290464) | 0.276692 / 0.434364 (-0.157672) | 0.330260 / 0.540337 (-0.210078) | 0.421612 / 1.386936 (-0.965324) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d627fb8357f39d78d79e704712609c7b34bdeba4 \"CML watermark\")\n" ]
Migrate from `setup.cfg` to `pyproject.toml`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6619/reactions" }
PR_kwDODunzps5lK2VY
{ "diff_url": "https://github.com/huggingface/datasets/pull/6619.diff", "html_url": "https://github.com/huggingface/datasets/pull/6619", "merged_at": "2024-01-26T15:47:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/6619.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6619" }
2024-01-26T15:27:10Z
https://api.github.com/repos/huggingface/datasets/issues/6619/comments
Based on https://github.com/huggingface/huggingface_hub/pull/1971 in `hfh`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6619/timeline
closed
false
6,619
null
2024-01-26T15:47:32Z
null
true
2,101,868,198
https://api.github.com/repos/huggingface/datasets/issues/6618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6618/events
[]
null
2024-07-23T09:31:07Z
[]
https://github.com/huggingface/datasets/issues/6618
NONE
not_planned
null
null
[ "Hi! Can you please share the error's stack trace so we can see where it comes from?", "We cannot reproduce the issue and we do not have enough information: environment info (need to run `datasets-cli env`), stack trace,...\r\n\r\nI am closing the issue. Feel free to reopen it (with additional information) if the problem persists.", "Yeah 👍\r\n\r\nOn Tue, 6 Feb 2024 at 2:56 PM, Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> We cannot reproduce the issue and we do not have enough information:\r\n> environment info (need to run datasets-cli env), stack trace,...\r\n>\r\n> I am closing the issue. Feel free to reopen it (with additional\r\n> information) if the problem persists.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/6618#issuecomment-1929102334>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/ASS4PJ3XOIIWISPY3VX3QRTYSHZK5AVCNFSM6AAAAABCL3BT4SVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTSMRZGEYDEMZTGQ>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Please downgrade the version of urllib3 if you have the same issue:\r\n\r\n!pip install urllib3==1.25.11", "> Please downgrade the version of urllib3 if you have the same issue:\r\n> \r\n> !pip install urllib3==1.25.11\r\n\r\nThis worked for me. Thanks.\r\n\r\nI use python 3.11 and datasets==2.20.0. Downgrading urllib3 to 1.25.11 worked in my case." ]
While importing load_dataset from datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6618/reactions" }
I_kwDODunzps59R_am
null
2024-01-26T09:21:57Z
https://api.github.com/repos/huggingface/datasets/issues/6618/comments
### Describe the bug This is the error I received: `cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'` ### Steps to reproduce the bug from datasets import load_dataset ### Expected behavior No errors ### Environment info python 3.11.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/77973415?v=4", "events_url": "https://api.github.com/users/Era-cell/events{/privacy}", "followers_url": "https://api.github.com/users/Era-cell/followers", "following_url": "https://api.github.com/users/Era-cell/following{/other_user}", "gists_url": "https://api.github.com/users/Era-cell/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Era-cell", "id": 77973415, "login": "Era-cell", "node_id": "MDQ6VXNlcjc3OTczNDE1", "organizations_url": "https://api.github.com/users/Era-cell/orgs", "received_events_url": "https://api.github.com/users/Era-cell/received_events", "repos_url": "https://api.github.com/users/Era-cell/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Era-cell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Era-cell/subscriptions", "type": "User", "url": "https://api.github.com/users/Era-cell" }
https://api.github.com/repos/huggingface/datasets/issues/6618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6618/timeline
closed
false
6,618
null
2024-02-06T09:25:54Z
null
false
2,100,459,449
https://api.github.com/repos/huggingface/datasets/issues/6617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6617/events
[]
null
2024-01-26T14:56:46Z
[]
https://github.com/huggingface/datasets/pull/6617
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6617). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004774 / 0.011353 (-0.006579) | 0.003397 / 0.011008 (-0.007611) | 0.063862 / 0.038508 (0.025354) | 0.029353 / 0.023109 (0.006244) | 0.245921 / 0.275898 (-0.029977) | 0.268414 / 0.323480 (-0.055066) | 0.002834 / 0.007986 (-0.005152) | 0.002606 / 0.004328 (-0.001723) | 0.049690 / 0.004250 (0.045439) | 0.041637 / 0.037052 (0.004585) | 0.262526 / 0.258489 (0.004037) | 0.288200 / 0.293841 (-0.005641) | 0.027233 / 0.128546 (-0.101313) | 0.010322 / 0.075646 (-0.065324) | 0.213860 / 0.419271 (-0.205411) | 0.034930 / 0.043533 (-0.008602) | 0.249256 / 0.255139 (-0.005883) | 0.270016 / 0.283200 (-0.013184) | 0.019413 / 0.141683 (-0.122270) | 1.124801 / 1.452155 (-0.327354) | 1.166224 / 1.492716 (-0.326492) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091641 / 0.018006 (0.073635) | 0.299679 / 0.000490 (0.299189) | 0.000209 / 0.000200 (0.000009) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018084 / 0.037411 (-0.019327) | 0.060143 / 0.014526 (0.045617) | 0.072556 / 0.176557 (-0.104001) | 0.118555 / 0.737135 (-0.618580) | 0.073786 / 0.296338 (-0.222553) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278193 / 0.215209 (0.062984) | 2.707954 / 2.077655 (0.630300) | 1.483575 / 1.504120 (-0.020545) | 1.371939 / 1.541195 (-0.169256) | 1.395009 / 1.468490 (-0.073481) | 0.559949 / 4.584777 (-4.024828) | 2.372529 / 3.745712 (-1.373183) | 2.823641 / 5.269862 (-2.446221) | 1.722999 / 4.565676 (-2.842678) | 0.062535 / 0.424275 (-0.361741) | 0.004970 / 0.007607 (-0.002637) | 0.338625 / 0.226044 (0.112580) | 3.317576 / 2.268929 (1.048648) | 1.854552 / 55.444624 (-53.590073) | 1.589323 / 6.876477 (-5.287154) | 1.624630 / 2.142072 (-0.517442) | 0.638388 / 4.805227 (-4.166839) | 0.116675 / 6.500664 (-6.383989) | 0.041850 / 0.075469 (-0.033619) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.938025 / 1.841788 (-0.903763) | 11.450072 / 8.074308 (3.375764) | 10.414943 / 10.191392 (0.223551) | 0.128416 / 0.680424 (-0.552007) | 0.013798 / 0.534201 (-0.520403) | 0.287997 / 0.579283 (-0.291286) | 0.259976 / 0.434364 (-0.174387) | 0.320737 / 0.540337 (-0.219601) | 0.424292 / 1.386936 (-0.962644) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005107 / 0.011353 (-0.006246) | 0.003374 / 0.011008 (-0.007634) | 0.050067 / 0.038508 (0.011559) | 0.031419 / 0.023109 (0.008310) | 0.275303 / 0.275898 (-0.000595) | 0.286736 / 0.323480 (-0.036744) | 0.004177 / 0.007986 (-0.003808) | 0.002742 / 0.004328 (-0.001586) | 0.049011 / 0.004250 (0.044761) | 0.044373 / 0.037052 (0.007321) | 0.289189 / 0.258489 (0.030700) | 0.320117 / 0.293841 (0.026276) | 0.050154 / 0.128546 (-0.078392) | 0.010541 / 0.075646 (-0.065106) | 0.058318 / 0.419271 (-0.360954) | 0.033090 / 0.043533 (-0.010443) | 0.276820 / 0.255139 (0.021681) | 0.290854 / 0.283200 (0.007654) | 0.017268 / 0.141683 (-0.124415) | 1.159345 / 1.452155 (-0.292809) | 1.224829 / 1.492716 (-0.267887) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.092468 / 0.018006 (0.074462) | 0.301176 / 0.000490 (0.300686) | 0.000216 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021858 / 0.037411 (-0.015553) | 0.074873 / 0.014526 (0.060347) | 0.086238 / 0.176557 (-0.090318) | 0.125555 / 0.737135 (-0.611580) | 0.087791 / 0.296338 (-0.208547) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292283 / 0.215209 (0.077073) | 2.847306 / 2.077655 (0.769651) | 1.600833 / 1.504120 (0.096713) | 1.474253 / 1.541195 (-0.066942) | 1.474871 / 1.468490 (0.006381) | 0.576427 / 4.584777 (-4.008350) | 2.380116 / 3.745712 (-1.365596) | 2.782059 / 5.269862 (-2.487803) | 1.730642 / 4.565676 (-2.835035) | 0.063860 / 0.424275 (-0.360415) | 0.005019 / 0.007607 (-0.002588) | 0.343247 / 0.226044 (0.117202) | 3.393427 / 2.268929 (1.124498) | 1.935346 / 55.444624 (-53.509278) | 1.680124 / 6.876477 (-5.196353) | 1.665788 / 2.142072 (-0.476285) | 0.648767 / 4.805227 (-4.156460) | 0.121962 / 6.500664 (-6.378702) | 0.040669 / 0.075469 (-0.034800) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.996535 / 1.841788 (-0.845252) | 12.074553 / 8.074308 (4.000245) | 10.812740 / 10.191392 (0.621348) | 0.142690 / 0.680424 (-0.537734) | 0.014977 / 0.534201 (-0.519224) | 0.285619 / 0.579283 (-0.293664) | 0.269401 / 0.434364 (-0.164963) | 0.329882 / 0.540337 (-0.210456) | 0.416169 / 1.386936 (-0.970767) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#129b9e0565e7a2ceaca64b99dcbf39504661cfa9 \"CML watermark\")\n" ]
Fix CI: pyarrow 15, pandas 2.2 and sqlalchemy
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6617/reactions" }
PR_kwDODunzps5lEagV
{ "diff_url": "https://github.com/huggingface/datasets/pull/6617.diff", "html_url": "https://github.com/huggingface/datasets/pull/6617", "merged_at": "2024-01-26T14:50:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/6617.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6617" }
2024-01-25T13:57:41Z
https://api.github.com/repos/huggingface/datasets/issues/6617/comments
This should fix the CI failures on `main`. Closes https://github.com/huggingface/datasets/issues/5477
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6617/timeline
closed
false
6,617
null
2024-01-26T14:50:44Z
null
true
2,100,125,709
https://api.github.com/repos/huggingface/datasets/issues/6616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6616/events
[]
null
2024-01-26T16:25:24Z
[]
https://github.com/huggingface/datasets/pull/6616
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6616). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005382 / 0.011353 (-0.005970) | 0.003853 / 0.011008 (-0.007155) | 0.062629 / 0.038508 (0.024121) | 0.030344 / 0.023109 (0.007234) | 0.245394 / 0.275898 (-0.030505) | 0.266004 / 0.323480 (-0.057476) | 0.003183 / 0.007986 (-0.004802) | 0.002795 / 0.004328 (-0.001533) | 0.048357 / 0.004250 (0.044107) | 0.043834 / 0.037052 (0.006782) | 0.255979 / 0.258489 (-0.002510) | 0.280803 / 0.293841 (-0.013038) | 0.028200 / 0.128546 (-0.100347) | 0.010856 / 0.075646 (-0.064791) | 0.207076 / 0.419271 (-0.212195) | 0.036286 / 0.043533 (-0.007247) | 0.246492 / 0.255139 (-0.008647) | 0.265861 / 0.283200 (-0.017338) | 0.018309 / 0.141683 (-0.123374) | 1.155136 / 1.452155 (-0.297018) | 1.214342 / 1.492716 (-0.278375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092530 / 0.018006 (0.074524) | 0.344951 / 0.000490 (0.344461) | 0.000207 / 0.000200 (0.000007) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018324 / 0.037411 (-0.019087) | 0.063137 / 0.014526 (0.048611) | 0.074683 / 0.176557 (-0.101874) | 0.120224 / 0.737135 (-0.616912) | 0.083107 / 0.296338 (-0.213232) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288631 / 0.215209 (0.073422) | 2.817992 / 2.077655 (0.740337) | 1.473609 / 1.504120 (-0.030511) | 1.336610 / 1.541195 (-0.204585) | 1.354807 / 1.468490 (-0.113683) | 0.568776 / 4.584777 (-4.016001) | 2.412607 / 3.745712 (-1.333105) | 2.832816 / 5.269862 (-2.437045) | 1.789899 / 4.565676 (-2.775778) | 0.063602 / 0.424275 (-0.360673) | 0.004993 / 0.007607 (-0.002615) | 0.338830 / 0.226044 (0.112786) | 3.302550 / 2.268929 (1.033621) | 1.827907 / 55.444624 (-53.616717) | 1.589857 / 6.876477 (-5.286620) | 1.647746 / 2.142072 (-0.494326) | 0.658461 / 4.805227 (-4.146766) | 0.120360 / 6.500664 (-6.380304) | 0.042989 / 0.075469 (-0.032480) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.945487 / 1.841788 (-0.896301) | 11.846335 / 8.074308 (3.772027) | 10.483199 / 10.191392 (0.291807) | 0.131853 / 0.680424 (-0.548570) | 0.014230 / 0.534201 (-0.519971) | 0.288700 / 0.579283 (-0.290584) | 0.276086 / 0.434364 (-0.158278) | 0.326225 / 0.540337 (-0.214112) | 0.422874 / 1.386936 (-0.964062) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006234 / 0.011353 (-0.005118) | 0.004104 / 0.011008 (-0.006904) | 0.049967 / 0.038508 (0.011459) | 0.037157 / 0.023109 (0.014048) | 0.261892 / 0.275898 (-0.014006) | 0.284304 / 0.323480 (-0.039176) | 0.004482 / 0.007986 (-0.003504) | 0.002920 / 0.004328 (-0.001409) | 0.048827 / 0.004250 (0.044577) | 0.052258 / 0.037052 (0.015206) | 0.277121 / 0.258489 (0.018632) | 0.304177 / 0.293841 (0.010336) | 0.053537 / 0.128546 (-0.075009) | 0.011137 / 0.075646 (-0.064509) | 0.058188 / 0.419271 (-0.361083) | 0.034283 / 0.043533 (-0.009250) | 0.261912 / 0.255139 (0.006773) | 0.273851 / 0.283200 (-0.009348) | 0.017824 / 0.141683 (-0.123859) | 1.130454 / 1.452155 (-0.321701) | 1.176834 / 1.492716 (-0.315882) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.102104 / 0.018006 (0.084098) | 0.302873 / 0.000490 (0.302383) | 0.000208 / 0.000200 (0.000008) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022470 / 0.037411 (-0.014941) | 0.076776 / 0.014526 (0.062250) | 0.088220 / 0.176557 (-0.088337) | 0.130030 / 0.737135 (-0.607105) | 0.089955 / 0.296338 (-0.206383) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284070 / 0.215209 (0.068861) | 2.769130 / 2.077655 (0.691475) | 1.546379 / 1.504120 (0.042259) | 1.435849 / 1.541195 (-0.105346) | 1.478616 / 1.468490 (0.010126) | 0.569185 / 4.584777 (-4.015592) | 2.504721 / 3.745712 (-1.240992) | 2.778267 / 5.269862 (-2.491595) | 1.860360 / 4.565676 (-2.705316) | 0.073465 / 0.424275 (-0.350810) | 0.005108 / 0.007607 (-0.002499) | 0.335185 / 0.226044 (0.109140) | 3.314799 / 2.268929 (1.045870) | 1.934824 / 55.444624 (-53.509801) | 1.656247 / 6.876477 (-5.220229) | 1.785422 / 2.142072 (-0.356650) | 0.673677 / 4.805227 (-4.131551) | 0.117692 / 6.500664 (-6.382972) | 0.041648 / 0.075469 (-0.033821) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.972143 / 1.841788 (-0.869645) | 12.980353 / 8.074308 (4.906045) | 11.056189 / 10.191392 (0.864797) | 0.134592 / 0.680424 (-0.545832) | 0.015972 / 0.534201 (-0.518229) | 0.301691 / 0.579283 (-0.277593) | 0.286332 / 0.434364 (-0.148032) | 0.329025 / 0.540337 (-0.211312) | 0.422585 / 1.386936 (-0.964351) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6eb492c7072f21cb417801957c087888f252d2d1 \"CML watermark\")\n" ]
Use schema metadata only if it matches features
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6616/reactions" }
PR_kwDODunzps5lDSEL
{ "diff_url": "https://github.com/huggingface/datasets/pull/6616.diff", "html_url": "https://github.com/huggingface/datasets/pull/6616", "merged_at": "2024-01-26T16:19:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/6616.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6616" }
2024-01-25T11:01:14Z
https://api.github.com/repos/huggingface/datasets/issues/6616/comments
E.g., if we use `map` in arrow format and transform the table, the returned table might have new columns, but the metadata might be wrong.
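A minimal sketch of the scenario this PR guards against, assuming `map` over a pyarrow-formatted dataset (column names are illustrative): the function returns a table with a new column, so the schema metadata attached to the original table no longer matches the new features:

```python
import pyarrow as pa
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]}).with_format("arrow")

def add_column(table: pa.Table) -> pa.Table:
    # the output table gains a column the original schema metadata knows nothing about
    return table.append_column("b", pa.array([0] * len(table)))

ds2 = ds.map(add_column, batched=True)
```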
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/6616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6616/timeline
closed
false
6,616
null
2024-01-26T16:19:12Z
null
true
2,098,951,409
https://api.github.com/repos/huggingface/datasets/issues/6615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6615/events
[]
null
2024-01-24T19:42:30Z
[]
https://github.com/huggingface/datasets/issues/6615
NONE
not_planned
null
null
[ "Sorry I posted in the wrong repo, please delete.. thanks!" ]
...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6615/reactions" }
I_kwDODunzps59G3Tx
null
2024-01-24T19:37:03Z
https://api.github.com/repos/huggingface/datasets/issues/6615/comments
...
{ "avatar_url": "https://avatars.githubusercontent.com/u/22179777?v=4", "events_url": "https://api.github.com/users/ftkeys/events{/privacy}", "followers_url": "https://api.github.com/users/ftkeys/followers", "following_url": "https://api.github.com/users/ftkeys/following{/other_user}", "gists_url": "https://api.github.com/users/ftkeys/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ftkeys", "id": 22179777, "login": "ftkeys", "node_id": "MDQ6VXNlcjIyMTc5Nzc3", "organizations_url": "https://api.github.com/users/ftkeys/orgs", "received_events_url": "https://api.github.com/users/ftkeys/received_events", "repos_url": "https://api.github.com/users/ftkeys/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ftkeys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ftkeys/subscriptions", "type": "User", "url": "https://api.github.com/users/ftkeys" }
https://api.github.com/repos/huggingface/datasets/issues/6615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6615/timeline
closed
false
6,615
null
2024-01-24T19:40:11Z
null
false
2,098,884,520
https://api.github.com/repos/huggingface/datasets/issues/6614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6614/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-01-24T18:55:09Z
[]
https://github.com/huggingface/datasets/issues/6614
CONTRIBUTOR
null
null
null
[]
`datasets/downloads` cleanup tool
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6614/reactions" }
I_kwDODunzps59Gm-o
null
2024-01-24T18:52:10Z
https://api.github.com/repos/huggingface/datasets/issues/6614/comments
### Feature request Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files. E.g., I discovered millions of files under the `datasets/downloads` cache, and I had to do: ``` sudo find /data/huggingface/datasets/downloads -type f -mtime +3 -exec rm {} \+ sudo find /data/huggingface/datasets/downloads -type d -empty -delete ``` Could the cleanup be integrated into `huggingface-cli`, or a different tool be provided, to keep the folders tidy and not consume inodes and space? E.g., there were tens of thousands of `.lock` files - I don't know why they never get removed - lock files should be temporary for the duration of the operation requiring the lock and not remain after the operation finishes, IMHO. Also, I think one should be able to nuke `datasets/downloads` w/o hurting the cache, but I think there are some datasets that rely on files extracted under this dir - or at least they did in the past - which is very difficult to manage since one has no idea what is safe to delete and what not. Thank you @Wauplin (requested to be tagged)
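A hypothetical Python equivalent of the `find` commands above, in case a cleanup tool wants to mirror them (the cache path is an assumption; adjust if `HF_DATASETS_CACHE` is set):

```python
import os
import time

# assumption: default HF cache location
root = os.path.expanduser("~/.cache/huggingface/datasets/downloads")
cutoff = time.time() - 3 * 24 * 3600  # mirror `-mtime +3`

for dirpath, _, filenames in os.walk(root, topdown=False):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
    if not os.listdir(dirpath):  # mirror `-type d -empty -delete`
        os.rmdir(dirpath)
```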
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
https://api.github.com/repos/huggingface/datasets/issues/6614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6614/timeline
open
false
6,614
null
null
null
false
2,098,078,210
https://api.github.com/repos/huggingface/datasets/issues/6612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6612/events
[]
null
2024-02-01T08:14:50Z
[]
https://github.com/huggingface/datasets/issues/6612
NONE
completed
null
null
[ "Hi ! We recently updated `cnn_dailymail` and now `datasets>=2.14` is needed to load it.\r\n\r\nYou can update `datasets` with\r\n\r\n```\r\npip install -U datasets\r\n```" ]
cnn_dailymail repeats itself
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6612/reactions" }
I_kwDODunzps59DiIC
null
2024-01-24T11:38:25Z
https://api.github.com/repos/huggingface/datasets/issues/6612/comments
### Describe the bug When I try to load the `cnn_dailymail` dataset, it takes longer than usual, and when I checked the dataset it's 3x bigger than it's supposed to be. Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check the length of the train split, it says 861339. I also checked the data: ``` >>> ds['train']['highlights'][0] "Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday . Young actor says he has no plans to fritter his cash away . Radcliffe's earnings from first five Potter films have been held in trust fund ." >>> ds['train']['highlights'][287113] "Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday .\nYoung actor says he has no plans to fritter his cash away .\nRadcliffe's earnings from first five Potter films have been held in trust fund ." >>> ds['train']['highlights'][574226] "Harry Potter star Daniel Radcliffe gets £20M fortune as he turns 18 Monday .\nYoung actor says he has no plans to fritter his cash away .\nRadcliffe's earnings from first five Potter films have been held in trust fund ." ``` The dataset seems to have been updated 6 days ago to convert it to Parquet. Probably there is some issue with backward compatibility. ### Steps to reproduce the bug 1. ``` from datasets import load_dataset ds = load_dataset('cnn_dailymail', '3.0.0') len(ds['train']) ``` ### Expected behavior It should not repeat itself. ### Environment info datasets==2.13.2 Python==3.7.13
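A quick sanity check after applying the fix from the comment (a sketch, not from the report): with `datasets>=2.14`, the train split should report ~287k rows instead of the tripled 861339:

```python
from datasets import load_dataset

ds = load_dataset("cnn_dailymail", "3.0.0")
print(len(ds["train"]))  # expected ~287113 with datasets>=2.14, not 861339
```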
{ "avatar_url": "https://avatars.githubusercontent.com/u/8274752?v=4", "events_url": "https://api.github.com/users/KeremZaman/events{/privacy}", "followers_url": "https://api.github.com/users/KeremZaman/followers", "following_url": "https://api.github.com/users/KeremZaman/following{/other_user}", "gists_url": "https://api.github.com/users/KeremZaman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/KeremZaman", "id": 8274752, "login": "KeremZaman", "node_id": "MDQ6VXNlcjgyNzQ3NTI=", "organizations_url": "https://api.github.com/users/KeremZaman/orgs", "received_events_url": "https://api.github.com/users/KeremZaman/received_events", "repos_url": "https://api.github.com/users/KeremZaman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/KeremZaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KeremZaman/subscriptions", "type": "User", "url": "https://api.github.com/users/KeremZaman" }
https://api.github.com/repos/huggingface/datasets/issues/6612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6612/timeline
closed
false
6,612
null
2024-02-01T08:14:50Z
null
false
2,096,004,858
https://api.github.com/repos/huggingface/datasets/issues/6611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6611/events
[]
null
2024-01-23T12:37:57Z
[]
https://github.com/huggingface/datasets/issues/6611
NONE
null
null
null
[]
`load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6611/reactions" }
I_kwDODunzps587n76
null
2024-01-23T12:37:57Z
https://api.github.com/repos/huggingface/datasets/issues/6611/comments
### Describe the bug When loading a large dataset (>1000GB) from S3 I run into the following error: ``` Traceback (most recent call last): File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper return await func(*args, **kwargs) File "/home/alp/.local/lib/python3.10/site-packages/aiobotocore/client.py", line 383, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the GetObject operation: The difference between the request time and the current time is too large. The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/alp/phoneme-classification.monorepo/aws_sagemaker/data_processing/inspect_final_dataset.py", line 13, in <module> dataset = load_from_disk("s3://speech-recognition-processed-data/whisper/de/train_data/", storage_options=storage_options) File "/home/alp/.local/lib/python3.10/site-packages/datasets/load.py", line 1902, in load_from_disk return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) File "/home/alp/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1686, in load_from_disk fs.download(src_dataset_path, dest_dataset_path.as_posix(), recursive=True) File "/home/alp/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1480, in download return self.get(rpath, lpath, recursive=recursive, **kwargs) File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 121, in wrapper return sync(self.loop, func, *args, **kwargs) File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 106, in sync raise return_result File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 61, in _runner result[0] = await coro File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 604, in _get return await _run_coros_in_chunks( File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 257, in _run_coros_in_chunks await asyncio.gather(*chunk, return_exceptions=return_exceptions), File "/usr/lib/python3.10/asyncio/tasks.py", line 408, in wait_for return await fut File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1193, in _get_file body, content_length = await _open_file(range=0) File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1184, in _open_file resp = await self._call_s3( File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 348, in _call_s3 return await _error_wrapper( File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 140, in _error_wrapper raise err PermissionError: The difference between the request time and the current time is too large. ``` The usual problem for this error is that the time on my local machine is out of sync with the current time. However, this is not the case here. I checked the time and even reset it with no success. See resources here: - https://stackoverflow.com/questions/4770635/s3-error-the-difference-between-the-request-time-and-the-current-time-is-too-la - https://stackoverflow.com/questions/25964491/aws-s3-upload-fails-requesttimetooskewed The error does not appear when loading a smaller dataset (e.g. our test set) from the same s3 path. ### Steps to reproduce the bug 1. Create large dataset 2. 
Try loading it from s3 using: ``` dataset = load_from_disk("s3://...", storage_options=storage_options) ``` ### Expected behavior Load dataset without running into this error. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.3 - PyArrow version: 12.0.1 - Pandas version: 2.0.3
{ "avatar_url": "https://avatars.githubusercontent.com/u/15320635?v=4", "events_url": "https://api.github.com/users/zotroneneis/events{/privacy}", "followers_url": "https://api.github.com/users/zotroneneis/followers", "following_url": "https://api.github.com/users/zotroneneis/following{/other_user}", "gists_url": "https://api.github.com/users/zotroneneis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zotroneneis", "id": 15320635, "login": "zotroneneis", "node_id": "MDQ6VXNlcjE1MzIwNjM1", "organizations_url": "https://api.github.com/users/zotroneneis/orgs", "received_events_url": "https://api.github.com/users/zotroneneis/received_events", "repos_url": "https://api.github.com/users/zotroneneis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zotroneneis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zotroneneis/subscriptions", "type": "User", "url": "https://api.github.com/users/zotroneneis" }
https://api.github.com/repos/huggingface/datasets/issues/6611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6611/timeline
open
false
6,611
null
null
null
false
2,095,643,711
https://api.github.com/repos/huggingface/datasets/issues/6610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6610/events
[]
null
2024-01-25T02:15:23Z
[]
https://github.com/huggingface/datasets/issues/6610
NONE
completed
null
null
[ "Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:\r\n```python\r\nais_dataset = ais_dataset.cast_column(\"my_labeled_bbox\", {\"bbox\": Sequence(Value(dtype=\"int64\")), \"label\": ClassLabel(names=[\"cat\", \"dog\"])})\r\n```", "> Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:\r\n> \r\n> ```python\r\n> ais_dataset = ais_dataset.cast_column(\"my_labeled_bbox\", {\"bbox\": Sequence(Value(dtype=\"int64\")), \"label\": ClassLabel(names=[\"cat\", \"dog\"])})\r\n> ```\r\n\r\nthanks" ]
cast_column to Sequence(subfeatures_dict) has err
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6610/reactions" }
I_kwDODunzps586Pw_
null
2024-01-23T09:32:32Z
https://api.github.com/repos/huggingface/datasets/issues/6610/comments
### Describe the bug I am working with the following demo code: ``` from datasets import load_dataset from datasets.features import Sequence, Value, ClassLabel, Features ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/") ais_dataset = ais_dataset["train"] def add_class(example): example["my_labeled_bbox"] = {"bbox": [100,100,200,200], "label": "cat"} return example ais_dataset = ais_dataset.map(add_class, batched=False, num_proc=32) ais_dataset = ais_dataset.cast_column("my_labeled_bbox", Sequence( { "bbox": Sequence(Value(dtype="int64")), "label": ClassLabel(names=["cat", "dog"]) })) print(ais_dataset[0]) ``` However, executing this code results in an error: ``` File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2111, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type int64 to Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None) ``` Upon examining the source code in datasets/table.py at line 2035: ``` if isinstance(feature, Sequence) and isinstance(feature.feature, dict): feature = { name: Sequence(subfeature, length=feature.length) for name, subfeature in feature.feature.items() } ``` I noticed that if subfeature is of type Sequence, the code results in Sequence(Sequence(...), ...) and Sequence(ClassLabel(...), ...), which appears to be the source of the error. ### Steps to reproduce the bug run my demo code ### Expected behavior no exception ### Environment info python 3.9 datasets: 2.16.1
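Following the fix suggested in the comments above, a minimal self-contained sketch (the toy data here is hypothetical, standing in for the reporter's dataset):

```python
from datasets import ClassLabel, Dataset, Sequence, Value

# Hypothetical stand-in for the reporter's data.
ds = Dataset.from_dict(
    {"my_labeled_bbox": [{"bbox": [100, 100, 200, 200], "label": "cat"}]}
)

# Cast a struct column with a plain dict of sub-features;
# wrapping the dict in Sequence(...) is what triggers the TypeError.
ds = ds.cast_column(
    "my_labeled_bbox",
    {"bbox": Sequence(Value("int64")), "label": ClassLabel(names=["cat", "dog"])},
)
print(ds.features)
print(ds[0])
```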
{ "avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4", "events_url": "https://api.github.com/users/neiblegy/events{/privacy}", "followers_url": "https://api.github.com/users/neiblegy/followers", "following_url": "https://api.github.com/users/neiblegy/following{/other_user}", "gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neiblegy", "id": 16574677, "login": "neiblegy", "node_id": "MDQ6VXNlcjE2NTc0Njc3", "organizations_url": "https://api.github.com/users/neiblegy/orgs", "received_events_url": "https://api.github.com/users/neiblegy/received_events", "repos_url": "https://api.github.com/users/neiblegy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions", "type": "User", "url": "https://api.github.com/users/neiblegy" }
https://api.github.com/repos/huggingface/datasets/issues/6610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6610/timeline
closed
false
6,610
null
2024-01-25T02:15:23Z
null
false
2,095,085,650
https://api.github.com/repos/huggingface/datasets/issues/6609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6609/events
[]
null
2024-02-06T17:21:25Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/6609
NONE
completed
null
null
[ "+1", "same error in 2.16.1", "@kongjiellx any luck with the issue?", "I opened https://github.com/huggingface/datasets/pull/6632 to fix this issue. Once it's merged we'll do a new release of `datasets`", "Thanks @lhoestq !" ]
Wrong path for cache directory in offline mode
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6609/reactions" }
I_kwDODunzps584HhS
null
2024-01-23T01:47:19Z
https://api.github.com/repos/huggingface/datasets/issues/6609/comments
### Describe the bug Dear huggingfacers, I'm trying to use a subset of the-stack dataset. When I run the command the first time ``` dataset = load_dataset( path='bigcode/the-stack', data_dir='data/fortran', split='train' ) ``` It downloads the files and caches them normally. However, my compute nodes are not online (`HF_DATASETS_OFFLINE=1`), so whenever I try to run the command again, the library passes the wrong cache path: `Cache directory for the-stack doesn't exist at /Users/user/.cache/huggingface/datasets/bigcode___the-stack/default-data_dir=data%2Ffortran-data_dir=data%2Ffortran` when the right path is: `'/Users/user/.cache/huggingface/datasets/bigcode___the-stack/default-data_dir=data\%2Ffortran` Not sure why those redundancies are included in the path. If I try adding the correct path through the cache_dir argument, it throws an error: ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'bigcode/the-stack': Offline mode is enabled. Your help with this issue is greatly appreciated. Thanks a lot for the great work. ### Steps to reproduce the bug 1: `dataset = load_dataset( path='bigcode/the-stack', data_dir='data/fortran', split='train' )` 2: `HF_DATASETS_OFFLINE=1` 3: `dataset = load_dataset( path='bigcode/the-stack', data_dir='data/fortran', split='train' )` ### Expected behavior Being able to use the cached data ### Environment info Several different systems
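For reference, a sketch of the offline workflow being attempted (it assumes the dataset was already cached in a previous online run; note that `HF_DATASETS_OFFLINE` must be set before `datasets` is imported):

```python
import os

# Must be set before importing datasets for offline mode to take effect.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

# Expected: reuse the cache written during the earlier online run,
# instead of looking up a cache path with the duplicated data_dir suffix.
ds = load_dataset("bigcode/the-stack", data_dir="data/fortran", split="train")
```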
{ "avatar_url": "https://avatars.githubusercontent.com/u/42117435?v=4", "events_url": "https://api.github.com/users/je-santos/events{/privacy}", "followers_url": "https://api.github.com/users/je-santos/followers", "following_url": "https://api.github.com/users/je-santos/following{/other_user}", "gists_url": "https://api.github.com/users/je-santos/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/je-santos", "id": 42117435, "login": "je-santos", "node_id": "MDQ6VXNlcjQyMTE3NDM1", "organizations_url": "https://api.github.com/users/je-santos/orgs", "received_events_url": "https://api.github.com/users/je-santos/received_events", "repos_url": "https://api.github.com/users/je-santos/repos", "site_admin": false, "starred_url": "https://api.github.com/users/je-santos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/je-santos/subscriptions", "type": "User", "url": "https://api.github.com/users/je-santos" }
https://api.github.com/repos/huggingface/datasets/issues/6609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6609/timeline
closed
false
6,609
null
2024-02-06T17:21:25Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
2,094,153,292
https://api.github.com/repos/huggingface/datasets/issues/6608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6608/events
[]
null
2024-01-29T16:43:11Z
[]
https://github.com/huggingface/datasets/pull/6608
COLLABORATOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6608). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005376 / 0.011353 (-0.005977) | 0.004691 / 0.011008 (-0.006317) | 0.064061 / 0.038508 (0.025553) | 0.030397 / 0.023109 (0.007288) | 0.242656 / 0.275898 (-0.033242) | 0.275586 / 0.323480 (-0.047894) | 0.003460 / 0.007986 (-0.004526) | 0.003125 / 0.004328 (-0.001203) | 0.050496 / 0.004250 (0.046246) | 0.045833 / 0.037052 (0.008781) | 0.255222 / 0.258489 (-0.003267) | 0.287303 / 0.293841 (-0.006538) | 0.027755 / 0.128546 (-0.100791) | 0.011251 / 0.075646 (-0.064396) | 0.208456 / 0.419271 (-0.210816) | 0.037219 / 0.043533 (-0.006314) | 0.249592 / 0.255139 (-0.005547) | 0.261243 / 0.283200 (-0.021957) | 0.020735 / 0.141683 (-0.120948) | 1.130017 / 1.452155 (-0.322137) | 1.208558 / 1.492716 (-0.284158) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098891 / 0.018006 (0.080885) | 0.439042 / 0.000490 (0.438552) | 0.000333 / 0.000200 (0.000133) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018356 / 0.037411 (-0.019055) | 0.062416 / 0.014526 (0.047891) | 0.075613 / 0.176557 (-0.100944) | 0.122009 / 0.737135 (-0.615126) | 0.078195 / 0.296338 (-0.218144) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.273804 / 0.215209 (0.058595) | 2.706480 / 2.077655 (0.628826) | 1.456196 / 1.504120 (-0.047924) | 1.353301 / 1.541195 (-0.187893) | 1.378913 / 1.468490 (-0.089577) | 0.556885 / 4.584777 (-4.027892) | 2.358961 / 3.745712 (-1.386752) | 2.871830 / 5.269862 (-2.398031) | 1.765212 / 4.565676 (-2.800464) | 0.062172 / 0.424275 (-0.362103) | 0.004974 / 0.007607 (-0.002633) | 0.330375 / 0.226044 (0.104331) | 3.264550 / 2.268929 (0.995621) | 1.824444 / 55.444624 (-53.620181) | 1.561189 / 6.876477 (-5.315287) | 1.671020 / 2.142072 (-0.471052) | 0.633408 / 4.805227 (-4.171819) | 0.116080 / 6.500664 (-6.384584) | 0.044606 / 0.075469 (-0.030863) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.980757 / 1.841788 (-0.861031) | 12.553534 / 8.074308 (4.479225) | 10.517668 / 10.191392 (0.326276) | 0.130528 / 0.680424 (-0.549896) | 0.013960 / 0.534201 (-0.520241) | 0.289615 / 0.579283 (-0.289668) | 0.267277 / 0.434364 (-0.167087) | 0.324139 / 0.540337 (-0.216198) | 0.440325 / 1.386936 (-0.946611) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005388 / 0.011353 (-0.005965) | 0.004043 / 0.011008 (-0.006966) | 0.050514 / 0.038508 (0.012005) | 0.031413 / 0.023109 (0.008303) | 0.275122 / 0.275898 (-0.000776) | 0.307518 / 0.323480 (-0.015962) | 0.004440 / 0.007986 (-0.003546) | 0.003301 / 0.004328 (-0.001027) | 0.049200 / 0.004250 (0.044949) | 0.045704 / 0.037052 (0.008651) | 0.285265 / 0.258489 (0.026776) | 0.318942 / 0.293841 (0.025101) | 0.053893 / 0.128546 (-0.074653) | 0.011855 / 0.075646 (-0.063791) | 0.060951 / 0.419271 (-0.358321) | 0.034397 / 0.043533 (-0.009136) | 0.276108 / 0.255139 (0.020969) | 0.290981 / 0.283200 (0.007781) | 0.019986 / 0.141683 (-0.121697) | 1.205695 / 1.452155 (-0.246460) | 1.255942 / 1.492716 (-0.236774) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.101910 / 0.018006 (0.083904) | 0.320551 / 0.000490 (0.320061) | 0.000299 / 0.000200 (0.000099) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022387 / 0.037411 (-0.015024) | 0.076380 / 0.014526 (0.061854) | 0.090404 / 0.176557 (-0.086153) | 0.127106 / 0.737135 (-0.610030) | 0.089873 / 0.296338 (-0.206465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288433 / 0.215209 (0.073223) | 2.827005 / 2.077655 (0.749350) | 1.548760 / 1.504120 (0.044640) | 1.419545 / 1.541195 (-0.121650) | 1.456531 / 1.468490 (-0.011959) | 0.570254 / 4.584777 (-4.014523) | 2.441318 / 3.745712 (-1.304394) | 2.778647 / 5.269862 (-2.491215) | 1.755255 / 4.565676 (-2.810422) | 0.062581 / 0.424275 (-0.361694) | 0.005205 / 0.007607 (-0.002402) | 0.342189 / 0.226044 (0.116145) | 3.401208 / 2.268929 (1.132279) | 1.941447 / 55.444624 (-53.503178) | 1.652578 / 6.876477 (-5.223899) | 1.768558 / 2.142072 (-0.373514) | 0.656537 / 4.805227 (-4.148690) | 0.116901 / 6.500664 (-6.383763) | 0.041408 / 0.075469 (-0.034061) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001715 / 1.841788 (-0.840073) | 12.533073 / 8.074308 (4.458765) | 11.086084 / 10.191392 (0.894692) | 0.134368 / 0.680424 (-0.546055) | 0.015255 / 0.534201 (-0.518946) | 0.291769 / 0.579283 (-0.287514) | 0.283311 / 0.434364 (-0.151053) | 0.327857 / 0.540337 (-0.212481) | 0.413854 / 1.386936 (-0.973083) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#46931085bd8a3fdbc63b68b5ee4b8f62029c7557 \"CML watermark\")\n" ]
Add `with_rank` param to `Dataset.filter`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6608/reactions" }
PR_kwDODunzps5ku_lN
{ "diff_url": "https://github.com/huggingface/datasets/pull/6608.diff", "html_url": "https://github.com/huggingface/datasets/pull/6608", "merged_at": "2024-01-29T16:36:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/6608.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6608" }
2024-01-22T15:19:16Z
https://api.github.com/repos/huggingface/datasets/issues/6608/comments
Fix #6564
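A sketch of the usage this PR enables, assuming `with_rank` behaves in `filter` as it already does in `map` (the process rank is passed as an extra argument to the predicate):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(8))})

# The predicate also receives the process rank when with_rank=True.
filtered = ds.filter(
    lambda example, rank: example["a"] % 2 == 0,
    with_rank=True,
    num_proc=2,
)
print(filtered["a"])  # [0, 2, 4, 6]
```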
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6608/timeline
closed
false
6,608
null
2024-01-29T16:36:53Z
null
true
2,091,766,063
https://api.github.com/repos/huggingface/datasets/issues/6607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6607/events
[]
null
2024-05-17T09:46:29Z
[]
https://github.com/huggingface/datasets/pull/6607
CONTRIBUTOR
null
false
null
[ "I think not all torch tensors should be converted to float, what if it's a tensor of integers for example ?\r\nMaybe you can check for the tensor dtype before converting", "@lhoestq Please could this be merged? 🙏", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005552 / 0.011353 (-0.005801) | 0.003707 / 0.011008 (-0.007301) | 0.063794 / 0.038508 (0.025286) | 0.031897 / 0.023109 (0.008788) | 0.263086 / 0.275898 (-0.012812) | 0.281184 / 0.323480 (-0.042296) | 0.003183 / 0.007986 (-0.004802) | 0.002681 / 0.004328 (-0.001648) | 0.050259 / 0.004250 (0.046009) | 0.048395 / 0.037052 (0.011342) | 0.266925 / 0.258489 (0.008436) | 0.298146 / 0.293841 (0.004305) | 0.027995 / 0.128546 (-0.100551) | 0.010689 / 0.075646 (-0.064957) | 0.204956 / 0.419271 (-0.214316) | 0.036453 / 0.043533 (-0.007080) | 0.255406 / 0.255139 (0.000267) | 0.271388 / 0.283200 (-0.011811) | 0.019748 / 0.141683 (-0.121935) | 1.103926 / 1.452155 (-0.348228) | 1.167250 / 1.492716 (-0.325466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100483 / 0.018006 (0.082477) | 0.307331 / 0.000490 (0.306841) | 0.000216 / 0.000200 (0.000016) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018918 / 0.037411 (-0.018493) | 0.062569 / 0.014526 (0.048044) | 0.074935 / 0.176557 (-0.101621) | 0.122590 / 0.737135 (-0.614545) | 0.076475 / 0.296338 (-0.219864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279001 / 0.215209 (0.063792) | 2.771630 / 2.077655 (0.693975) | 1.439666 / 1.504120 (-0.064454) | 1.303422 / 1.541195 (-0.237773) | 1.355670 / 1.468490 (-0.112820) | 0.576264 / 4.584777 (-4.008513) | 2.394868 / 3.745712 (-1.350844) | 2.941487 / 5.269862 (-2.328375) | 1.808733 / 4.565676 (-2.756943) | 0.063691 / 0.424275 (-0.360584) | 0.005399 / 0.007607 (-0.002208) | 0.335610 / 0.226044 (0.109566) | 3.295903 / 2.268929 (1.026974) | 1.771836 / 55.444624 (-53.672788) | 1.511246 / 6.876477 (-5.365231) | 1.535926 / 2.142072 (-0.606147) | 0.649020 / 4.805227 (-4.156207) | 0.119754 / 6.500664 (-6.380910) | 0.043319 / 0.075469 (-0.032150) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967275 / 1.841788 (-0.874513) | 12.358482 / 8.074308 (4.284174) | 9.933324 / 10.191392 (-0.258068) | 0.133565 / 0.680424 (-0.546859) | 0.015650 / 0.534201 (-0.518551) | 0.286978 / 0.579283 (-0.292305) | 0.262912 / 0.434364 (-0.171451) | 0.330335 / 0.540337 (-0.210002) | 0.427671 / 1.386936 (-0.959265) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005660 / 0.011353 (-0.005693) | 0.003908 / 0.011008 (-0.007101) | 0.051874 / 0.038508 (0.013366) | 0.033141 / 0.023109 (0.010032) | 0.270512 / 0.275898 (-0.005386) | 0.296790 / 0.323480 (-0.026690) | 0.004335 / 0.007986 (-0.003651) | 0.002842 / 0.004328 (-0.001487) | 0.078264 / 0.004250 (0.074014) | 0.044436 / 0.037052 (0.007384) | 0.283230 / 0.258489 (0.024741) | 0.318026 / 0.293841 (0.024185) | 0.031459 / 0.128546 (-0.097087) | 0.010710 / 0.075646 (-0.064937) | 0.058152 / 0.419271 (-0.361119) | 0.034021 / 0.043533 (-0.009512) | 0.269956 / 0.255139 (0.014817) | 0.288783 / 0.283200 (0.005583) | 0.019246 / 0.141683 (-0.122436) | 1.127264 / 1.452155 (-0.324891) | 1.169777 / 1.492716 (-0.322939) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.101523 / 0.018006 (0.083516) | 0.315120 / 0.000490 (0.314630) | 0.000218 / 0.000200 (0.000018) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023078 / 0.037411 (-0.014333) | 0.080021 / 0.014526 (0.065495) | 0.089574 / 0.176557 (-0.086982) | 0.131258 / 0.737135 (-0.605877) | 0.090604 / 0.296338 (-0.205734) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302197 / 0.215209 (0.086988) | 2.980071 / 2.077655 (0.902416) | 1.585480 / 1.504120 (0.081360) | 1.462904 / 1.541195 (-0.078291) | 1.501102 / 1.468490 (0.032612) | 0.580342 / 4.584777 (-4.004435) | 0.972118 / 3.745712 (-2.773594) | 2.930530 / 5.269862 (-2.339331) | 1.824132 / 4.565676 (-2.741545) | 0.064711 / 0.424275 (-0.359564) | 0.005084 / 0.007607 (-0.002523) | 0.352693 / 0.226044 (0.126649) | 3.522775 / 2.268929 (1.253847) | 1.965063 / 55.444624 (-53.479561) | 1.679250 / 6.876477 (-5.197226) | 1.711691 / 2.142072 (-0.430382) | 0.663719 / 4.805227 (-4.141509) | 0.119858 / 6.500664 (-6.380806) | 0.041744 / 0.075469 (-0.033725) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.017970 / 1.841788 (-0.823817) | 12.898917 / 8.074308 (4.824609) | 10.244728 / 10.191392 (0.053336) | 0.133860 / 0.680424 (-0.546564) | 0.016044 / 0.534201 (-0.518157) | 0.287543 / 0.579283 (-0.291740) | 0.126418 / 0.434364 (-0.307946) | 0.394970 / 0.540337 (-0.145368) | 0.420455 / 1.386936 (-0.966481) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b7d71ffeb10bc129f6f923cfadb5ccd9383b8033 \"CML watermark\")\n" ]
Update features.py to avoid bfloat16 unsupported error
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6607/reactions" }
PR_kwDODunzps5knGse
{ "diff_url": "https://github.com/huggingface/datasets/pull/6607.diff", "html_url": "https://github.com/huggingface/datasets/pull/6607", "merged_at": "2024-05-17T09:40:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/6607.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6607" }
2024-01-20T00:39:44Z
https://api.github.com/repos/huggingface/datasets/issues/6607/comments
Fixes https://github.com/huggingface/datasets/issues/6566 Let me know if there's any tests I need to clear.
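For context, a minimal reproduction of the underlying error and roughly the upcast this patch applies (a sketch; requires `torch`, and per the review comment above the merged version checks the dtype before converting):

```python
import torch

t = torch.ones(2, dtype=torch.bfloat16)

# Direct conversion fails, since numpy has no bfloat16 dtype:
# t.numpy()  ->  TypeError: Got unsupported ScalarType BFloat16

# Upcasting bfloat16 tensors to float32 first avoids the error:
print(t.to(torch.float32).numpy())
```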
{ "avatar_url": "https://avatars.githubusercontent.com/u/75697181?v=4", "events_url": "https://api.github.com/users/skaulintel/events{/privacy}", "followers_url": "https://api.github.com/users/skaulintel/followers", "following_url": "https://api.github.com/users/skaulintel/following{/other_user}", "gists_url": "https://api.github.com/users/skaulintel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/skaulintel", "id": 75697181, "login": "skaulintel", "node_id": "MDQ6VXNlcjc1Njk3MTgx", "organizations_url": "https://api.github.com/users/skaulintel/orgs", "received_events_url": "https://api.github.com/users/skaulintel/received_events", "repos_url": "https://api.github.com/users/skaulintel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/skaulintel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/skaulintel/subscriptions", "type": "User", "url": "https://api.github.com/users/skaulintel" }
https://api.github.com/repos/huggingface/datasets/issues/6607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6607/timeline
closed
false
6,607
null
2024-05-17T09:40:13Z
null
true
2,091,088,785
https://api.github.com/repos/huggingface/datasets/issues/6606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6606/events
[]
null
2024-01-26T15:11:38Z
[]
https://github.com/huggingface/datasets/pull/6606
COLLABORATOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6606). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005625 / 0.011353 (-0.005728) | 0.003313 / 0.011008 (-0.007695) | 0.063997 / 0.038508 (0.025489) | 0.028949 / 0.023109 (0.005839) | 0.250069 / 0.275898 (-0.025829) | 0.271412 / 0.323480 (-0.052068) | 0.003837 / 0.007986 (-0.004148) | 0.002632 / 0.004328 (-0.001697) | 0.048351 / 0.004250 (0.044100) | 0.040664 / 0.037052 (0.003612) | 0.267540 / 0.258489 (0.009051) | 0.285237 / 0.293841 (-0.008604) | 0.026962 / 0.128546 (-0.101584) | 0.010417 / 0.075646 (-0.065229) | 0.211430 / 0.419271 (-0.207842) | 0.035411 / 0.043533 (-0.008122) | 0.258867 / 0.255139 (0.003728) | 0.278562 / 0.283200 (-0.004638) | 0.017690 / 0.141683 (-0.123993) | 1.128813 / 1.452155 (-0.323342) | 1.169384 / 1.492716 (-0.323333) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091322 / 0.018006 (0.073316) | 0.303272 / 0.000490 (0.302782) | 0.000202 / 0.000200 (0.000002) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.017551 / 0.037411 (-0.019861) | 0.060027 / 0.014526 (0.045502) | 0.073431 / 0.176557 (-0.103125) | 0.120550 / 0.737135 (-0.616585) | 0.073107 / 0.296338 (-0.223231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283064 / 0.215209 (0.067855) | 2.754593 / 2.077655 (0.676938) | 1.477303 / 1.504120 (-0.026817) | 1.341072 / 1.541195 (-0.200123) | 1.366625 / 1.468490 (-0.101865) | 0.573467 / 4.584777 (-4.011310) | 2.395225 / 3.745712 (-1.350487) | 2.777021 / 5.269862 (-2.492841) | 1.720733 / 4.565676 (-2.844944) | 0.063339 / 0.424275 (-0.360936) | 0.004954 / 0.007607 (-0.002653) | 0.350359 / 0.226044 (0.124315) | 3.376221 / 2.268929 (1.107293) | 1.835539 / 55.444624 (-53.609086) | 1.558064 / 6.876477 (-5.318413) | 1.582778 / 2.142072 (-0.559294) | 0.649918 / 4.805227 (-4.155309) | 0.117761 / 6.500664 (-6.382903) | 0.041771 / 0.075469 (-0.033698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950202 / 1.841788 (-0.891586) | 11.476160 / 8.074308 (3.401852) | 10.290618 / 10.191392 (0.099226) | 0.140659 / 0.680424 (-0.539765) | 0.014525 / 0.534201 (-0.519676) | 0.287253 / 0.579283 (-0.292030) | 0.266204 / 0.434364 (-0.168160) | 0.327818 / 0.540337 (-0.212519) | 0.431680 / 1.386936 (-0.955256) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005096 / 0.011353 (-0.006257) | 0.003460 / 0.011008 (-0.007548) | 0.049474 / 0.038508 (0.010966) | 0.031063 / 0.023109 (0.007954) | 0.272899 / 0.275898 (-0.002999) | 0.291859 / 0.323480 (-0.031621) | 0.004858 / 0.007986 (-0.003128) | 0.002598 / 0.004328 (-0.001731) | 0.049074 / 0.004250 (0.044824) | 0.044722 / 0.037052 (0.007669) | 0.285262 / 0.258489 (0.026772) | 0.314168 / 0.293841 (0.020327) | 0.046346 / 0.128546 (-0.082200) | 0.010384 / 0.075646 (-0.065262) | 0.058331 / 0.419271 (-0.360940) | 0.033728 / 0.043533 (-0.009805) | 0.276217 / 0.255139 (0.021078) | 0.295465 / 0.283200 (0.012265) | 0.018215 / 0.141683 (-0.123467) | 1.163847 / 1.452155 (-0.288308) | 1.213901 / 1.492716 (-0.278816) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.091953 / 0.018006 (0.073947) | 0.299977 / 0.000490 (0.299487) | 0.000212 / 0.000200 (0.000012) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022031 / 0.037411 (-0.015381) | 0.075067 / 0.014526 (0.060541) | 0.087305 / 0.176557 (-0.089251) | 0.125530 / 0.737135 (-0.611605) | 0.088761 / 0.296338 (-0.207578) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302682 / 0.215209 (0.087473) | 2.941509 / 2.077655 (0.863854) | 1.643399 / 1.504120 (0.139280) | 1.530148 / 1.541195 (-0.011046) | 1.542067 / 1.468490 (0.073577) | 0.575883 / 4.584777 (-4.008894) | 2.434320 / 3.745712 (-1.311392) | 2.761683 / 5.269862 (-2.508179) | 1.732068 / 4.565676 (-2.833609) | 0.063543 / 0.424275 (-0.360732) | 0.005089 / 0.007607 (-0.002518) | 0.351314 / 0.226044 (0.125269) | 3.494572 / 2.268929 (1.225643) | 2.032503 / 55.444624 (-53.412121) | 1.697949 / 6.876477 (-5.178528) | 1.700392 / 2.142072 (-0.441680) | 0.650757 / 4.805227 (-4.154471) | 0.116719 / 6.500664 (-6.383945) | 0.040559 / 0.075469 (-0.034910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.978218 / 1.841788 (-0.863570) | 11.972379 / 8.074308 (3.898071) | 10.725735 / 10.191392 (0.534343) | 0.130564 / 0.680424 (-0.549860) | 0.015396 / 0.534201 (-0.518805) | 0.286900 / 0.579283 (-0.292383) | 0.279633 / 0.434364 (-0.154730) | 0.327483 / 0.540337 (-0.212854) | 0.417848 / 1.386936 (-0.969088) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#adfe8f8fa37b9f220c152f5b8b2473ba2cef0307 \"CML watermark\")\n" ]
Dedicated RNG object for fingerprinting
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6606/reactions" }
PR_kwDODunzps5kk3KB
{ "diff_url": "https://github.com/huggingface/datasets/pull/6606.diff", "html_url": "https://github.com/huggingface/datasets/pull/6606", "merged_at": "2024-01-26T15:05:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/6606.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6606" }
2024-01-19T18:34:47Z
https://api.github.com/repos/huggingface/datasets/issues/6606/comments
Closes https://github.com/huggingface/datasets/issues/6604, closes https://github.com/huggingface/datasets/issues/2775
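A sketch of the approach, assuming the PR gives fingerprinting its own `random.Random` instance so that user code seeding the global RNG no longer affects fingerprint generation (the function name mirrors `datasets.fingerprint.generate_random_fingerprint`; the exact implementation may differ):

```python
import random

# A module-level RNG dedicated to fingerprinting. It is seeded from
# OS entropy at creation, so random.seed(...) in user code does not
# make fingerprints collide across runs.
_fingerprint_rng = random.Random()

def generate_random_fingerprint(nbits: int = 64) -> str:
    return f"{_fingerprint_rng.getrandbits(nbits):0{nbits // 4}x}"

random.seed(42)  # typical training-script seeding
print(generate_random_fingerprint())  # unaffected by the seed above
```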
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/6606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6606/timeline
closed
false
6,606
null
2024-01-26T15:05:34Z
null
true
2,090,188,376
https://api.github.com/repos/huggingface/datasets/issues/6605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6605/events
[]
null
2024-02-01T17:58:23Z
[]
https://github.com/huggingface/datasets/issues/6605
NONE
completed
null
null
[ "Addressed in https://github.com/huggingface/transformers/pull/28715." ]
ELI5 no longer available, but referenced in example code
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/6605/reactions" }
I_kwDODunzps58lb5Y
null
2024-01-19T10:21:52Z
https://api.github.com/repos/huggingface/datasets/issues/6605/comments
Here, example code is given: https://huggingface.co/docs/transformers/tasks/language_modeling This code and article reference the ELI5 dataset. ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5 "Defunct: Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data. Reddit recently [changed the terms of access](https://www.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/) to its API, making the source data for this dataset unavailable. " Please change the example code to use a different dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/81480344?v=4", "events_url": "https://api.github.com/users/drdsgvo/events{/privacy}", "followers_url": "https://api.github.com/users/drdsgvo/followers", "following_url": "https://api.github.com/users/drdsgvo/following{/other_user}", "gists_url": "https://api.github.com/users/drdsgvo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/drdsgvo", "id": 81480344, "login": "drdsgvo", "node_id": "MDQ6VXNlcjgxNDgwMzQ0", "organizations_url": "https://api.github.com/users/drdsgvo/orgs", "received_events_url": "https://api.github.com/users/drdsgvo/received_events", "repos_url": "https://api.github.com/users/drdsgvo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/drdsgvo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drdsgvo/subscriptions", "type": "User", "url": "https://api.github.com/users/drdsgvo" }
https://api.github.com/repos/huggingface/datasets/issues/6605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6605/timeline
closed
false
6,605
null
2024-02-01T17:58:22Z
null
false
2,089,713,945
https://api.github.com/repos/huggingface/datasets/issues/6604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6604/events
[]
null
2024-01-26T15:05:35Z
[]
https://github.com/huggingface/datasets/issues/6604
NONE
completed
null
null
[ "I've opened a PR with a fix.", "I don't think the PR fixes the root cause, since it still relies on the `random` library which will often have its seed fixed. I think the builtin `uuid.uuid4()` is a better choice: https://docs.python.org/3/library/uuid.html" ]
Transform fingerprint collisions due to setting fixed random seed
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6604/reactions" }
I_kwDODunzps58joEZ
null
2024-01-19T06:32:25Z
https://api.github.com/repos/huggingface/datasets/issues/6604/comments
### Describe the bug The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random seed, which is common practice: https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/sft/config_full.yaml#L45. This results in fingerprint collisions which leads to silently loading incorrect cache files corresponding to completely different datasets. ### Steps to reproduce the bug n/a ### Expected behavior Use `uuid` v4 instead of `random.getrandbits()` ### Environment info `datasets` main branch
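A minimal demonstration of the collision mechanism described above, and of why `uuid.uuid4()` avoids it:

```python
import random
import uuid

# Training scripts commonly fix the global seed for reproducibility.
random.seed(42)
a = random.getrandbits(128)
random.seed(42)
b = random.getrandbits(128)
print(a == b)  # True: a seeded RNG yields identical "random" fingerprints

# uuid4 draws from os.urandom, so a fixed random seed cannot affect it.
print(uuid.uuid4() == uuid.uuid4())  # False
```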
{ "avatar_url": "https://avatars.githubusercontent.com/u/6687910?v=4", "events_url": "https://api.github.com/users/normster/events{/privacy}", "followers_url": "https://api.github.com/users/normster/followers", "following_url": "https://api.github.com/users/normster/following{/other_user}", "gists_url": "https://api.github.com/users/normster/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/normster", "id": 6687910, "login": "normster", "node_id": "MDQ6VXNlcjY2ODc5MTA=", "organizations_url": "https://api.github.com/users/normster/orgs", "received_events_url": "https://api.github.com/users/normster/received_events", "repos_url": "https://api.github.com/users/normster/repos", "site_admin": false, "starred_url": "https://api.github.com/users/normster/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/normster/subscriptions", "type": "User", "url": "https://api.github.com/users/normster" }
https://api.github.com/repos/huggingface/datasets/issues/6604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6604/timeline
closed
false
6,604
null
2024-01-26T15:05:35Z
null
false
2,089,230,766
https://api.github.com/repos/huggingface/datasets/issues/6603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6603/events
[]
null
2024-01-28T04:01:15Z
[]
https://github.com/huggingface/datasets/issues/6603
NONE
null
null
null
[ "Unfortunately, I'm unable to reproduce this error. Can you share the reproducer?", "```\r\nds = datasets.Dataset.from_dict(dict(a=[i for i in range(100)]))\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-fn\") # this worked\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-folder/filename\") # this failed\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-folder/\") # this failed\r\n\r\n\r\nFileNotFoundError: [Errno 2] No such file or directory: '/tmp/whatever-folder/tmp1_izxvoo'\r\n```\r\n\r\nIt will fail if the filename parents do not exists. If we have `os.makedirs(\"/tmp/whatever-folder\")`, then it worked.\r\n\r\nMaybe add the `mkdir -p` into the map function?" ]
datasets map `cache_file_name` does not work
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6603/reactions" }
I_kwDODunzps58hyGu
null
2024-01-18T23:08:30Z
https://api.github.com/repos/huggingface/datasets/issues/6603/comments
### Describe the bug The documentation says the `datasets.Dataset.map` argument `cache_file_name` is a string path, but passing one does not work. ### Steps to reproduce the bug 1. pick a dataset 2. write a map function 3. do `ds.map(..., cache_file_name='some_filename')` 4. it crashes ### Expected behavior The cache file should be created at the specified path (or loaded from it if it already exists) instead of crashing with a `FileNotFoundError`. ### Environment info - `datasets` version: 2.16.0 - Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.12.2
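Until the library creates missing parent directories itself, a workaround consistent with the reproducer in the comments is to create the directory before calling `map`; this is a sketch, not an official fix:

```python
import os

from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(100))})

cache_file_name = "/tmp/whatever-folder/filename"
# map() writes a temporary file next to cache_file_name, so the
# parent directory must exist before the call.
os.makedirs(os.path.dirname(cache_file_name), exist_ok=True)
ds = ds.map(lambda item: {"b": item["a"] * 2}, cache_file_name=cache_file_name)
```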
{ "avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4", "events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}", "followers_url": "https://api.github.com/users/ChenchaoZhao/followers", "following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}", "gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ChenchaoZhao", "id": 35147961, "login": "ChenchaoZhao", "node_id": "MDQ6VXNlcjM1MTQ3OTYx", "organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs", "received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events", "repos_url": "https://api.github.com/users/ChenchaoZhao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions", "type": "User", "url": "https://api.github.com/users/ChenchaoZhao" }
https://api.github.com/repos/huggingface/datasets/issues/6603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6603/timeline
open
false
6,603
null
null
null
false
2,089,217,483
https://api.github.com/repos/huggingface/datasets/issues/6602
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6602/events
[]
null
2024-01-18T23:00:47Z
[]
https://github.com/huggingface/datasets/issues/6602
NONE
null
null
null
[]
Index error when data is large
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6602/reactions" }
I_kwDODunzps58hu3L
null
2024-01-18T23:00:47Z
https://api.github.com/repos/huggingface/datasets/issues/6602/comments
### Describe the bug At the `save_to_disk` step, the default `max_shard_size` is `500MB`. However, if one row of the dataset is larger than `500MB`, saving throws an index error. Without looking at the source code, the bug appears to be a wrong calculation of the number of shards: I believe it is `total_size / min(max_shard_size, row_size)` when it should be `total_size / max(max_shard_size, row_size)`. The workaround is to set a larger `max_shard_size`. ### Steps to reproduce the bug 1. create a dataset with large dense tensors per row 2. set a small `max_shard_size`, say 1MB 3. `save_to_disk` ### Expected behavior ``` raise IndexError(f"Index {index} out of range for dataset of size {size}.") IndexError: Index 10 out of range for dataset of size 10. ``` ### Environment info - `datasets` version: 2.16.0 - Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.12.2
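A sketch of the corrected shard-count arithmetic suggested above; the exact expression used inside `save_to_disk` is an assumption here and has not been checked against the source:

```python
def num_shards(total_size: int, max_shard_size: int, max_row_size: int) -> int:
    # A shard can never be smaller than the largest single row; clamping
    # with max() keeps the shard count from exceeding the number of rows,
    # which is what triggers the IndexError described above.
    effective_shard_size = max(max_shard_size, max_row_size)
    return max(1, -(-total_size // effective_shard_size))  # ceiling division

# 10 rows of ~1 GB each with max_shard_size=1 MB -> 10 shards, not 10240.
print(num_shards(total_size=10 * 2**30, max_shard_size=2**20, max_row_size=2**30))
```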
{ "avatar_url": "https://avatars.githubusercontent.com/u/35147961?v=4", "events_url": "https://api.github.com/users/ChenchaoZhao/events{/privacy}", "followers_url": "https://api.github.com/users/ChenchaoZhao/followers", "following_url": "https://api.github.com/users/ChenchaoZhao/following{/other_user}", "gists_url": "https://api.github.com/users/ChenchaoZhao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ChenchaoZhao", "id": 35147961, "login": "ChenchaoZhao", "node_id": "MDQ6VXNlcjM1MTQ3OTYx", "organizations_url": "https://api.github.com/users/ChenchaoZhao/orgs", "received_events_url": "https://api.github.com/users/ChenchaoZhao/received_events", "repos_url": "https://api.github.com/users/ChenchaoZhao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ChenchaoZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ChenchaoZhao/subscriptions", "type": "User", "url": "https://api.github.com/users/ChenchaoZhao" }
https://api.github.com/repos/huggingface/datasets/issues/6602/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6602/timeline
open
false
6,602
null
null
null
false
2,088,624,054
https://api.github.com/repos/huggingface/datasets/issues/6601
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6601/events
[]
null
2024-02-08T14:33:10Z
[]
https://github.com/huggingface/datasets/pull/6601
NONE
null
false
null
[ "Hi ! The metrics in `datasets` are deprecated in favor of https://github.com/huggingface/evaluate\r\n\r\nYou can open a PR here instead: https://huggingface.co/spaces/evaluate-metric/squad_v2/tree/main" ]
add safety checks when using only part of dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6601/reactions" }
PR_kwDODunzps5kcWN0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6601.diff", "html_url": "https://github.com/huggingface/datasets/pull/6601", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6601.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6601" }
2024-01-18T16:16:59Z
https://api.github.com/repos/huggingface/datasets/issues/6601/comments
Added some checks to prevent errors that arise when using `evaluate.py` on only a portion of the SQuAD 2.0 dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/63422923?v=4", "events_url": "https://api.github.com/users/benseddikismail/events{/privacy}", "followers_url": "https://api.github.com/users/benseddikismail/followers", "following_url": "https://api.github.com/users/benseddikismail/following{/other_user}", "gists_url": "https://api.github.com/users/benseddikismail/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/benseddikismail", "id": 63422923, "login": "benseddikismail", "node_id": "MDQ6VXNlcjYzNDIyOTIz", "organizations_url": "https://api.github.com/users/benseddikismail/orgs", "received_events_url": "https://api.github.com/users/benseddikismail/received_events", "repos_url": "https://api.github.com/users/benseddikismail/repos", "site_admin": false, "starred_url": "https://api.github.com/users/benseddikismail/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benseddikismail/subscriptions", "type": "User", "url": "https://api.github.com/users/benseddikismail" }
https://api.github.com/repos/huggingface/datasets/issues/6601/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6601/timeline
open
false
6,601
null
null
null
true
2,088,446,385
https://api.github.com/repos/huggingface/datasets/issues/6600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6600/events
[]
null
2024-01-23T14:42:32Z
[]
https://github.com/huggingface/datasets/issues/6600
NONE
null
null
null
[ "Hi! Parquet is the only format that supports complex/nested features such as `Translation`. So, this should work:\r\n```python\r\ntest_dataset = load_dataset(\"opus100\", name=\"en-fr\", split=\"test\")\r\n\r\n# Save with .to_parquet()\r\ntest_parquet_path = \"try_testset_save.parquet\"\r\ntest_dataset.to_parquet(test_parquet_path)\r\n\r\n# Load dataset from the Parquet\r\nloaded_dataset = load_dataset(\"parquet\", data_files=test_parquet_path)\r\nprint(test_dataset_fromfile[0][\"translation\"])\r\nprint(test_dataset_fromfile[0][\"translation\"][\"en\"])\r\n```", "Indeed this works great, thank you !" ]
Loading CSV exported dataset has unexpected format
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6600/reactions" }
I_kwDODunzps58eymx
null
2024-01-18T14:48:27Z
https://api.github.com/repos/huggingface/datasets/issues/6600/comments
### Describe the bug I wanted to be able to save an HF dataset for translations and load it again in another script, but I'm a bit confused by the documentation and the result I got, so I'm opening this issue to ask whether this behavior is expected. ### Steps to reproduce the bug The documentation I've mainly consulted is https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/loading_methods#datasets.load_dataset and https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset (where I've found `.to_csv()`) ```python # Load a dataset of translations test_dataset = load_dataset("opus100", name="en-fr", split="test") # Save with .to_csv() test_csv_path = "try_testset_save.csv" test_dataset.to_csv(test_csv_path) # Load dataset from the CSV loaded_dataset = load_dataset("csv", data_files=test_csv_path)["train"] print(loaded_dataset[0]["translation"]) print(loaded_dataset[0]["translation"]["en"]) ``` ``` Creating CSV from Arrow format: 100% 2/2 [00:00<00:00, 47.99ba/s] Downloading data files: 100% 1/1 [00:00<00:00, 65.33it/s] Extracting data files: 100% 1/1 [00:00<00:00, 42.10it/s] Generating train split: 2000/0 [00:00<00:00, 47486.09 examples/s] {'en': "She wasn't going to vaccinate her kid against polio, no way.", 'fr': 'Elle ne vaccinerait pas son enfant contre la polio. Pas question.'} --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[29], line 11 9 loaded_dataset = load_dataset("csv", data_files=test_csv_path) 10 print(test_dataset_fromfile[0]["translation"]) ---> 11 print(test_dataset_fromfile[0]["translation"]["en"]) TypeError: string indices must be integers, not 'str' ``` ### Expected behavior Each translation was saved as a stringified dict like `"{'en': ""She wasn't going to vaccinate her kid against polio, no way."", 'fr': 'Elle ne vaccinerait pas son enfant contre la polio. Pas question.'}"` whereas I would have expected 2 columns (the 1st with English segments, the 2nd with French segments), and I was expecting `load_dataset` to infer the feature type automatically, as I haven't seen anything about it in the documentation. Do you have an example of how to effectively save and load datasets of translations? ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.11.5 - `huggingface_hub` version: 0.16.4 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
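Besides the Parquet round-trip suggested in the comments, a hedged workaround is to parse the stringified dicts back after loading the CSV; this assumes every cell is a valid Python literal, which the output above suggests but does not guarantee:

```python
import ast

from datasets import load_dataset

loaded_dataset = load_dataset("csv", data_files="try_testset_save.csv")["train"]
# Each "translation" cell is a stringified dict; evaluate it safely
# back into a real dict so nested access works again.
loaded_dataset = loaded_dataset.map(
    lambda row: {"translation": ast.literal_eval(row["translation"])}
)
print(loaded_dataset[0]["translation"]["en"])
```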
{ "avatar_url": "https://avatars.githubusercontent.com/u/59572247?v=4", "events_url": "https://api.github.com/users/OrianeN/events{/privacy}", "followers_url": "https://api.github.com/users/OrianeN/followers", "following_url": "https://api.github.com/users/OrianeN/following{/other_user}", "gists_url": "https://api.github.com/users/OrianeN/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/OrianeN", "id": 59572247, "login": "OrianeN", "node_id": "MDQ6VXNlcjU5NTcyMjQ3", "organizations_url": "https://api.github.com/users/OrianeN/orgs", "received_events_url": "https://api.github.com/users/OrianeN/received_events", "repos_url": "https://api.github.com/users/OrianeN/repos", "site_admin": false, "starred_url": "https://api.github.com/users/OrianeN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OrianeN/subscriptions", "type": "User", "url": "https://api.github.com/users/OrianeN" }
https://api.github.com/repos/huggingface/datasets/issues/6600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6600/timeline
open
false
6,600
null
null
null
false
2,086,684,664
https://api.github.com/repos/huggingface/datasets/issues/6599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6599/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2024-01-23T10:42:17Z
[]
https://github.com/huggingface/datasets/issues/6599
NONE
not_planned
null
null
[ "Hi! Non-generic data processing is out of this library's scope, so it's downstream libraries/users' responsibility to implement such logic.", "That's fair. Thanks" ]
Easy way to segment into 30s snippets given an m4a file and a vtt file
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6599/reactions" }
I_kwDODunzps58YEf4
null
2024-01-17T17:51:40Z
https://api.github.com/repos/huggingface/datasets/issues/6599/comments
### Feature request Uploading datasets is straightforward thanks to the ability to push Audio to the Hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if this is not already possible). ### Motivation It's easy to create a VTT file from an audio file. If auto-segmenting were supported, creating datasets would be much faster. ### Your contribution I have made a custom script to do this, but it's not all that clean; it uses librosa and pydub. A rough sketch of the approach is included below.
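A rough sketch of the kind of helper described above, assuming fixed 30-second windows, the `pydub` package, and an ffmpeg install that can decode m4a; aligning chunks to the VTT cue boundaries is left out for brevity:

```python
from pydub import AudioSegment

def segment_audio(path: str, chunk_ms: int = 30_000) -> list[AudioSegment]:
    """Split an audio file into fixed-length chunks (the last may be shorter)."""
    audio = AudioSegment.from_file(path, format="m4a")
    # pydub indexes AudioSegment objects in milliseconds.
    return [audio[start : start + chunk_ms] for start in range(0, len(audio), chunk_ms)]

for i, chunk in enumerate(segment_audio("episode.m4a")):  # hypothetical filename
    chunk.export(f"episode_{i:04d}.wav", format="wav")
```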
{ "avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4", "events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}", "followers_url": "https://api.github.com/users/RonanKMcGovern/followers", "following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}", "gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RonanKMcGovern", "id": 78278410, "login": "RonanKMcGovern", "node_id": "MDQ6VXNlcjc4Mjc4NDEw", "organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs", "received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events", "repos_url": "https://api.github.com/users/RonanKMcGovern/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions", "type": "User", "url": "https://api.github.com/users/RonanKMcGovern" }
https://api.github.com/repos/huggingface/datasets/issues/6599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6599/timeline
closed
false
6,599
null
2024-01-22T15:35:49Z
null
false
2,084,236,605
https://api.github.com/repos/huggingface/datasets/issues/6598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6598/events
[]
null
2024-07-23T14:30:10Z
[]
https://github.com/huggingface/datasets/issues/6598
NONE
completed
null
null
[ "I am facing similar issue while reading a csv file from s3. Wondering if somebody has found a workaround. ", "same thing happened to other formats like parquet", "I am facing similar issue while reading a parquet file from s3.\r\ni try with every version between 2.14 to 2.16.1 but it dosen't work ", "Re-define the DownloadConfig might work:\r\n\r\n```\r\nclass ReviseDownloadConfig(DownloadConfig):\r\n def __post_init__(self, use_auth_token):\r\n if use_auth_token != \"deprecated\":\r\n warnings.warn(\r\n \"'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\\n\"\r\n f\"You can remove this warning by passing 'token={use_auth_token}' instead.\",\r\n FutureWarning,\r\n )\r\n self.token = use_auth_token\r\n\r\n def copy(self):\r\n return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})\r\n\r\ndownloadconfig = ReviseDownloadConfig()\r\n```\r\n", "> Re-define the DownloadConfig might work:\r\n> \r\n> ```\r\n> class ReviseDownloadConfig(DownloadConfig):\r\n> def __post_init__(self, use_auth_token):\r\n> if use_auth_token != \"deprecated\":\r\n> warnings.warn(\r\n> \"'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.\\n\"\r\n> f\"You can remove this warning by passing 'token={use_auth_token}' instead.\",\r\n> FutureWarning,\r\n> )\r\n> self.token = use_auth_token\r\n> ```\r\nThis seemed to work for me.\r\n", "use pandas and then convert to `Dataset`", "I am currently facing the same issue while using a custom loading script with files located in a remote S3 instance. I was using the `download_custom` functionality but now it is deprecated mentioning that I should use the native S3 loading, which is not working. \r\n\r\nAs stated before, the library forces the existence of a `hf` key in the `storage_options` variable, which is **not** accepted by `s3fs` : \r\n\r\n```python\r\n.../site-packages/s3fs/core.py\", line 516, in set_session\r\n self.session = aiobotocore.session.AioSession(**self.kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'hf'.\r\n````\r\n\r\nMeanwhile, if my `storage_options` var stays like:\r\n```python\r\n{'key': '...',\r\n 'secret': '...',\r\n 'client_kwargs': {'endpoint_url': '...'}}\r\n```\r\nit works alright. " ]
Unexpected keyword argument 'hf' when downloading CSV dataset from S3
{ "+1": 9, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 9, "url": "https://api.github.com/repos/huggingface/datasets/issues/6598/reactions" }
I_kwDODunzps58Ou09
null
2024-01-16T15:16:01Z
https://api.github.com/repos/huggingface/datasets/issues/6598/comments
### Describe the bug I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`: ``` TypeError: Session.__init__() got an unexpected keyword argument 'hf' ``` I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-with-unexpected-keyword-argument-error-in Full stacktrace: ``` .../site-packages/datasets/load.py:2549: in load_dataset builder_instance.download_and_prepare( .../site-packages/datasets/builder.py:1005: in download_and_prepare self._download_and_prepare( .../site-packages/datasets/builder.py:1078: in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) .../site-packages/datasets/packaged_modules/csv/csv.py:147: in _split_generators data_files = dl_manager.download_and_extract(self.config.data_files) .../site-packages/datasets/download/download_manager.py:562: in download_and_extract return self.extract(self.download(url_or_urls)) .../site-packages/datasets/download/download_manager.py:426: in download downloaded_path_or_paths = map_nested( .../site-packages/datasets/utils/py_utils.py:466: in map_nested mapped = [ .../site-packages/datasets/utils/py_utils.py:467: in <listcomp> _single_map_nested((function, obj, types, None, True, None)) .../site-packages/datasets/utils/py_utils.py:387: in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] .../site-packages/datasets/utils/py_utils.py:387: in <listcomp> mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] .../site-packages/datasets/utils/py_utils.py:370: in _single_map_nested return function(data_struct) .../site-packages/datasets/download/download_manager.py:451: in _download out = cached_path(url_or_filename, download_config=download_config) .../site-packages/datasets/utils/file_utils.py:188: in cached_path output_path = get_from_cache( ...1/site-packages/datasets/utils/file_utils.py:511: in get_from_cache response = fsspec_head(url, storage_options=storage_options) .../site-packages/datasets/utils/file_utils.py:316: in fsspec_head fs, _, paths = fsspec.get_fs_token_paths(url, storage_options=storage_options) .../site-packages/fsspec/core.py:622: in get_fs_token_paths fs = filesystem(protocol, **inkwargs) .../site-packages/fsspec/registry.py:290: in filesystem return cls(**storage_options) .../site-packages/fsspec/spec.py:79: in __call__ obj = super().__call__(*args, **kwargs) .../site-packages/s3fs/core.py:187: in __init__ self.s3 = self.connect() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <s3fs.core.S3FileSystem object at 0x1500a1310>, refresh = True def connect(self, refresh=True): """ Establish S3 connection object. Parameters ---------- refresh : bool Whether to create new session/client, even if a previous one with the same parameters already exists. If False (default), an existing one will be used if possible """ if refresh is False: # back compat: we store whole FS instance now return self.s3 anon, key, secret, kwargs, ckwargs, token, ssl = ( self.anon, self.key, self.secret, self.kwargs, self.client_kwargs, self.token, self.use_ssl) if not self.passed_in_session: > self.session = botocore.session.Session(**self.kwargs) E TypeError: Session.__init__() got an unexpected keyword argument 'hf' ``` ### Steps to reproduce the bug 1. Assuming a valid CSV file located at `s3://bucket/data.csv` 2. 
Run the below code: ``` storage_options = { "key": "...", "secret": "...", "client_kwargs": { "endpoint_url": "...", } } load_dataset("csv", data_files="s3://bucket/data.csv", storage_options=storage_options) ``` Encountered in version `2.16.1` but also reproduced in `2.16.0` and `2.15.0`. Note: I encountered this in a unit test using a `moto` mock for S3; however, since the error occurs before the session is instantiated, this should not matter. ### Expected behavior No exception is raised, the boto3 session is created successfully, and the CSV file is downloaded and returned as a dataset. === After some research I found that `DownloadConfig` has a `__post_init__` method that always forces this value to be set in its `storage_options`, even though in the case of an S3 location the storage options get passed on to the S3 session, which does not expect this parameter. I assume this parameter is needed when reading from the Hugging Face Hub and should not be set in this context. Unfortunately there is nothing the user can do to work around it. Even if you manually do something like: ``` download_config = DownloadConfig() del download_config.storage_options["hf"] load_dataset("csv", data_files="s3://bucket/data.csv", download_config=download_config) ``` the library will still reinsert this parameter when `download_config = self.download_config.copy()` runs in line 418 of `download_manager.py` (`DownloadManager.download`). Therefore `load_dataset` currently cannot be used to read a dataset in CSV format from an S3 location. ### Environment info - `datasets` version: 2.16.1 - Platform: macOS-14.2.1-arm64-arm-64bit - Python version: 3.11.7 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
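Until this is fixed, a workaround floated in the comments is to bypass `load_dataset` entirely and read through pandas, which forwards `storage_options` to `fsspec`/`s3fs` unmodified; a sketch assuming `s3fs` is installed:

```python
import pandas as pd

from datasets import Dataset

storage_options = {
    "key": "...",
    "secret": "...",
    "client_kwargs": {"endpoint_url": "..."},
}
# pandas hands storage_options straight to s3fs, so no unexpected
# "hf" key ever reaches the botocore session.
df = pd.read_csv("s3://bucket/data.csv", storage_options=storage_options)
ds = Dataset.from_pandas(df)
```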
{ "avatar_url": "https://avatars.githubusercontent.com/u/5592111?v=4", "events_url": "https://api.github.com/users/dguenms/events{/privacy}", "followers_url": "https://api.github.com/users/dguenms/followers", "following_url": "https://api.github.com/users/dguenms/following{/other_user}", "gists_url": "https://api.github.com/users/dguenms/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dguenms", "id": 5592111, "login": "dguenms", "node_id": "MDQ6VXNlcjU1OTIxMTE=", "organizations_url": "https://api.github.com/users/dguenms/orgs", "received_events_url": "https://api.github.com/users/dguenms/received_events", "repos_url": "https://api.github.com/users/dguenms/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dguenms/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dguenms/subscriptions", "type": "User", "url": "https://api.github.com/users/dguenms" }
https://api.github.com/repos/huggingface/datasets/issues/6598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6598/timeline
closed
false
6,598
null
2024-07-23T14:30:10Z
null
false
2,083,708,521
https://api.github.com/repos/huggingface/datasets/issues/6597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6597/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2024-02-05T12:29:37Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/6597
MEMBER
completed
null
null
[ "It is caused by these code lines: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1688-L1694", "Also note the information in the docstring: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1582-L1585\r\n\r\n> Also accepts `<dataset_name>`, which will default to the namespace of the logged-in user.\r\n\r\nThis behavior was \"reverted\" by the PR: \r\n- #6519\r\n\r\nWe have therefore contradictory requirements. We should decide:\r\n- whether to support passing dataset_namespace without user/org that defaults to the logged-in user (and not support canonical datasets)\r\n- or vice-versa, to support canonical datasets and not support passing only dataset_name\r\n\r\nAs canonical datasets are \"deprecated\" (and will eventually disappear), I would choose the first option. However, if so, the Space to convert datasets to Parquet will not work for canonical datasets: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet", "IIUC, this could also be \"fixed\" by `create_repo(\"dataset_name\")` not defaulting to `create_repo(\"user/dataset_name\")` (when the user's token is available), which would be consistent with the rest of the `HfApi` ops used in the `push_to_hub` implementation. This is a (small) breaking change for `huggingface_hub`, but justified to make the API more consistent.", "I tag @Wauplin to have his opinion as well.", "Hmm, creating repo with implicit namespace (e.g. `create_repo(\"dataset_name\")`) is a convenient feature used in a lot of integrations. It is not consistent with other HfApi methods specifically because it is the method to create repos. Once the repo is created, the return value provides the explicit repo_id (`namespace/repo_name`) that has to be passed to every `HfApi` method. Otherwise, libraries/scripts would often need to do a `whoami` call to get the namespace before creating a repo.\r\n\r\n Another solution for https://github.com/huggingface/datasets/issues/6597#issuecomment-1893746690 could be that implicit namespace is allowed (same as today) except if the `repo_id` is in a hard-coded list of canonical datasets. This list can be maintained automatically and should be slowly decreasing. **Caveat:** as a normal user I wouldn't be able to implicitly push to `imagenet-1k` if I wanted to push to `Wauplin/imagenet-1k`. Shouldn't be too problematic, no? Worse case, would need to add a `whoami` call and allow implicit-canonical-name for non-HF users for instance (a bit too over-engineered IMO but doable). ", "As canonical datasets are going to disappear in the following couple of months, I would not make any effort on their support.\r\n\r\nI propose reverting #6519, so that the behavior of `push_to_hub` is aligned with the one described in its dosctring: \"Also accepts `<dataset_name>`, which will default to the namespace of the logged-in user.\"\r\n\r\nI'm opening a PR." ]
Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6597/reactions" }
I_kwDODunzps58Mt5p
null
2024-01-16T11:27:07Z
https://api.github.com/repos/huggingface/datasets/issues/6597/comments
While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace. ## Steps to reproduce the bug The command: ```python commit_info = ds.push_to_hub( "caner", config_name="default", commit_message="Convert dataset to Parquet", commit_description="Convert dataset to Parquet.", create_pr=True, token=token, ) ``` creates the additional dataset `albertvillanova/caner`.
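Until the namespace handling is settled (see the discussion in the comments), passing a fully qualified `repo_id` avoids the implicit-namespace fallback entirely; a sketch with placeholder names:

```python
# An explicit "namespace/name" leaves create_repo no room to fall
# back to the logged-in user's namespace.
commit_info = ds.push_to_hub(
    "some-org/caner",  # hypothetical namespace, not a real repo
    config_name="default",
    commit_message="Convert dataset to Parquet",
    create_pr=True,
    token=token,
)
```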
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/6597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6597/timeline
closed
false
6,597
null
2024-02-05T12:29:37Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false