Column summary for this split, as rendered by the dataset viewer. Stats are distinct-value counts for `stringclasses`/`bool` columns, min–max lengths for `stringlengths`, and min–max values for `int64`; βŒ€ marks columns that contain nulls.

| column | dtype | stats |
|---|---|---|
| state | stringclasses | 2 values |
| created_at | stringlengths | 20–20 |
| active_lock_reason | null | |
| url | stringlengths | 61–61 |
| assignee | dict | |
| reactions | dict | |
| draft | bool | 2 classes |
| labels_url | stringlengths | 75–75 |
| user | dict | |
| html_url | stringlengths | 49–51 |
| assignees | list | |
| locked | bool | 1 class |
| updated_at | stringlengths | 20–20 |
| closed_at | stringlengths | 20–20 βŒ€ |
| milestone | dict | |
| comments | sequence | |
| state_reason | stringclasses | 3 values |
| labels | list | |
| title | stringlengths | 1–290 |
| author_association | stringclasses | 3 values |
| timeline_url | stringlengths | 70–70 |
| body | stringlengths | 0–228k βŒ€ |
| repository_url | stringclasses | 1 value |
| pull_request | dict | |
| id | int64 | 773M–2.11B |
| comments_url | stringlengths | 70–70 |
| node_id | stringlengths | 18–32 |
| performed_via_github_app | null | |
| number | int64 | 1.62k–6.64k |
| events_url | stringlengths | 68–68 |
| is_pull_request | bool | 2 classes |
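As a quick orientation, here is a minimal sketch of inspecting the same schema with `datasets`; the repo id is a placeholder, since this dump does not name the Hub dataset it was rendered from:

```python
from datasets import load_dataset

# Placeholder repo id: the dump does not say which Hub dataset it came from.
ds = load_dataset("someone/github-issues-dump", split="train")

print(ds.features)        # column -> dtype mapping, mirroring the table above
print(ds[0]["title"])     # first record's title
print(ds[0]["number"])    # an issue/PR number, per the 1.62k-6.64k range above
```

The records below follow the same column order as the table.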
Record 1: pull request #5810

state: closed
created_at: 2023-04-30T13:23:01Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5810
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5810/reactions" }
draft: false
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5810/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/3927621?v=4", "events_url": "https://api.github.com/users/yuukicammy/events{/privacy}", "followers_url": "https://api.github.com/users/yuukicammy/followers", "following_url": "https://api.github.com/users/yuukicammy/following{/other_user}", "gists_url": "https://api.github.com/users/yuukicammy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuukicammy", "id": 3927621, "login": "yuukicammy", "node_id": "MDQ6VXNlcjM5Mjc2MjE=", "organizations_url": "https://api.github.com/users/yuukicammy/orgs", "received_events_url": "https://api.github.com/users/yuukicammy/received_events", "repos_url": "https://api.github.com/users/yuukicammy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuukicammy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuukicammy/subscriptions", "type": "User", "url": "https://api.github.com/users/yuukicammy" }
html_url: https://github.com/huggingface/datasets/pull/5810
assignees: []
locked: false
updated_at: 2023-05-22T08:12:39Z
closed_at: 2023-05-22T08:05:31Z
milestone: null
comments:
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry, the local test passed because it was inadvertently testing the main branch. I am currently fixing where the test failed.", "- I have fixed the bug and addressed the above two points.\r\n- I have tested locally and confirmed that the test passes.\r\n\r\nPlease check the contents. @lhoestq \r\n\r\n5715a7e64bdd2951e6705aee58d592392e1538d6", "Cool ! You can run `make style` to fix code formatting to fix the ci", "I had forgotten about it. I did it. @lhoestq \r\n00248926a37c6f1387614aa388c36fdc105a59f5", "Thanks for putting this together @yuukicammy ! Looking forward to using this new addition ASAP. \r\n@lhoestq - sorry to bother you with this, but if this looks good to you, any chance we could get this merged in? \r\n\r\nThanks again to you both! ", "Yup there's just one test to remove and we can merge", "Sorry for my understanding wrong! Correspondence has been addressed. @lhoestq \r\n ca511b7b29fdde51ffd69b58bda79220472e9e94\r\n\r\nThanks for your comment! @brianhill11 ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006788 / 0.011353 (-0.004564) | 0.004372 / 0.011008 (-0.006636) | 0.097746 / 0.038508 (0.059238) | 0.034858 / 0.023109 (0.011749) | 0.298122 / 0.275898 (0.022224) | 0.335272 / 0.323480 (0.011792) | 0.005810 / 0.007986 (-0.002175) | 0.004944 / 0.004328 (0.000616) | 0.072352 / 0.004250 (0.068101) | 0.041730 / 0.037052 (0.004678) | 0.316482 / 0.258489 (0.057992) | 0.338710 / 0.293841 (0.044869) | 0.027975 / 0.128546 (-0.100571) | 0.008746 / 0.075646 (-0.066901) | 0.329336 / 0.419271 (-0.089935) | 0.051327 / 0.043533 (0.007794) | 0.300695 / 0.255139 (0.045556) | 0.322813 / 0.283200 (0.039613) | 0.101133 / 0.141683 (-0.040550) | 1.422767 / 1.452155 (-0.029388) | 1.538364 / 1.492716 (0.045648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.016698 / 0.018006 (-0.001308) | 0.447042 / 0.000490 (0.446552) | 0.007609 / 0.000200 (0.007409) | 0.000277 / 0.000054 (0.000223) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026732 / 0.037411 (-0.010679) | 0.108295 / 0.014526 (0.093769) | 0.116905 / 0.176557 (-0.059652) | 0.173166 / 0.737135 (-0.563969) | 0.122560 / 0.296338 (-0.173779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394893 / 0.215209 (0.179683) | 3.950314 / 2.077655 (1.872659) | 1.780576 / 1.504120 (0.276456) | 1.579855 / 1.541195 (0.038660) | 1.711197 / 1.468490 (0.242707) | 0.521469 / 4.584777 (-4.063308) | 3.838850 / 3.745712 (0.093138) | 3.101095 / 5.269862 (-2.168767) | 1.531574 / 4.565676 (-3.034102) | 0.065291 / 0.424275 (-0.358984) | 0.011979 / 0.007607 (0.004372) | 0.496543 / 0.226044 (0.270498) | 4.965446 / 2.268929 (2.696517) | 2.250788 / 55.444624 (-53.193837) | 1.923231 / 6.876477 (-4.953245) | 2.075372 / 2.142072 (-0.066700) | 0.638708 / 4.805227 (-4.166519) | 0.142048 / 6.500664 (-6.358616) | 0.064225 / 0.075469 (-0.011244) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211799 / 1.841788 (-0.629989) | 14.791822 / 8.074308 (6.717514) | 14.274993 / 10.191392 (4.083601) | 0.163942 / 0.680424 (-0.516482) | 0.017541 / 0.534201 (-0.516660) | 0.396440 / 0.579283 (-0.182843) | 0.427502 / 0.434364 (-0.006861) | 0.494273 / 0.540337 (-0.046064) | 0.586877 / 1.386936 (-0.800059) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006846 / 0.011353 (-0.004506) | 0.004854 / 0.011008 
(-0.006154) | 0.075654 / 0.038508 (0.037146) | 0.034295 / 0.023109 (0.011186) | 0.378095 / 0.275898 (0.102197) | 0.407833 / 0.323480 (0.084353) | 0.006155 / 0.007986 (-0.001830) | 0.004259 / 0.004328 (-0.000070) | 0.076195 / 0.004250 (0.071944) | 0.051901 / 0.037052 (0.014849) | 0.375027 / 0.258489 (0.116538) | 0.428189 / 0.293841 (0.134348) | 0.028814 / 0.128546 (-0.099733) | 0.009209 / 0.075646 (-0.066438) | 0.083681 / 0.419271 (-0.335591) | 0.049158 / 0.043533 (0.005625) | 0.366669 / 0.255139 (0.111530) | 0.388767 / 0.283200 (0.105568) | 0.107837 / 0.141683 (-0.033845) | 1.476354 / 1.452155 (0.024199) | 1.580160 / 1.492716 (0.087443) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218900 / 0.018006 (0.200894) | 0.445475 / 0.000490 (0.444985) | 0.000423 / 0.000200 (0.000223) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029740 / 0.037411 (-0.007671) | 0.115192 / 0.014526 (0.100666) | 0.122439 / 0.176557 (-0.054118) | 0.170639 / 0.737135 (-0.566496) | 0.128085 / 0.296338 (-0.168254) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437745 / 0.215209 (0.222536) | 4.385695 / 2.077655 (2.308040) | 2.189893 / 1.504120 (0.685773) | 2.023160 / 1.541195 (0.481965) | 2.112798 / 1.468490 (0.644308) | 0.522497 / 4.584777 (-4.062280) | 3.881356 / 3.745712 (0.135644) | 3.206090 / 5.269862 (-2.063772) | 1.308241 / 4.565676 (-3.257435) | 0.065635 / 0.424275 (-0.358640) | 0.012288 / 0.007607 (0.004681) | 0.537265 / 0.226044 (0.311220) | 5.361641 / 2.268929 (3.092712) | 2.638941 / 55.444624 (-52.805684) | 2.344717 / 6.876477 (-4.531759) | 2.437619 / 2.142072 (0.295546) | 0.645079 / 4.805227 (-4.160149) | 0.143852 / 6.500664 (-6.356812) | 0.065796 / 0.075469 (-0.009673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276588 / 1.841788 (-0.565200) | 15.239396 / 8.074308 (7.165088) | 13.150591 / 10.191392 (2.959199) | 0.163635 / 0.680424 (-0.516789) | 0.017533 / 0.534201 (-0.516668) | 0.397659 / 0.579283 (-0.181624) | 0.425589 / 0.434364 (-0.008774) | 0.466570 / 0.540337 (-0.073768) | 0.563953 / 1.386936 (-0.822983) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#807d5c5ed4f8db7761b92bed498b2193acce8fb7 \"CML watermark\")\n" ]
state_reason: null
labels: []
title: Add `fn_kwargs` to `map` and `filter` of `IterableDataset` and `IterableDatasetDict`
author_association: CONTRIBUTOR
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5810/timeline
body:
# Overview
I've added an argument `fn_kwargs` for the `map` and `filter` methods of the `IterableDataset` and `IterableDatasetDict` classes.

# Details
Currently, the `map` and `filter` methods of some classes related to `IterableDataset` do not allow specifying the arguments passed to the function. This pull request adds `fn_kwargs` to pass arguments to the mapping function, which lets users preprocess data more flexibly.

`fn_kwargs` is added to the following classes and methods (a description of the argument is also added):
1. class `FilteredExamplesIterable`
2. method `filter` of class `IterableDataset`
3. method `map` of class `IterableDatasetDict`
4. method `filter` of class `IterableDatasetDict`

# Example of changes
Here's an example of how to use the new functionality:
```python
from datasets import IterableDatasetDict

def preprocess_function(example, a=None, b=None):
    # do something
    return example

dataset = IterableDatasetDict(...)
dataset = dataset.map(preprocess_function, fn_kwargs={"a": 1, "b": 2})
```

# Related Issues
This pull request is related to the following issue: https://github.com/huggingface/datasets/issues/3444.

# Testing
I have added unit tests for the new functionality.

In test_iterable_dataset.py:
- Added `test_filtered_examples_iterable_with_fn_kwargs` for [1](#details).
- Added `test_iterable_dataset_filter` for [2](#details).
- Added `test_iterable_dataset_map_with_fn_kwargs`. This is not a newly added feature, but was added because it was not tested.

In test_dataset_dict.py:
- Added `_create_dummy_iterable_dataset` for [3](#details) and [4](#details).
- Added `_create_dummy_iterable_dataset_dict` for [3](#details) and [4](#details).
- Added `test_iterable_map` for [3](#details).
- Added `test_iterable_filter` for [4](#details).

Note that there are no tests for `IterableDatasetDict` on the current main branch. I considered writing tests for `IterableDatasetDict` in a new file, but decided to add them to the test file for `DatasetDict` (test_dataset_dict.py).

# Checklist
- [x] Format the code.
- [x] Added tests.
- [x] Passed tests locally.
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request: { "diff_url": "https://github.com/huggingface/datasets/pull/5810.diff", "html_url": "https://github.com/huggingface/datasets/pull/5810", "merged_at": "2023-05-22T08:05:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/5810.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5810" }
id: 1689917822
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5810/comments
node_id: PR_kwDODunzps5PdJHI
performed_via_github_app: null
number: 5810
events_url: https://api.github.com/repos/huggingface/datasets/issues/5810/events
is_pull_request: true
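Since this PR was merged (see `merged_at` above), `fn_kwargs` is also available on the streaming `filter`. A minimal sketch of the merged behaviour; the dataset name is just a convenient public example:

```python
from datasets import load_dataset

def long_enough(example, min_len=0):
    # `min_len` arrives via fn_kwargs instead of a closure or functools.partial.
    return len(example["text"]) >= min_len

ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
ds = ds.filter(long_enough, fn_kwargs={"min_len": 50})
print(next(iter(ds))["text"])
```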
Record 2: issue #5809

state: closed
created_at: 2023-04-30T06:12:04Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5809
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5809/reactions" }
draft: null
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5809/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/64122846?v=4", "events_url": "https://api.github.com/users/yulgok22/events{/privacy}", "followers_url": "https://api.github.com/users/yulgok22/followers", "following_url": "https://api.github.com/users/yulgok22/following{/other_user}", "gists_url": "https://api.github.com/users/yulgok22/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yulgok22", "id": 64122846, "login": "yulgok22", "node_id": "MDQ6VXNlcjY0MTIyODQ2", "organizations_url": "https://api.github.com/users/yulgok22/orgs", "received_events_url": "https://api.github.com/users/yulgok22/received_events", "repos_url": "https://api.github.com/users/yulgok22/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yulgok22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yulgok22/subscriptions", "type": "User", "url": "https://api.github.com/users/yulgok22" }
html_url: https://github.com/huggingface/datasets/issues/5809
assignees: []
locked: false
updated_at: 2023-07-21T14:11:00Z
closed_at: 2023-07-21T14:11:00Z
milestone: null
comments:
[ "Hi ! I don't remember exactly how it was done, but maybe you have to embed `f\"{title}<sep>{text}\"` ?\r\n\r\nUsing a HF tokenizer it corresponds to doing\r\n```python\r\ntokenized = tokenizer(titles, texts)\r\n```" ]
state_reason: completed
labels: []
title: wiki_dpr details for Open Domain Question Answering tasks
author_association: NONE
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5809/timeline
body:
Hey guys! Thanks for creating the wiki_dpr dataset! I am currently trying to combine wiki_dpr and my own datasets, but I don't know how to compute the embedding values the same way as wiki_dpr does. As an experiment, I embedded the text of id="7" of wiki_dpr, but the result was very different from wiki_dpr's.
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request: null
id: 1689797293
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5809/comments
node_id: I_kwDODunzps5kuEKt
performed_via_github_app: null
number: 5809
events_url: https://api.github.com/repos/huggingface/datasets/issues/5809/events
is_pull_request: false
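Following the maintainer's hint in the comment above (`tokenizer(titles, texts)`), here is a minimal sketch of embedding a passage as a title/text pair with the DPR context encoder; the checkpoint name is an assumption based on the wiki_dpr `nq` configs:

```python
import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

# Assumed checkpoint for the wiki_dpr "nq" configs; the "multiset" configs
# would use the corresponding multiset context encoder instead.
name = "facebook/dpr-ctx_encoder-single-nq-base"
tokenizer = DPRContextEncoderTokenizer.from_pretrained(name)
encoder = DPRContextEncoder.from_pretrained(name).eval()

# Title and text are passed as a pair, i.e. tokenized as "<title> [SEP] <text>".
inputs = tokenizer(["Anarchism"], ["Anarchism is a political philosophy..."],
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    embedding = encoder(**inputs).pooler_output  # shape (1, 768)
```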
Record 3: pull request #5807

state: closed
created_at: 2023-04-28T18:34:32Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5807
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5807/reactions" }
draft: false
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5807/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4", "events_url": "https://api.github.com/users/es94129/events{/privacy}", "followers_url": "https://api.github.com/users/es94129/followers", "following_url": "https://api.github.com/users/es94129/following{/other_user}", "gists_url": "https://api.github.com/users/es94129/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/es94129", "id": 12763339, "login": "es94129", "node_id": "MDQ6VXNlcjEyNzYzMzM5", "organizations_url": "https://api.github.com/users/es94129/orgs", "received_events_url": "https://api.github.com/users/es94129/received_events", "repos_url": "https://api.github.com/users/es94129/repos", "site_admin": false, "starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/es94129/subscriptions", "type": "User", "url": "https://api.github.com/users/es94129" }
html_url: https://github.com/huggingface/datasets/pull/5807
assignees: []
locked: false
updated_at: 2023-05-25T16:54:14Z
closed_at: 2023-05-25T16:54:14Z
milestone: null
comments:
[ "Hi @lhoestq or other maintainers, this is ready for review, could you please take a look?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5807). All of your documentation changes will be reflected on that endpoint.", "Per the discussion in #5798, will implement with `joblibspark` instead." ]
state_reason: null
labels: []
title: Support parallelized downloading in load_dataset with Spark
author_association: CONTRIBUTOR
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5807/timeline
body:
As proposed in https://github.com/huggingface/datasets/issues/5798, this adds support for parallelized downloading in `load_dataset` with Spark, which can speed up the process by distributing the workload to worker nodes. Parallelizing dataset processing is not supported in this PR.
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request:
{ "diff_url": "https://github.com/huggingface/datasets/pull/5807.diff", "html_url": "https://github.com/huggingface/datasets/pull/5807", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5807.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5807" }
id: 1688977237
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5807/comments
node_id: PR_kwDODunzps5PaKRE
performed_via_github_app: null
number: 5807
events_url: https://api.github.com/repos/huggingface/datasets/issues/5807/events
is_pull_request: true
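Per the last comment, this PR was closed in favour of a `joblibspark`-based implementation (see the #5798 discussion). A minimal sketch of that direction, with the caveat that the exact integration shipped in `datasets` after this PR and the details here are assumptions:

```python
from joblibspark import register_spark          # joblib's Spark backend
from datasets import load_dataset
from datasets.parallel import parallel_backend  # assumption: added after this PR

register_spark()  # registers the "spark" joblib backend

# Downloads are dispatched to Spark workers instead of local processes;
# the dataset name and num_proc value are illustrative.
with parallel_backend("spark"):
    ds = load_dataset("oscar", "unshuffled_deduplicated_frr", num_proc=2)
```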
Record 4: issue #5806

state: open
created_at: 2023-04-28T13:50:15Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5806
assignee:
{ "avatar_url": "https://avatars.githubusercontent.com/u/49894149?v=4", "events_url": "https://api.github.com/users/tsabbir96/events{/privacy}", "followers_url": "https://api.github.com/users/tsabbir96/followers", "following_url": "https://api.github.com/users/tsabbir96/following{/other_user}", "gists_url": "https://api.github.com/users/tsabbir96/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tsabbir96", "id": 49894149, "login": "tsabbir96", "node_id": "MDQ6VXNlcjQ5ODk0MTQ5", "organizations_url": "https://api.github.com/users/tsabbir96/orgs", "received_events_url": "https://api.github.com/users/tsabbir96/received_events", "repos_url": "https://api.github.com/users/tsabbir96/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tsabbir96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tsabbir96/subscriptions", "type": "User", "url": "https://api.github.com/users/tsabbir96" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5806/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5806/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16948304?v=4", "events_url": "https://api.github.com/users/s-JoL/events{/privacy}", "followers_url": "https://api.github.com/users/s-JoL/followers", "following_url": "https://api.github.com/users/s-JoL/following{/other_user}", "gists_url": "https://api.github.com/users/s-JoL/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/s-JoL", "id": 16948304, "login": "s-JoL", "node_id": "MDQ6VXNlcjE2OTQ4MzA0", "organizations_url": "https://api.github.com/users/s-JoL/orgs", "received_events_url": "https://api.github.com/users/s-JoL/received_events", "repos_url": "https://api.github.com/users/s-JoL/repos", "site_admin": false, "starred_url": "https://api.github.com/users/s-JoL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/s-JoL/subscriptions", "type": "User", "url": "https://api.github.com/users/s-JoL" }
html_url: https://github.com/huggingface/datasets/issues/5806
assignees:
[ { "avatar_url": "https://avatars.githubusercontent.com/u/49894149?v=4", "events_url": "https://api.github.com/users/tsabbir96/events{/privacy}", "followers_url": "https://api.github.com/users/tsabbir96/followers", "following_url": "https://api.github.com/users/tsabbir96/following{/other_user}", "gists_url": "https://api.github.com/users/tsabbir96/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tsabbir96", "id": 49894149, "login": "tsabbir96", "node_id": "MDQ6VXNlcjQ5ODk0MTQ5", "organizations_url": "https://api.github.com/users/tsabbir96/orgs", "received_events_url": "https://api.github.com/users/tsabbir96/received_events", "repos_url": "https://api.github.com/users/tsabbir96/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tsabbir96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tsabbir96/subscriptions", "type": "User", "url": "https://api.github.com/users/tsabbir96" } ]
locked: false
updated_at: 2024-01-21T16:38:29Z
closed_at: null
milestone: null
comments:
[ "Implementing this makes sense (e.g., `tensorflow_datasets`' imagefolder returns image filenames). Also, in Datasets 3.0, we plan only to store the bytes of an image/audio, not its path, so this feature would be useful when the path info is still needed.", "Hey @mariosasko, Can I work on this issue, this one seems interesting to implement. I have contributed to jupyterlab recently, and would love to contribute here as well. ", "@tsabbir96 if you are planning to start working on this, you can take on this issue by writing a comment with only the keyword: #self-assign", "#self-assign", "@albertvillanova thank you for letting me contribute here. \r\n@albertvillanova @mariosasko As I am totally new to this repo, could you tell me something more about this issue or perhaps give me some idea on how I can proceed with it? Thanks!", "Hello there, is this issue resolved? @tsabbir96 are you still working on it? Otherwise I would love to give it a try", "@EduardoPach This issue is still relevant, so feel free to work on it.", "Hey @mariosasko, I've taken the time to take a look at how we load the datasets usually. My main question now is about the final solution.\r\n\r\nSo the idea is that whenever we load the datasets we also add a new column in the _generate_tables() method from the builders called filename (or file_name) that should be related files contained in each split, right?\r\n\r\nDo you have any suggestions of where I could add that? ", "Is this issue still open? If yes, I'd like to work upon on it. Thanks", "> Is this issue still open? If yes, I'd like to work upon on it. Thanks\n\nDefinitely still open. I gave it a try, but then didn't get any feedback on my last question so I stopped . Feel free to work on it.", "It's still open, so feel free to work on it. This can be implemented by adding a param to the packaged builders' configs that inserts a column with file names (in `_generate_tables`) when `True`. Naming this column `file_name` sounds good to me.", "Hi is the issues still open, is see no activity since September but it shows that it is still assigned to tsabbir96. If \r\ntsabbir96 is not planning to continue, can i get it assigned to me." ]
state_reason: null
labels:
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
title: Return the name of the currently loaded file in the load_dataset function.
author_association: NONE
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5806/timeline
body:
### Feature request
Add an optional parameter `return_file_name` to the `load_dataset` function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output.

### Motivation
When training large language models, machine problems may interrupt the training process. In such cases, it is common to load a previously saved checkpoint to resume training. I would like to be able to obtain the names of the previously trained data shards, so that I can skip these parts of the data during continued training to avoid overfitting and redundant training time.

### Your contribution
I currently use a dataset in jsonl format, so I am primarily interested in the json format. I suggest adding the file name to the returned table here: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/json/json.py#L92.
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request: null
id: 1688598095
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5806/comments
node_id: I_kwDODunzps5kpfZP
performed_via_github_app: null
number: 5806
events_url: https://api.github.com/repos/huggingface/datasets/issues/5806/events
is_pull_request: false
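The issue points at the json builder's `_generate_tables` (json.py#L92) as the place to attach the file name. A minimal sketch of the idea with a hypothetical helper, using plain pyarrow:

```python
import pyarrow as pa

def with_file_name(table: pa.Table, file: str) -> pa.Table:
    # Hypothetical helper: append a constant "file_name" column so every row
    # records the shard it was read from.
    return table.append_column("file_name", pa.array([file] * table.num_rows))

table = pa.table({"text": ["a", "b"]})
print(with_file_name(table, "data/shard-00000.jsonl").column_names)
# -> ['text', 'file_name']
```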
Record 5: issue #5805

state: open
created_at: 2023-04-28T13:26:22Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5805
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5805/reactions" }
draft: null
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5805/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
html_url: https://github.com/huggingface/datasets/issues/5805
assignees: []
locked: false
updated_at: 2023-06-23T14:58:44Z
closed_at: null
milestone: null
comments:
[ "I can work on this. The link to the tutorial seems to be broken though @polinaeterna. ", "@isunitha98selvan would be great, thank you! which link are you talking about? I think it should work: https://huggingface.co/docs/datasets/create_dataset" ]
state_reason: null
labels:
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
title: Improve `Create a dataset` tutorial
author_association: CONTRIBUTOR
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5805/timeline
body:
Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading.

1. The **Folder-based builders** section says that we have two folder-based builders as standard builders, but we also have similar builders (that can be created from a directory with data in the required format) for `csv`, `json/jsonl`, `parquet` and `txt` files. We cover these loaders in the separate [guide for loading](https://huggingface.co/docs/datasets/loading#local-and-remote-files), but it's worth briefly mentioning them in the beginner tutorial because they are more common, and for consistency. It would be helpful to add a link to the full guide.
2. The **From local files** section lists methods for creating a dataset from in-memory data, which are also described in the [loading guide](https://huggingface.co/docs/datasets/loading#inmemory-data).

Maybe we should actually rethink and restructure this tutorial somehow.
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request: null
id: 1688558577
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5805/comments
node_id: I_kwDODunzps5kpVvx
performed_via_github_app: null
number: 5805
events_url: https://api.github.com/repos/huggingface/datasets/issues/5805/events
is_pull_request: false
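For reference, the file-based builders this issue wants the tutorial to mention are used like the folder-based ones; a minimal sketch (the paths are hypothetical):

```python
from datasets import Dataset, load_dataset

# File-based builders, analogous to imagefolder/audiofolder:
ds_csv = load_dataset("csv", data_files="data/train.csv")      # hypothetical path
ds_json = load_dataset("json", data_files="data/train.jsonl")  # hypothetical path

# In-memory data, as in the "From local files" part of the tutorial:
ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
```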
Record 6: pull request #5804

state: closed
created_at: 2023-04-28T10:10:01Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5804
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5804/reactions" }
draft: false
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5804/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
html_url: https://github.com/huggingface/datasets/pull/5804
assignees: []
locked: false
updated_at: 2023-04-28T10:18:51Z
closed_at: 2023-04-28T10:10:29Z
milestone: null
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5804). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006448 / 0.011353 (-0.004905) | 0.004440 / 0.011008 (-0.006568) | 0.097837 / 0.038508 (0.059328) | 0.027754 / 0.023109 (0.004645) | 0.306462 / 0.275898 (0.030564) | 0.332454 / 0.323480 (0.008975) | 0.004984 / 0.007986 (-0.003001) | 0.004703 / 0.004328 (0.000375) | 0.075213 / 0.004250 (0.070962) | 0.036524 / 0.037052 (-0.000529) | 0.310149 / 0.258489 (0.051659) | 0.346392 / 0.293841 (0.052552) | 0.031012 / 0.128546 (-0.097534) | 0.011598 / 0.075646 (-0.064049) | 0.323066 / 0.419271 (-0.096206) | 0.042945 / 0.043533 (-0.000588) | 0.302286 / 0.255139 (0.047147) | 0.327813 / 0.283200 (0.044614) | 0.092540 / 0.141683 (-0.049143) | 1.532893 / 1.452155 (0.080739) | 1.556676 / 1.492716 (0.063960) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195126 / 0.018006 (0.177120) | 0.399623 / 0.000490 (0.399133) | 0.003176 / 0.000200 (0.002976) | 0.000068 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023612 / 0.037411 (-0.013799) | 0.097794 / 0.014526 (0.083268) | 0.104665 / 0.176557 (-0.071891) | 0.167145 / 0.737135 (-0.569990) | 0.108769 / 0.296338 (-0.187570) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.437818 / 0.215209 (0.222608) | 4.354896 / 2.077655 (2.277242) | 2.092832 / 1.504120 (0.588712) | 1.957630 / 1.541195 (0.416435) | 2.033135 / 1.468490 (0.564645) | 0.702316 / 4.584777 (-3.882461) | 3.448035 / 3.745712 (-0.297678) | 1.906762 / 5.269862 (-3.363100) | 1.253274 / 4.565676 (-3.312402) | 0.082486 / 0.424275 (-0.341789) | 0.012442 / 0.007607 (0.004835) | 0.532096 / 0.226044 (0.306052) | 5.366580 / 2.268929 (3.097652) | 2.441904 / 55.444624 (-53.002720) | 2.112116 / 6.876477 (-4.764361) | 2.185471 / 2.142072 (0.043398) | 0.797905 / 4.805227 (-4.007322) | 0.149811 / 6.500664 (-6.350853) | 0.066507 / 0.075469 (-0.008962) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206300 / 1.841788 (-0.635487) | 13.620851 / 8.074308 (5.546543) | 14.190666 / 10.191392 (3.999274) | 0.142343 / 0.680424 (-0.538081) | 0.016867 / 0.534201 (-0.517334) | 0.381557 / 0.579283 (-0.197726) | 0.373935 / 0.434364 (-0.060429) | 0.437856 / 0.540337 (-0.102481) | 0.525235 / 1.386936 (-0.861701) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006598 / 0.011353 (-0.004755) | 0.004487 / 0.011008 (-0.006522) | 0.077582 / 0.038508 (0.039073) | 0.028008 / 0.023109 (0.004899) | 0.341602 / 0.275898 (0.065704) | 0.377105 / 0.323480 (0.053625) | 0.004999 / 0.007986 (-0.002986) | 0.004791 / 0.004328 (0.000462) | 0.076418 / 0.004250 (0.072167) | 0.038347 / 0.037052 (0.001295) | 0.343196 / 0.258489 (0.084707) | 0.382459 / 0.293841 (0.088618) | 0.030597 / 0.128546 (-0.097950) | 0.011579 / 0.075646 (-0.064067) | 0.085876 / 0.419271 (-0.333396) | 0.043241 / 0.043533 (-0.000292) | 0.343754 / 0.255139 (0.088615) | 0.380689 / 0.283200 (0.097489) | 0.096015 / 0.141683 (-0.045668) | 1.464419 / 1.452155 (0.012264) | 1.574010 / 1.492716 (0.081294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.156433 / 0.018006 (0.138427) | 0.403179 / 0.000490 (0.402690) | 0.002415 / 0.000200 
(0.002215) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024946 / 0.037411 (-0.012465) | 0.100568 / 0.014526 (0.086042) | 0.106440 / 0.176557 (-0.070117) | 0.158457 / 0.737135 (-0.578678) | 0.110774 / 0.296338 (-0.185564) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434734 / 0.215209 (0.219525) | 4.343874 / 2.077655 (2.266220) | 2.059759 / 1.504120 (0.555639) | 1.855124 / 1.541195 (0.313930) | 1.908567 / 1.468490 (0.440077) | 0.695283 / 4.584777 (-3.889494) | 3.347724 / 3.745712 (-0.397988) | 2.979498 / 5.269862 (-2.290364) | 1.532040 / 4.565676 (-3.033636) | 0.083021 / 0.424275 (-0.341254) | 0.012522 / 0.007607 (0.004915) | 0.540934 / 0.226044 (0.314890) | 5.385690 / 2.268929 (3.116762) | 2.507409 / 55.444624 (-52.937216) | 2.160537 / 6.876477 (-4.715939) | 2.269195 / 2.142072 (0.127123) | 0.804718 / 4.805227 (-4.000509) | 0.152432 / 6.500664 (-6.348232) | 0.068783 / 0.075469 (-0.006686) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.294698 / 1.841788 (-0.547090) | 14.152792 / 8.074308 (6.078484) | 14.233132 / 10.191392 (4.041740) | 0.143655 / 0.680424 (-0.536768) | 0.016844 / 0.534201 (-0.517357) | 0.380246 / 0.579283 (-0.199037) | 0.381730 / 0.434364 (-0.052633) | 0.456838 / 0.540337 (-0.083499) | 0.543677 / 1.386936 (-0.843259) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b28d5610887f2e107765f5f1557679184db08214 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008586 / 0.011353 (-0.002767) | 0.005886 / 0.011008 (-0.005122) | 0.114522 / 0.038508 (0.076014) | 0.040966 / 0.023109 (0.017857) | 0.366655 / 0.275898 (0.090757) | 0.408765 / 0.323480 (0.085285) | 0.006822 / 0.007986 (-0.001164) | 0.004508 / 0.004328 (0.000180) | 0.084715 / 0.004250 (0.080465) | 0.054007 / 0.037052 (0.016954) | 0.380500 / 0.258489 (0.122011) | 0.410377 / 0.293841 (0.116536) | 0.041040 / 0.128546 (-0.087507) | 0.013940 / 0.075646 (-0.061707) | 0.398456 / 0.419271 (-0.020816) | 0.059315 / 0.043533 (0.015782) | 0.353640 / 0.255139 (0.098501) | 0.388682 / 0.283200 (0.105482) | 0.121744 / 0.141683 (-0.019939) | 1.729306 / 1.452155 (0.277151) | 1.824768 / 1.492716 (0.332052) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228806 / 0.018006 (0.210800) | 0.492790 / 0.000490 (0.492300) | 0.010815 / 0.000200 (0.010615) | 0.000372 / 0.000054 (0.000318) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031750 / 0.037411 (-0.005662) | 0.127160 / 0.014526 (0.112635) | 0.136717 / 0.176557 (-0.039839) | 0.205590 / 0.737135 (-0.531545) | 0.142596 / 0.296338 (-0.153742) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.486419 / 0.215209 (0.271210) | 4.858572 / 2.077655 (2.780918) | 2.173867 / 1.504120 (0.669747) | 1.934619 / 1.541195 (0.393424) | 2.104185 / 1.468490 (0.635695) | 0.837913 / 4.584777 (-3.746864) | 4.552192 / 3.745712 (0.806480) | 2.565040 / 5.269862 (-2.704822) | 1.808499 / 4.565676 (-2.757178) | 0.103283 / 0.424275 (-0.320993) | 0.015040 / 0.007607 (0.007433) | 0.602325 / 0.226044 (0.376281) | 6.038655 / 2.268929 (3.769727) | 2.759789 / 55.444624 (-52.684835) | 2.330990 / 6.876477 (-4.545487) | 2.404111 / 2.142072 (0.262038) | 1.011637 / 4.805227 (-3.793590) | 0.202142 / 6.500664 (-6.298522) | 0.079496 / 0.075469 (0.004026) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.429543 / 1.841788 (-0.412245) | 18.052409 / 8.074308 (9.978101) | 16.989154 / 10.191392 (6.797762) | 0.208981 / 0.680424 (-0.471443) | 0.020490 / 0.534201 (-0.513711) | 0.502746 / 0.579283 (-0.076537) | 0.491769 / 
0.434364 (0.057405) | 0.581970 / 0.540337 (0.041632) | 0.695816 / 1.386936 (-0.691120) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008449 / 0.011353 (-0.002904) | 0.006633 / 0.011008 (-0.004375) | 0.088638 / 0.038508 (0.050130) | 0.040013 / 0.023109 (0.016904) | 0.413108 / 0.275898 (0.137210) | 0.446310 / 0.323480 (0.122830) | 0.006515 / 0.007986 (-0.001471) | 0.006223 / 0.004328 (0.001894) | 0.089823 / 0.004250 (0.085573) | 0.052029 / 0.037052 (0.014977) | 0.407263 / 0.258489 (0.148774) | 0.449416 / 0.293841 (0.155576) | 0.041810 / 0.128546 (-0.086736) | 0.014604 / 0.075646 (-0.061042) | 0.103728 / 0.419271 (-0.315543) | 0.058212 / 0.043533 (0.014679) | 0.408936 / 0.255139 (0.153797) | 0.436727 / 0.283200 (0.153528) | 0.124344 / 0.141683 (-0.017339) | 1.752112 / 1.452155 (0.299957) | 1.859104 / 1.492716 (0.366387) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231172 / 0.018006 (0.213166) | 0.502974 / 0.000490 (0.502485) | 0.005586 / 0.000200 (0.005386) | 0.000137 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034097 / 0.037411 (-0.003314) | 0.133780 / 0.014526 (0.119254) | 0.142321 / 0.176557 (-0.034236) | 0.199807 / 0.737135 (-0.537329) | 0.150073 / 0.296338 (-0.146266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.515658 / 0.215209 (0.300449) | 5.129783 / 2.077655 (3.052129) | 2.534767 / 1.504120 (1.030648) | 
2.352468 / 1.541195 (0.811274) | 2.430708 / 1.468490 (0.962218) | 0.850087 / 4.584777 (-3.734690) | 4.529622 / 3.745712 (0.783910) | 2.451986 / 5.269862 (-2.817876) | 1.569568 / 4.565676 (-2.996109) | 0.102907 / 0.424275 (-0.321368) | 0.014420 / 0.007607 (0.006813) | 0.635124 / 0.226044 (0.409080) | 6.260496 / 2.268929 (3.991568) | 3.094984 / 55.444624 (-52.349640) | 2.780629 / 6.876477 (-4.095847) | 2.947620 / 2.142072 (0.805548) | 1.002397 / 4.805227 (-3.802830) | 0.200502 / 6.500664 (-6.300162) | 0.076577 / 0.075469 (0.001107) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.505958 / 1.841788 (-0.335829) | 18.364986 / 8.074308 (10.290678) | 16.707214 / 10.191392 (6.515822) | 0.210976 / 0.680424 (-0.469447) | 0.022077 / 0.534201 (-0.512124) | 0.516174 / 0.579283 (-0.063109) | 0.502469 / 0.434364 (0.068105) | 0.626790 / 0.540337 (0.086453) | 0.747230 / 1.386936 (-0.639706) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bc5fef5b6d91f009e4101684adcb374df2c170f6 \"CML watermark\")\n" ]
state_reason: null
labels: []
title: Set dev version
author_association: MEMBER
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/5804/timeline
body: null
repository_url: https://api.github.com/repos/huggingface/datasets
pull_request:
{ "diff_url": "https://github.com/huggingface/datasets/pull/5804.diff", "html_url": "https://github.com/huggingface/datasets/pull/5804", "merged_at": "2023-04-28T10:10:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/5804.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5804" }
id: 1688285666
comments_url: https://api.github.com/repos/huggingface/datasets/issues/5804/comments
node_id: PR_kwDODunzps5PX0Dk
performed_via_github_app: null
number: 5804
events_url: https://api.github.com/repos/huggingface/datasets/issues/5804/events
is_pull_request: true
Record 7: pull request #5803

state: closed
created_at: 2023-04-28T09:52:11Z
active_lock_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/5803
assignee: null
reactions:
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5803/reactions" }
draft: false
labels_url: https://api.github.com/repos/huggingface/datasets/issues/5803/labels{/name}
user:
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
html_url: https://github.com/huggingface/datasets/pull/5803
assignees: []
locked: false
updated_at: 2023-04-28T10:18:56Z
closed_at: 2023-04-28T09:54:43Z
milestone: null
comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5803). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008303 / 0.011353 (-0.003050) | 0.005681 / 0.011008 (-0.005327) | 0.111830 / 0.038508 (0.073322) | 0.039222 / 0.023109 (0.016112) | 0.336773 / 0.275898 (0.060875) | 0.376673 / 0.323480 (0.053193) | 0.006756 / 0.007986 (-0.001230) | 0.006078 / 0.004328 (0.001749) | 0.083552 / 0.004250 (0.079301) | 0.054430 / 0.037052 (0.017377) | 0.337310 / 0.258489 (0.078821) | 0.386138 / 0.293841 (0.092297) | 0.040068 / 0.128546 (-0.088478) | 0.013895 / 0.075646 (-0.061751) | 0.384174 / 0.419271 (-0.035097) | 0.058244 / 0.043533 (0.014711) | 0.342410 / 0.255139 (0.087271) | 0.362417 / 0.283200 (0.079217) | 0.123470 / 0.141683 (-0.018213) | 1.662938 / 1.452155 (0.210784) | 1.786488 / 1.492716 (0.293771) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232629 / 0.018006 (0.214622) | 0.478252 / 0.000490 (0.477762) | 0.008519 / 0.000200 (0.008319) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031222 / 0.037411 (-0.006190) | 0.125875 / 0.014526 (0.111350) | 0.138995 / 0.176557 (-0.037562) | 0.213073 / 0.737135 (-0.524062) | 0.141848 / 0.296338 (-0.154490) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.463648 / 0.215209 (0.248439) | 4.582969 / 2.077655 (2.505314) | 2.104622 / 1.504120 (0.600502) | 1.887697 / 1.541195 (0.346502) | 1.946096 / 1.468490 (0.477606) | 0.809008 / 4.584777 (-3.775769) | 4.527871 / 3.745712 (0.782159) | 4.862721 / 5.269862 (-0.407141) | 2.423257 / 4.565676 (-2.142419) | 0.101080 / 0.424275 (-0.323196) | 0.014767 / 0.007607 (0.007160) | 0.574471 / 0.226044 (0.348427) | 5.746445 / 2.268929 (3.477516) | 2.682584 / 55.444624 (-52.762040) | 2.320113 / 6.876477 (-4.556364) | 2.474530 / 2.142072 (0.332458) | 0.992979 / 4.805227 (-3.812249) | 0.200812 / 6.500664 (-6.299852) | 0.076291 / 0.075469 (0.000822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.395533 / 1.841788 (-0.446254) | 17.418803 / 8.074308 (9.344495) | 16.584875 / 10.191392 (6.393483) | 0.167739 / 0.680424 (-0.512685) | 0.020923 / 0.534201 (-0.513278) | 0.500788 / 0.579283 (-0.078496) | 0.510270 / 0.434364 (0.075906) | 0.589608 / 0.540337 (0.049270) | 0.694233 / 1.386936 (-0.692703) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008440 / 0.011353 (-0.002913) | 0.005871 / 0.011008 (-0.005137) | 0.085805 / 0.038508 (0.047297) | 0.039324 / 0.023109 (0.016215) | 0.400587 / 0.275898 (0.124689) | 0.431729 / 0.323480 (0.108249) | 0.006557 / 0.007986 (-0.001429) | 0.005778 / 0.004328 (0.001450) | 0.084394 / 0.004250 (0.080144) | 0.055274 / 0.037052 (0.018222) | 0.410568 / 0.258489 (0.152079) | 0.439952 / 0.293841 (0.146111) | 0.040335 / 0.128546 (-0.088211) | 0.013968 / 0.075646 (-0.061679) | 0.098765 / 0.419271 (-0.320507) | 0.055897 / 0.043533 (0.012364) | 0.387584 / 0.255139 (0.132445) | 0.412568 / 0.283200 (0.129368) | 0.120393 / 0.141683 (-0.021290) | 1.730996 / 1.452155 (0.278841) | 1.821538 / 1.492716 (0.328822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245688 / 0.018006 (0.227682) | 0.484888 / 0.000490 (0.484398) | 0.000485 / 0.000200 
(0.000285) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032340 / 0.037411 (-0.005072) | 0.130819 / 0.014526 (0.116293) | 0.138491 / 0.176557 (-0.038065) | 0.196902 / 0.737135 (-0.540233) | 0.145404 / 0.296338 (-0.150935) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.487643 / 0.215209 (0.272434) | 4.818956 / 2.077655 (2.741301) | 2.332316 / 1.504120 (0.828196) | 2.102018 / 1.541195 (0.560823) | 2.156743 / 1.468490 (0.688253) | 0.803365 / 4.584777 (-3.781412) | 4.308561 / 3.745712 (0.562849) | 2.373331 / 5.269862 (-2.896530) | 1.539474 / 4.565676 (-3.026202) | 0.099081 / 0.424275 (-0.325194) | 0.014627 / 0.007607 (0.007020) | 0.609883 / 0.226044 (0.383838) | 6.092402 / 2.268929 (3.823474) | 2.858137 / 55.444624 (-52.586488) | 2.463256 / 6.876477 (-4.413220) | 2.637048 / 2.142072 (0.494976) | 0.959552 / 4.805227 (-3.845676) | 0.194170 / 6.500664 (-6.306495) | 0.075231 / 0.075469 (-0.000238) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.516502 / 1.841788 (-0.325285) | 18.077893 / 8.074308 (10.003585) | 16.507961 / 10.191392 (6.316569) | 0.171643 / 0.680424 (-0.508780) | 0.020378 / 0.534201 (-0.513823) | 0.491508 / 0.579283 (-0.087775) | 0.492136 / 0.434364 (0.057772) | 0.602258 / 0.540337 (0.061920) | 0.719882 / 1.386936 (-0.667054) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#330ac3e95fd3f2d61bac31b5b9c24399a5b54723 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006572 / 0.011353 (-0.004781) | 0.004647 / 0.011008 (-0.006362) | 0.098277 / 0.038508 (0.059769) | 0.027937 / 0.023109 (0.004828) | 0.339833 / 0.275898 (0.063935) | 0.398305 / 0.323480 (0.074825) | 0.005093 / 0.007986 (-0.002893) | 0.003374 / 0.004328 (-0.000954) | 0.075287 / 0.004250 (0.071037) | 0.037355 / 0.037052 (0.000303) | 0.339779 / 0.258489 (0.081290) | 0.403756 / 0.293841 (0.109915) | 0.030705 / 0.128546 (-0.097841) | 0.011596 / 0.075646 (-0.064050) | 0.323809 / 0.419271 (-0.095463) | 0.043357 / 0.043533 (-0.000176) | 0.342817 / 0.255139 (0.087678) | 0.386330 / 0.283200 (0.103130) | 0.088229 / 0.141683 (-0.053454) | 1.466017 / 1.452155 (0.013862) | 1.566551 / 1.492716 (0.073835) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196276 / 0.018006 (0.178269) | 0.420321 / 0.000490 (0.419831) | 0.002234 / 0.000200 (0.002034) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023999 / 0.037411 (-0.013412) | 0.095117 / 0.014526 (0.080592) | 0.102544 / 0.176557 (-0.074013) | 0.164796 / 0.737135 (-0.572340) | 0.107030 / 0.296338 (-0.189309) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429299 / 0.215209 (0.214089) | 4.272503 / 2.077655 (2.194849) | 2.101890 / 1.504120 (0.597771) | 1.978907 / 1.541195 (0.437713) | 2.008993 / 1.468490 (0.540503) | 0.695171 / 4.584777 (-3.889606) | 3.427050 / 3.745712 (-0.318662) | 1.892945 / 5.269862 (-3.376917) | 1.247156 / 4.565676 (-3.318521) | 0.082576 / 0.424275 (-0.341699) | 0.012526 / 0.007607 (0.004918) | 0.526338 / 0.226044 (0.300293) | 5.313855 / 2.268929 (3.044927) | 2.421134 / 55.444624 (-53.023490) | 2.072026 / 6.876477 (-4.804451) | 2.159846 / 2.142072 (0.017773) | 0.800753 / 4.805227 (-4.004474) | 0.150507 / 6.500664 (-6.350157) | 0.066378 / 0.075469 (-0.009091) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218709 / 1.841788 (-0.623079) | 13.649239 / 8.074308 (5.574931) | 13.952762 / 10.191392 (3.761370) | 0.141967 / 0.680424 (-0.538457) | 0.016443 / 0.534201 (-0.517758) | 0.380408 / 0.579283 (-0.198875) | 0.377693 / 
0.434364 (-0.056671) | 0.439819 / 0.540337 (-0.100518) | 0.529667 / 1.386936 (-0.857269) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006722 / 0.011353 (-0.004630) | 0.004495 / 0.011008 (-0.006513) | 0.075459 / 0.038508 (0.036951) | 0.028135 / 0.023109 (0.005026) | 0.349904 / 0.275898 (0.074006) | 0.390620 / 0.323480 (0.067140) | 0.005175 / 0.007986 (-0.002810) | 0.004720 / 0.004328 (0.000392) | 0.074243 / 0.004250 (0.069993) | 0.039084 / 0.037052 (0.002032) | 0.352486 / 0.258489 (0.093997) | 0.397549 / 0.293841 (0.103708) | 0.030596 / 0.128546 (-0.097950) | 0.011627 / 0.075646 (-0.064020) | 0.083394 / 0.419271 (-0.335878) | 0.042155 / 0.043533 (-0.001378) | 0.345668 / 0.255139 (0.090529) | 0.383474 / 0.283200 (0.100275) | 0.096530 / 0.141683 (-0.045153) | 1.493360 / 1.452155 (0.041206) | 1.572259 / 1.492716 (0.079543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.162605 / 0.018006 (0.144599) | 0.409513 / 0.000490 (0.409023) | 0.002029 / 0.000200 (0.001829) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025824 / 0.037411 (-0.011588) | 0.102439 / 0.014526 (0.087913) | 0.109515 / 0.176557 (-0.067041) | 0.160650 / 0.737135 (-0.576486) | 0.112971 / 0.296338 (-0.183367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433293 / 0.215209 (0.218084) | 4.340286 / 2.077655 (2.262631) | 2.055857 / 1.504120 (0.551737) | 
1.854451 / 1.541195 (0.313256) | 1.912752 / 1.468490 (0.444261) | 0.700076 / 4.584777 (-3.884701) | 3.361542 / 3.745712 (-0.384170) | 2.760204 / 5.269862 (-2.509658) | 1.477395 / 4.565676 (-3.088282) | 0.082868 / 0.424275 (-0.341407) | 0.012479 / 0.007607 (0.004872) | 0.532749 / 0.226044 (0.306704) | 5.323701 / 2.268929 (3.054772) | 2.509524 / 55.444624 (-52.935100) | 2.168668 / 6.876477 (-4.707809) | 2.259112 / 2.142072 (0.117040) | 0.806686 / 4.805227 (-3.998542) | 0.154620 / 6.500664 (-6.346044) | 0.068348 / 0.075469 (-0.007121) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316512 / 1.841788 (-0.525276) | 14.158143 / 8.074308 (6.083835) | 14.110643 / 10.191392 (3.919251) | 0.143760 / 0.680424 (-0.536664) | 0.016851 / 0.534201 (-0.517350) | 0.376594 / 0.579283 (-0.202689) | 0.386957 / 0.434364 (-0.047407) | 0.466185 / 0.540337 (-0.074152) | 0.550269 / 1.386936 (-0.836667) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e1af7b30c94ce77abd9de732f19198e197d900c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009457 / 0.011353 (-0.001896) | 0.006453 / 0.011008 (-0.004555) | 0.136392 / 0.038508 (0.097884) | 0.038378 / 0.023109 (0.015269) | 0.413171 / 0.275898 (0.137273) | 0.451605 / 0.323480 (0.128126) | 0.007123 / 0.007986 (-0.000863) | 0.006316 / 0.004328 (0.001987) | 0.103009 / 0.004250 (0.098758) | 0.049182 / 0.037052 (0.012130) | 0.398635 / 0.258489 (0.140146) | 0.463146 / 0.293841 (0.169305) | 0.056247 / 0.128546 (-0.072299) | 0.019589 / 0.075646 (-0.056058) | 0.475882 / 0.419271 (0.056610) | 0.094918 / 0.043533 (0.051385) | 0.416502 / 0.255139 (0.161363) | 0.447129 / 0.283200 (0.163929) | 0.133314 / 0.141683 (-0.008369) | 2.132888 / 1.452155 (0.680733) | 2.073383 / 1.492716 (0.580667) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273037 / 0.018006 (0.255030) | 0.625675 / 
0.000490 (0.625185) | 0.003449 / 0.000200 (0.003249) | 0.000185 / 0.000054 (0.000130) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031889 / 0.037411 (-0.005523) | 0.131673 / 0.014526 (0.117148) | 0.141575 / 0.176557 (-0.034982) | 0.214978 / 0.737135 (-0.522158) | 0.145586 / 0.296338 (-0.150752) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.711135 / 0.215209 (0.495926) | 7.162492 / 2.077655 (5.084837) | 2.906028 / 1.504120 (1.401908) | 2.488855 / 1.541195 (0.947660) | 2.574628 / 1.468490 (1.106138) | 1.587824 / 4.584777 (-2.996953) | 6.332962 / 3.745712 (2.587250) | 5.419578 / 5.269862 (0.149717) | 2.935413 / 4.565676 (-1.630263) | 0.169159 / 0.424275 (-0.255116) | 0.015358 / 0.007607 (0.007751) | 0.862036 / 0.226044 (0.635992) | 8.559256 / 2.268929 (6.290328) | 3.530756 / 55.444624 (-51.913868) | 2.626288 / 6.876477 (-4.250188) | 2.770063 / 2.142072 (0.627990) | 1.500116 / 4.805227 (-3.305112) | 0.265109 / 6.500664 (-6.235555) | 0.084944 / 0.075469 (0.009475) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631060 / 1.841788 (-0.210728) | 19.022827 / 8.074308 (10.948519) | 22.973632 / 10.191392 (12.782240) | 0.296265 / 0.680424 (-0.384158) | 0.032317 / 0.534201 (-0.501884) | 0.624171 / 0.579283 (0.044888) | 0.690643 / 0.434364 (0.256279) | 0.691206 / 0.540337 (0.150869) | 0.758855 / 1.386936 (-0.628081) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009441 / 0.011353 (-0.001912) | 0.006270 / 0.011008 (-0.004739) | 0.110284 / 0.038508 (0.071776) | 0.035952 / 0.023109 (0.012842) | 0.521894 / 0.275898 (0.245996) | 0.582624 / 0.323480 (0.259144) | 0.011400 / 0.007986 (0.003414) | 0.004677 / 0.004328 (0.000348) | 0.115721 / 0.004250 (0.111470) | 0.048521 / 0.037052 (0.011469) | 0.497142 / 0.258489 (0.238653) | 0.573733 / 0.293841 (0.279892) | 0.055788 / 0.128546 (-0.072759) | 0.020949 / 0.075646 (-0.054697) | 0.132968 / 0.419271 (-0.286303) | 0.063045 / 0.043533 (0.019512) | 0.537769 / 0.255139 (0.282630) | 0.527560 / 0.283200 (0.244361) | 0.123756 / 0.141683 (-0.017927) | 1.994111 / 1.452155 (0.541956) | 2.104623 / 1.492716 (0.611907) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279057 / 0.018006 (0.261051) | 0.537342 / 0.000490 (0.536852) | 0.007782 / 0.000200 (0.007582) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032018 / 0.037411 (-0.005394) | 0.133456 / 0.014526 (0.118930) | 0.142039 / 0.176557 (-0.034517) | 0.213769 / 0.737135 (-0.523366) | 0.143811 / 0.296338 (-0.152527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.680142 / 0.215209 (0.464933) | 6.450439 / 2.077655 (4.372784) | 2.820724 / 1.504120 (1.316604) | 2.520407 / 1.541195 (0.979212) | 2.568972 / 1.468490 (1.100482) | 1.250584 / 4.584777 (-3.334193) | 6.108222 / 3.745712 (2.362509) | 3.065965 / 5.269862 (-2.203897) | 2.108675 / 4.565676 (-2.457002) | 0.167870 / 0.424275 (-0.256405) | 0.015127 / 0.007607 (0.007520) | 0.849645 / 0.226044 (0.623600) | 8.508727 / 2.268929 (6.239799) | 3.707897 / 55.444624 (-51.736727) | 3.009279 / 6.876477 (-3.867198) | 3.067179 / 2.142072 (0.925106) | 1.516370 / 4.805227 (-3.288858) | 0.264845 / 6.500664 (-6.235819) | 0.095137 / 0.075469 (0.019668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.826306 / 1.841788 (-0.015481) | 20.119641 / 8.074308 (12.045333) | 21.532158 / 10.191392 (11.340766) | 0.278631 / 0.680424 (-0.401793) | 0.029494 / 0.534201 (-0.504707) | 0.621887 / 0.579283 (0.042604) | 0.686864 / 0.434364 (0.252500) | 0.695412 / 0.540337 (0.155074) | 0.864829 / 1.386936 (-0.522108) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8e1af7b30c94ce77abd9de732f19198e197d900c \"CML watermark\")\n" ]
null
[]
Release: 2.12.0
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5803/timeline
null
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5803.diff", "html_url": "https://github.com/huggingface/datasets/pull/5803", "merged_at": "2023-04-28T09:54:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/5803.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5803" }
1,688,256,290
https://api.github.com/repos/huggingface/datasets/issues/5803/comments
PR_kwDODunzps5PXtte
null
5,803
https://api.github.com/repos/huggingface/datasets/issues/5803/events
true
closed
2023-04-27T09:51:36Z
null
https://api.github.com/repos/huggingface/datasets/issues/5802
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5802/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5802/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5802
[]
false
2023-04-27T14:59:47Z
2023-04-27T14:51:40Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007818 / 0.011353 (-0.003535) | 0.005456 / 0.011008 (-0.005552) | 0.114685 / 0.038508 (0.076177) | 0.038398 / 0.023109 (0.015289) | 0.351289 / 0.275898 (0.075391) | 0.389170 / 0.323480 (0.065690) | 0.006213 / 0.007986 (-0.001773) | 0.005796 / 0.004328 (0.001467) | 0.085315 / 0.004250 (0.081065) | 0.049251 / 0.037052 (0.012198) | 0.368119 / 0.258489 (0.109630) | 0.394725 / 0.293841 (0.100884) | 0.040390 / 0.128546 (-0.088157) | 0.014076 / 0.075646 (-0.061570) | 0.393771 / 0.419271 (-0.025500) | 0.058929 / 0.043533 (0.015397) | 0.349526 / 0.255139 (0.094387) | 0.378409 / 0.283200 (0.095210) | 0.114354 / 0.141683 (-0.027329) | 1.749244 / 1.452155 (0.297089) | 1.847946 / 1.492716 (0.355229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241648 / 0.018006 (0.223641) | 0.468419 / 0.000490 (0.467929) | 0.004311 / 0.000200 (0.004111) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029978 / 0.037411 (-0.007433) | 0.121832 / 0.014526 (0.107306) | 0.133516 / 0.176557 (-0.043041) | 0.199174 / 0.737135 (-0.537961) | 0.138181 / 0.296338 (-0.158158) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478346 / 0.215209 (0.263137) | 4.723967 / 2.077655 (2.646312) | 2.107724 
/ 1.504120 (0.603604) | 1.874810 / 1.541195 (0.333615) | 1.911568 / 1.468490 (0.443078) | 0.800966 / 4.584777 (-3.783811) | 4.399032 / 3.745712 (0.653320) | 2.346160 / 5.269862 (-2.923702) | 1.506673 / 4.565676 (-3.059004) | 0.099119 / 0.424275 (-0.325156) | 0.014055 / 0.007607 (0.006448) | 0.582419 / 0.226044 (0.356375) | 5.789147 / 2.268929 (3.520218) | 2.632443 / 55.444624 (-52.812182) | 2.217630 / 6.876477 (-4.658846) | 2.337709 / 2.142072 (0.195637) | 0.995345 / 4.805227 (-3.809882) | 0.200040 / 6.500664 (-6.300624) | 0.076855 / 0.075469 (0.001386) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.386104 / 1.841788 (-0.455683) | 17.109772 / 8.074308 (9.035464) | 16.147612 / 10.191392 (5.956220) | 0.162846 / 0.680424 (-0.517577) | 0.020692 / 0.534201 (-0.513509) | 0.495752 / 0.579283 (-0.083531) | 0.475715 / 0.434364 (0.041351) | 0.619826 / 0.540337 (0.079488) | 0.720745 / 1.386936 (-0.666191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008255 / 0.011353 (-0.003098) | 0.006118 / 0.011008 (-0.004890) | 0.088004 / 0.038508 (0.049496) | 0.039225 / 0.023109 (0.016116) | 0.399290 / 0.275898 (0.123392) | 0.432272 / 0.323480 (0.108792) | 0.007382 / 0.007986 (-0.000603) | 0.004576 / 0.004328 (0.000248) | 0.086511 / 0.004250 (0.082260) | 0.050472 / 0.037052 (0.013420) | 0.404160 / 0.258489 (0.145671) | 0.445356 / 0.293841 (0.151515) | 0.041549 / 0.128546 (-0.086997) | 0.014148 / 0.075646 (-0.061498) | 0.101697 / 0.419271 (-0.317574) | 0.057474 / 0.043533 (0.013941) | 0.395093 / 0.255139 (0.139954) | 0.418613 / 0.283200 (0.135414) | 0.123217 / 0.141683 (-0.018466) | 1.726146 / 1.452155 (0.273991) | 1.852746 / 1.492716 (0.360029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256876 / 0.018006 (0.238870) | 0.476336 / 0.000490 (0.475846) | 0.000465 / 0.000200 (0.000265) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034304 / 0.037411 (-0.003107) | 0.132617 / 0.014526 (0.118091) | 0.141712 / 0.176557 (-0.034845) | 0.198101 / 0.737135 (-0.539034) | 0.150877 / 0.296338 (-0.145461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.504717 / 0.215209 (0.289508) | 5.035060 / 2.077655 (2.957405) | 2.494812 / 1.504120 (0.990692) | 2.306601 / 1.541195 (0.765406) | 2.481860 / 1.468490 (1.013370) | 0.826041 / 4.584777 (-3.758736) | 4.414748 / 3.745712 (0.669036) | 2.417899 / 5.269862 (-2.851963) | 1.574548 / 4.565676 (-2.991128) | 0.101712 / 0.424275 (-0.322563) | 0.014388 / 0.007607 (0.006781) | 0.616674 / 0.226044 (0.390630) | 6.180382 / 2.268929 (3.911453) | 2.969110 / 55.444624 (-52.475514) | 2.574383 / 6.876477 (-4.302094) | 2.711008 / 2.142072 (0.568935) | 0.997679 / 4.805227 (-3.807548) | 0.201241 / 6.500664 (-6.299423) | 0.076132 / 0.075469 (0.000663) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.542704 / 1.841788 (-0.299084) | 17.610700 / 8.074308 (9.536392) | 16.152973 / 10.191392 (5.961581) | 0.166040 / 0.680424 (-0.514384) | 0.020286 / 0.534201 (-0.513915) | 0.506724 / 0.579283 (-0.072559) | 0.484348 / 0.434364 (0.049984) | 0.606524 / 0.540337 (0.066187) | 0.734997 / 1.386936 (-0.651939) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#a200ec9126a0879f3d38d4e9e3787633a23af42e \"CML watermark\")\n" ]
null
[]
Validate non-empty data_files
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5802/timeline
This PR adds validation of `data_files`, so that they are either a non-empty str, list, or dict, or `None` (default). See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327
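A minimal sketch of the kind of check this PR describes, to make the validation concrete; the helper name and error message are illustrative assumptions, not the actual `datasets` code:

```python
from typing import Dict, List, Optional, Union

def check_non_empty_data_files(data_files: Optional[Union[str, List[str], Dict[str, str]]]) -> None:
    # `None` keeps the default behavior; any other value must be non-empty.
    if data_files is not None and not data_files:
        raise ValueError(
            f"Unsupported empty data_files: {data_files!r}. It should be either non-empty or None (default)."
        )

check_non_empty_data_files(None)           # ok: default
check_non_empty_data_files(["train.csv"])  # ok: non-empty list
# check_non_empty_data_files([])           # would raise ValueError
```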
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5802.diff", "html_url": "https://github.com/huggingface/datasets/pull/5802", "merged_at": "2023-04-27T14:51:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/5802.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5802" }
1,686,509,799
https://api.github.com/repos/huggingface/datasets/issues/5802/comments
PR_kwDODunzps5PR199
null
5,802
https://api.github.com/repos/huggingface/datasets/issues/5802/events
true
closed
2023-04-27T08:13:30Z
null
https://api.github.com/repos/huggingface/datasets/issues/5800
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5800/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5800/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5800
[]
false
2023-04-27T09:33:05Z
2023-04-27T09:30:16Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
null
[]
Change downloaded file permission based on umask
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5800/timeline
This PR changes the permissions of files downloaded to the cache, so that the umask is taken into account. Related to: - #2157 Fix #5799. CC: @stas00
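For context, a common way to implement this is to read the process umask (there is no read-only getter, so it is set and immediately restored) and chmod the finished download accordingly. A hedged sketch of the pattern, not the exact `datasets` implementation:

```python
import os

def apply_umask_to_file(path: str) -> None:
    # os.umask has no getter: set a dummy value, capture the old mask,
    # and restore it right away.
    umask = os.umask(0o666)
    os.umask(umask)
    # Relax the tempfile-style 0o600 mode to "0o666 minus umask".
    os.chmod(path, 0o666 & ~umask)
```

With a typical umask of 0o022 this yields 0o644 (`-rw-r--r--`) instead of the 0o600 (`-rw-------`) reported in #5799.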
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5800.diff", "html_url": "https://github.com/huggingface/datasets/pull/5800", "merged_at": "2023-04-27T09:30:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/5800.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5800" }
1,686,348,096
https://api.github.com/repos/huggingface/datasets/issues/5800/comments
PR_kwDODunzps5PRTRh
null
5,800
https://api.github.com/repos/huggingface/datasets/issues/5800/events
true
closed
2023-04-27T08:06:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/5799
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5799/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5799/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/issues/5799
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-27T09:30:17Z
2023-04-27T09:30:17Z
null
[]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
Files downloaded to cache do not respect umask
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5799/timeline
As reported by @stas00, files downloaded to the cache do not respect umask: ```bash $ ls -l /path/to/cache/datasets/downloads/ -rw------- 1 username username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6 ``` Related to: - #2065
https://api.github.com/repos/huggingface/datasets
null
1,686,334,572
https://api.github.com/repos/huggingface/datasets/issues/5799/comments
I_kwDODunzps5kg2xs
null
5,799
https://api.github.com/repos/huggingface/datasets/issues/5799/events
false
open
2023-04-27T00:16:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/5798
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5798/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5798/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4", "events_url": "https://api.github.com/users/es94129/events{/privacy}", "followers_url": "https://api.github.com/users/es94129/followers", "following_url": "https://api.github.com/users/es94129/following{/other_user}", "gists_url": "https://api.github.com/users/es94129/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/es94129", "id": 12763339, "login": "es94129", "node_id": "MDQ6VXNlcjEyNzYzMzM5", "organizations_url": "https://api.github.com/users/es94129/orgs", "received_events_url": "https://api.github.com/users/es94129/received_events", "repos_url": "https://api.github.com/users/es94129/repos", "site_admin": false, "starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/es94129/subscriptions", "type": "User", "url": "https://api.github.com/users/es94129" }
https://github.com/huggingface/datasets/issues/5798
[]
false
2023-05-25T14:11:41Z
null
null
[ "Hi ! We're using process pools for parallelism right now. I was wondering if there's a package that implements the same API as a process pool but runs with Spark under the hood ? That or something similar would be cool because users could use whatever distributed framework they want this way.\r\n\r\nFeel free to ping us when you'd like to open PRs for this kind of things, so that we can discuss this before you start working on it ^^", "Hi, thanks for taking a look and providing your input! I don't know of such packages, and even it exists, I don't think with the process pool API it's possible to run Spark as backend properly; otherwise I understand a unified API would be preferable.\r\n\r\nThe process pool API requires splitting the workload to a fixed number parts for multiprocessing; meanwhile distributed framework such as Spark has sophisticated scheduler to distribute the workload to the processes on multiple machines in a cluster, so the way of splitting things for `multiprocessing.pool` would not suit / be as flexible as directly calling the `sparkContext.parallelize` API.\r\n\r\nI think this could be a good addition to scale the `datasets` implementation to distributed workers, and from my benchmark results so far it looks promising compared with multiprocessing.", "I see ! I think we only need an equivalent of `pool.map`. We use it to run download and conversion of data files on disk. That would require less changes in the internal code - and therefore less tests to write ;)\r\n\r\nWe also use `pool.apply_async` in some places with a `Queue` to get progress updates of the running jobs. I'm mentioning this in case there's a way to get a python generator from a running spark job ? This is less important though", "For Spark, `rdd.map` (where `rdd` can be created by `sparkContext.parallelize`) is the most similar as `pool.map`, but it requires creating a Spark RDD first that is used for distributing the `iterable` and the actual parallelization is managed by the Spark framework; `pool.map` takes the splits of `iterable` that are split into `num_proc` parts by the Python code. You can also check my PR #5807 in the `src/datasets/utils/py_utils.py` file to compare the differences of the APIs, it might make more sense than the the above description.\r\n\r\nGiven the different inputs and mechanisms of calling the `map` functions, this is why I think it's not that feasible to reuse most of the `multiprocessing` code.\r\n\r\nProgress bar updating might be challenging with Spark, I'll consider it as a followup work.", "Indeed I think the current use of multiprocessing.Pool in `map_nested` can be rewritten to work like `sparkContext.parallelize` - without splitting the iterable.\r\n\r\nMaybe from the user's perspective it's ok to let multiprocessing.Pool or spark distribute the load on their own, as long as it takes a list and runs jobs in parallel in the end :)\r\n", "From your feedback, seems to me there are two paths to consider now for supporting spark's `map` function in `map_nested` now:\r\n1. Keep the current `pool.map` implementation, and add an if statement for the spark's `map` code (which is what I did in my current PR) -- the code change is just a few lines in the `map_nested` function, and it has been tested by unit tests + manual testing on real Spark clusters; if you have other concerns I'd also be happy to address them.\r\n2. 
Rewrite the current `pool.map` implementation to remove splitting the iterable, and we will still need to add an if statement to use either\r\n```python\r\nwith Pool(...) as pool:\r\n mapped = pool.map(_single_map_nested, iterable)\r\n```\r\nor\r\n```python\r\nrdd = spark.sparkContext.parallelize(iterable)\r\nmapped = rdd.map(lambda obj: _single_map_nested((function, obj, types, None, True, None))).collect()\r\n```\r\nbecause there is no unified API that supports both `pool.map` and `rdd.map`. This can be more unified and flexible in the long run, but might require more work, and it will change the existing multiprocessing behavior, which is why I'm not leaning towards this option.\r\n\r\nAm I understanding correctly?", "Yup correct ! I think it's a nice path because it would be possible for users to define whatever parallel processing backend they want. I think we still need to discuss how that would look in the `datasets` API : how to specify it has to use the \"spark\" parallel backend ? And how to specify the spark session parameters (number of executors etc.) ? Maybe there is something more practical than `use_spark=True`\r\n\r\nI'll check with the team internally if they have some ideas, but feel free to share your thoughts here !", "Sure, please let me know if you have more updates regarding the API and implementation from the team.\r\n\r\nFor parameters we don't need to worry about setting them for Spark, because Spark will figure out the environment / number of worker nodes by itself, so it's preferable to just provide some parameter such as `use_spark` to use the RDD `map` function.", "Hi! I wanted to check in to see if there is any update from the team.\r\n\r\nA potential change of API I can think of is changing the argument to `distributed_backend=...`, which accepts `str`, such as `load_dataset(..., distributed_backend=\"spark\")`.\r\n\r\nImplementation-wise, we can add a class / function to abstract away the details of using multiprocessing vs. spark vs. other parallel processing frameworks in `map_nested` and `_prepare_split`.", "I found this quite interesting: https://github.com/joblib/joblib-spark with this syntax:\r\n\r\n```python\r\nwith parallel_backend('spark', n_jobs=3):\r\n ...\r\n```\r\n\r\ncc @lu-wang-dl who might know better", "Joblib-spark provides a Spark backend for joblib. We can implement a general parallel backend like\r\n```\r\nwith parallel_backend(\"<parallel-backend>\", n_jobs=..):\r\n```\r\n\r\nIt can support multiprocessing, spark, ray, etc. https://joblib.readthedocs.io/en/latest/parallel.html#joblib.parallel_backend", "Thank you @lhoestq for finding this repo. I validated that it can distribute downloading jobs with Spark to arbitrary cluster worker nodes evenly with `n_jobs=-1`.\r\n\r\nFor the API, I think it makes sense to define it as\r\n```python\r\nload_dataset(..., parallel_backend=<str>)\r\n```\r\nwhere `parallel_backend` can be `spark`, `multiprocessing`, and potentially other supported joblib backends including `ray` and `dask`.\r\n\r\nImplementation-wise, do you think it is better to just use `joblib` for the `spark` backend in `map_nested`, or also migrate the `multiprocessing.Pool` code to use `joblib`?", "Hello @lhoestq, I wanted to follow up on my previous comment with some prototyping code that demonstrates what `map_nested` would look like if we unify `multiprocessing` and `spark` with `joblib`. 
The snippet hasn't hashed out the details such as dealing with `tqdm` yet.\r\n\r\nIn terms of API, the way of using multiprocessing is still the same; for Spark, the user who sets `parallel_backend='spark'` can reuse the `num_proc` argument to pass in the number of executors, or preferably, just set `num_proc=-1` and joblib is able to decide it (I've validated it by running it on a Spark cluster).\r\n\r\n```python\r\ndef map_nested(\r\n # ... same args\r\n parallel_backend: Optional[str] = None, # proposed new argument\r\n):\r\n\r\n # ... same code\r\n\r\n # allow user to specify num_proc=-1, so that joblib will optimize it\r\n if (num_proc <= 1 and num_proc != -1) or len(iterable) < parallel_min_length:\r\n # same code\r\n mapped = [\r\n _single_map_nested((function, obj, types, None, True, None))\r\n for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n ]\r\n else:\r\n if not parallel_backend:\r\n parallel_backend = 'loky' # 'loky' is joblib's own implementation of robust multiprocessing\r\n \r\n n_jobs = min(num_proc, len(iterable))\r\n\r\n if parallel_backend == 'spark':\r\n n_jobs = -1 # let joblib and the Spark backend decide the number of jobs\r\n from joblibspark import register_spark\r\n register_spark()\r\n\r\n # parallelized with the same API\r\n with joblib.parallel_backend(parallel_backend, n_jobs=n_jobs):\r\n mapped = joblib.Parallel()(\r\n joblib.delayed(_single_map_nested)((function, obj, types, None, True, None))\r\n for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n )\r\n \r\n # ... same code\r\n```\r\nWe can always use `joblib` for Spark and other distributed backends such as Ray if people want to support them later. It's worth noting that some distributed backends do not currently have `joblib` implementations.\r\n\r\nI would appreciate your thoughts on this proposed new API. We can also discuss the pros and cons of migrating the `multiprocessing` code to `joblib` later.", "Nice ! It should be quite easy to make the change then :)\r\n\r\nI think adding spark support can actually be less than 20 lines of code and would roughly require one line of code to change in map_nested:\r\n\r\nMaybe we can define a new `datasets.parallel` submodule that has the `parallel_backend()` context manager and a `parallel_map()` function that uses `Pool.map` by default and `joblib` otherwise.\r\n\r\n`joblib` would be an optional dependency, and `joblib-spark` as well.\r\n\r\nThen whenever someone wants to use Spark, they can do something like this (similar to scikit-learn parallel_backend):\r\n\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\"):\r\n ds = load_dataset(...)\r\n```\r\n\r\nWhat do you think ?", "Although, until we've switched all the steps in `load_dataset` to use `datasets.parallel`, I would require the user to explicitly say which step should use Spark. 
Maybe something like this, but I'm not sure yet:\r\n\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\", steps=[\"download\"]):\r\n ds = load_dataset(...)\r\n```\r\nfor now some steps can be NotImplemented:\r\n```python\r\nfrom datasets.parallel import parallel_backend\r\n\r\nwith parallel_backend(\"spark\", steps=[\"download\", \"prepare\"]):\r\n# NotImplementedError: the \"prepare\" step that converts the raw data files to Arrow is not compatible with the \"spark\" backend yet\r\n```\r\n\r\nThis way we can progressively roll out Spark support for the other data loading/processing steps without breaking changes between `datasets` versions", "Sounds good! I like the partial rollout idea.\r\nSo for example `map_nested` would call `parallel_map` under the hood if `num_proc != 1` or `parallel_backend` is specified, right?\r\nI would be happy to start a PR next week to explore this path.", "Awesome ! I think map_nested can call `parallel_map()` if num_proc > 1, and `parallel_map` can be responsible for using Pool.map by default or joblib." ]
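Pulling the thread above together, a self-contained sketch of the `parallel_map()` helper being discussed; the function name, signature, and `spark` handling are assumptions drawn from this conversation, not a released `datasets` API:

```python
from multiprocessing import Pool
from typing import Callable, List, Optional

import joblib

def parallel_map(function: Callable, iterable: List, num_proc: int, backend: Optional[str] = None) -> List:
    if backend is None:
        # Default path: plain multiprocessing, as map_nested does today.
        with Pool(num_proc) as pool:
            return pool.map(function, iterable)
    if backend == "spark":
        # Requires the optional joblib-spark dependency; registers the
        # "spark" backend with joblib so the context manager below finds it.
        from joblibspark import register_spark
        register_spark()
    with joblib.parallel_backend(backend, n_jobs=num_proc):
        return joblib.Parallel()(joblib.delayed(function)(obj) for obj in iterable)

# e.g. parallel_map(str.upper, ["a", "b"], num_proc=2)                    # Pool.map path
#      parallel_map(str.upper, ["a", "b"], num_proc=-1, backend="spark")  # joblib-spark path
```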
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Support parallelized downloading and processing in load_dataset with Spark
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5798/timeline
### Feature request When calling `load_dataset` for datasets that have multiple files, support using Spark to distribute the downloading and processing job to worker nodes when `cache_dir` is a cloud file system shared among nodes. ```python load_dataset(..., use_spark=True) ``` ### Motivation Further speed up `dl_manager.download` and `_prepare_split` by distributing the workloads to worker nodes. ### Your contribution I can submit a PR to support this.
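For illustration only (this is not the proposed implementation), a plain-PySpark version of the idea could look roughly like the sketch below; the URLs and the `/mnt/shared_cache` path are placeholders, and it assumes `cache_dir` sits on a filesystem mounted on every worker.

```python
# Illustrative sketch: fan downloads out to Spark executors that write into a
# shared cache directory (e.g. a mounted cloud filesystem). Paths and URLs
# are placeholders, not real endpoints.
import os
import urllib.request
from pyspark.sql import SparkSession

def download_to_cache(url: str, cache_dir: str = "/mnt/shared_cache") -> str:
    path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(path):  # skip files another worker already fetched
        urllib.request.urlretrieve(url, path)
    return path

spark = SparkSession.builder.getOrCreate()
urls = ["https://example.com/data/part-0.jsonl", "https://example.com/data/part-1.jsonl"]
local_paths = spark.sparkContext.parallelize(urls).map(download_to_cache).collect()
```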
https://api.github.com/repos/huggingface/datasets
null
1,685,904,526
https://api.github.com/repos/huggingface/datasets/issues/5798/comments
I_kwDODunzps5kfNyO
null
5,798
https://api.github.com/repos/huggingface/datasets/issues/5798/events
false
open
2023-04-26T18:19:04Z
null
https://api.github.com/repos/huggingface/datasets/issues/5797
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5797/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5797/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/34729065?v=4", "events_url": "https://api.github.com/users/haonan-li/events{/privacy}", "followers_url": "https://api.github.com/users/haonan-li/followers", "following_url": "https://api.github.com/users/haonan-li/following{/other_user}", "gists_url": "https://api.github.com/users/haonan-li/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/haonan-li", "id": 34729065, "login": "haonan-li", "node_id": "MDQ6VXNlcjM0NzI5MDY1", "organizations_url": "https://api.github.com/users/haonan-li/orgs", "received_events_url": "https://api.github.com/users/haonan-li/received_events", "repos_url": "https://api.github.com/users/haonan-li/repos", "site_admin": false, "starred_url": "https://api.github.com/users/haonan-li/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haonan-li/subscriptions", "type": "User", "url": "https://api.github.com/users/haonan-li" }
https://github.com/huggingface/datasets/issues/5797
[]
false
2023-04-27T11:56:58Z
null
null
[ "Hi @haonan-li , thank you for the report! It seems to be a bug on the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) site, there is even no such dataset as `mbzuai/bactrian-x` on the Hub. I opened and [issue](https://github.com/huggingface/huggingface_hub/issues/1453) there.", "I think `load_dataset(\"mbzuai/bactrian-x\")` shouldn't be loaded at all and raise an error but because of [this fallback](https://github.com/huggingface/datasets/blob/main/src/datasets/load.py#L1194) to packaged loaders when no other options are applicable, it loads the dataset with standard `json` loader instead of the custom loading script." ]
null
[]
load_dataset is case sensitive?
NONE
https://api.github.com/repos/huggingface/datasets/issues/5797/timeline
### Describe the bug Is the load_dataset() function case-sensitive? ### Steps to reproduce the bug The following two calls get totally different behavior. 1. load_dataset('mbzuai/bactrian-x','en') 2. load_dataset('MBZUAI/Bactrian-X','en') ### Expected behavior Compare 1 and 2. 1 will download all 52 subsets, shell output: ```Downloading and preparing dataset json/MBZUAI--bactrian-X to xxx``` 2 will only download a single subset, shell output: ```Downloading and preparing dataset bactrian-x/en to xxx``` ### Environment info Python 3.10.11 datasets Version: 2.11.0
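A possible workaround while the fallback bug is unfixed: resolve the canonical repo id via `huggingface_hub` before loading. This is only a sketch, under the assumption that the Hub API resolves the differently-cased id and returns the canonical one:

```python
# Sketch of a workaround (assumption: the Hub API resolves mixed-case ids and
# returns the canonical one). Requires huggingface_hub to be installed.
from huggingface_hub import HfApi
from datasets import load_dataset

canonical_id = HfApi().dataset_info("mbzuai/bactrian-x").id  # e.g. "MBZUAI/Bactrian-X"
ds = load_dataset(canonical_id, "en")
```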
https://api.github.com/repos/huggingface/datasets
null
1,685,501,199
https://api.github.com/repos/huggingface/datasets/issues/5797/comments
I_kwDODunzps5kdrUP
null
5,797
https://api.github.com/repos/huggingface/datasets/issues/5797/events
false
closed
2023-04-26T17:39:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/5796
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5796/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5796/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5796
[]
false
2023-04-27T16:41:50Z
2023-04-27T16:34:45Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010480 / 0.011353 (-0.000872) | 0.006743 / 0.011008 (-0.004265) | 0.126503 / 0.038508 (0.087995) | 0.036918 / 0.023109 (0.013808) | 0.387372 / 0.275898 (0.111474) | 0.456930 / 0.323480 (0.133450) | 0.008038 / 0.007986 (0.000052) | 0.005082 / 0.004328 (0.000753) | 0.093312 / 0.004250 (0.089062) | 0.065440 / 0.037052 (0.028387) | 0.378172 / 0.258489 (0.119683) | 0.430049 / 0.293841 (0.136208) | 0.054372 / 0.128546 (-0.074174) | 0.021875 / 0.075646 (-0.053772) | 0.441722 / 0.419271 (0.022450) | 0.063716 / 0.043533 (0.020183) | 0.375718 / 0.255139 (0.120579) | 0.413688 / 0.283200 (0.130488) | 0.122583 / 0.141683 (-0.019100) | 1.835992 / 1.452155 (0.383838) | 1.915862 / 1.492716 (0.423145) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275305 / 0.018006 (0.257299) | 0.617170 / 0.000490 (0.616680) | 0.006467 / 0.000200 (0.006267) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031057 / 0.037411 (-0.006354) | 0.135178 / 0.014526 (0.120653) | 0.139265 / 0.176557 (-0.037292) | 0.221597 / 0.737135 (-0.515538) | 0.147632 / 0.296338 (-0.148706) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.640621 / 0.215209 (0.425411) | 6.354359 / 2.077655 (4.276704) | 2.748945 / 
1.504120 (1.244825) | 2.396637 / 1.541195 (0.855442) | 2.395193 / 1.468490 (0.926703) | 1.209604 / 4.584777 (-3.375173) | 5.626901 / 3.745712 (1.881189) | 3.300941 / 5.269862 (-1.968920) | 2.123598 / 4.565676 (-2.442078) | 0.144270 / 0.424275 (-0.280005) | 0.015114 / 0.007607 (0.007507) | 0.812352 / 0.226044 (0.586307) | 8.024250 / 2.268929 (5.755322) | 3.557589 / 55.444624 (-51.887036) | 2.840632 / 6.876477 (-4.035845) | 3.152319 / 2.142072 (1.010246) | 1.447232 / 4.805227 (-3.357995) | 0.251740 / 6.500664 (-6.248924) | 0.083725 / 0.075469 (0.008256) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.568032 / 1.841788 (-0.273755) | 18.463860 / 8.074308 (10.389552) | 21.217395 / 10.191392 (11.026003) | 0.228457 / 0.680424 (-0.451967) | 0.031398 / 0.534201 (-0.502803) | 0.547627 / 0.579283 (-0.031656) | 0.642921 / 0.434364 (0.208557) | 0.687857 / 0.540337 (0.147520) | 0.800940 / 1.386936 (-0.585996) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009933 / 0.011353 (-0.001420) | 0.006065 / 0.011008 (-0.004943) | 0.102556 / 0.038508 (0.064048) | 0.034646 / 0.023109 (0.011537) | 0.437951 / 0.275898 (0.162053) | 0.482439 / 0.323480 (0.158959) | 0.007715 / 0.007986 (-0.000271) | 0.007426 / 0.004328 (0.003098) | 0.096427 / 0.004250 (0.092177) | 0.052983 / 0.037052 (0.015930) | 0.464533 / 0.258489 (0.206044) | 0.484848 / 0.293841 (0.191007) | 0.050415 / 0.128546 (-0.078131) | 0.021001 / 0.075646 (-0.054645) | 0.121214 / 0.419271 (-0.298058) | 0.061658 / 0.043533 (0.018125) | 0.431898 / 0.255139 (0.176759) | 0.482106 / 0.283200 (0.198907) | 0.128524 / 0.141683 (-0.013159) | 1.775714 / 1.452155 (0.323559) | 1.904738 / 1.492716 (0.412021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287641 / 0.018006 (0.269635) | 0.600667 / 0.000490 (0.600178) | 0.005097 / 0.000200 (0.004897) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032836 / 0.037411 (-0.004575) | 0.133114 / 0.014526 (0.118588) | 0.150874 / 0.176557 (-0.025683) | 0.217069 / 0.737135 (-0.520066) | 0.160387 / 0.296338 (-0.135951) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668444 / 0.215209 (0.453235) | 6.240015 / 2.077655 (4.162360) | 2.808661 / 1.504120 (1.304542) | 2.336550 / 1.541195 (0.795356) | 2.538973 / 1.468490 (1.070483) | 1.189292 / 4.584777 (-3.395485) | 5.781028 / 3.745712 (2.035315) | 3.149895 / 5.269862 (-2.119967) | 2.130646 / 4.565676 (-2.435030) | 0.144944 / 0.424275 (-0.279331) | 0.014650 / 0.007607 (0.007043) | 0.792313 / 0.226044 (0.566269) | 7.933108 / 2.268929 (5.664180) | 3.527527 / 55.444624 (-51.917098) | 2.864271 / 6.876477 (-4.012205) | 3.098330 / 2.142072 (0.956258) | 1.421208 / 4.805227 (-3.384019) | 0.255638 / 6.500664 (-6.245026) | 0.086971 / 0.075469 (0.011502) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585317 / 1.841788 (-0.256471) | 18.643133 / 8.074308 (10.568825) | 21.921256 / 10.191392 (11.729864) | 0.215493 / 0.680424 (-0.464931) | 0.028348 / 0.534201 (-0.505853) | 0.556925 / 0.579283 (-0.022358) | 0.631480 / 0.434364 (0.197116) | 0.654026 / 0.540337 (0.113689) | 0.799727 / 1.386936 (-0.587209) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#62520514b524b5904c7e4f0beddab1971212a96a \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006516 / 0.011353 (-0.004837) | 0.004500 / 0.011008 (-0.006509) | 0.097639 / 0.038508 (0.059131) | 0.028336 / 0.023109 (0.005227) | 0.377263 / 0.275898 (0.101365) | 0.409209 / 0.323480 (0.085729) | 0.004832 / 0.007986 (-0.003154) | 0.004629 / 0.004328 (0.000301) | 0.075046 / 0.004250 (0.070795) | 0.034080 / 0.037052 (-0.002972) | 0.377565 / 0.258489 (0.119076) | 0.419204 / 0.293841 (0.125363) | 0.030343 / 0.128546 (-0.098203) | 0.011465 / 0.075646 (-0.064182) | 0.322777 / 0.419271 (-0.096494) | 0.043774 / 0.043533 (0.000241) | 0.375808 / 0.255139 (0.120669) | 0.402665 / 0.283200 (0.119465) | 0.086811 / 0.141683 (-0.054872) | 1.518686 / 1.452155 (0.066531) | 1.540381 / 1.492716 (0.047664) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197730 / 0.018006 (0.179724) | 0.409285 / 0.000490 (0.408795) | 0.004739 / 0.000200 (0.004539) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022974 / 0.037411 (-0.014437) | 0.096843 / 0.014526 (0.082317) | 0.103241 / 0.176557 (-0.073316) | 0.163691 / 0.737135 (-0.573444) | 0.107905 / 0.296338 (-0.188433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449408 / 0.215209 (0.234199) | 4.501375 / 2.077655 (2.423720) | 2.181491 / 1.504120 (0.677371) | 1.986153 / 1.541195 (0.444958) | 2.024735 / 1.468490 (0.556245) | 0.695368 / 4.584777 (-3.889409) | 3.416912 / 3.745712 (-0.328800) | 1.893343 / 5.269862 (-3.376519) | 1.275535 / 4.565676 (-3.290142) | 0.082772 / 0.424275 (-0.341503) | 0.012365 / 0.007607 (0.004758) | 0.553859 / 0.226044 (0.327814) | 5.540014 / 2.268929 (3.271085) | 2.634298 / 55.444624 (-52.810326) | 2.286686 / 6.876477 (-4.589790) | 2.384402 / 2.142072 (0.242330) | 0.806413 / 4.805227 (-3.998814) | 0.151757 / 6.500664 (-6.348907) | 0.067155 / 0.075469 (-0.008314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198776 / 1.841788 (-0.643012) | 13.517434 / 8.074308 (5.443126) | 13.926300 / 10.191392 (3.734908) | 0.141887 / 0.680424 (-0.538537) | 0.016571 / 0.534201 (-0.517630) | 0.383179 / 0.579283 (-0.196104) | 0.395189 / 0.434364 (-0.039175) | 0.479635 / 0.540337 
(-0.060702) | 0.570576 / 1.386936 (-0.816360) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006691 / 0.011353 (-0.004662) | 0.004634 / 0.011008 (-0.006375) | 0.077087 / 0.038508 (0.038579) | 0.028281 / 0.023109 (0.005172) | 0.340108 / 0.275898 (0.064210) | 0.370611 / 0.323480 (0.047131) | 0.004997 / 0.007986 (-0.002988) | 0.003336 / 0.004328 (-0.000992) | 0.074814 / 0.004250 (0.070563) | 0.039001 / 0.037052 (0.001948) | 0.344225 / 0.258489 (0.085736) | 0.380621 / 0.293841 (0.086780) | 0.030858 / 0.128546 (-0.097689) | 0.011623 / 0.075646 (-0.064023) | 0.085016 / 0.419271 (-0.334256) | 0.042378 / 0.043533 (-0.001155) | 0.341428 / 0.255139 (0.086289) | 0.364823 / 0.283200 (0.081624) | 0.096695 / 0.141683 (-0.044988) | 1.527683 / 1.452155 (0.075528) | 1.585361 / 1.492716 (0.092645) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.184280 / 0.018006 (0.166274) | 0.397845 / 0.000490 (0.397355) | 0.004415 / 0.000200 (0.004215) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024296 / 0.037411 (-0.013115) | 0.101053 / 0.014526 (0.086527) | 0.108968 / 0.176557 (-0.067589) | 0.155732 / 0.737135 (-0.581403) | 0.112604 / 0.296338 (-0.183735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440819 / 0.215209 (0.225609) | 4.394017 / 2.077655 (2.316363) | 2.092456 / 1.504120 (0.588336) | 1.880186 / 1.541195 (0.338991) | 1.918035 
/ 1.468490 (0.449545) | 0.698059 / 4.584777 (-3.886718) | 3.422598 / 3.745712 (-0.323114) | 1.860465 / 5.269862 (-3.409396) | 1.157788 / 4.565676 (-3.407889) | 0.083566 / 0.424275 (-0.340709) | 0.012440 / 0.007607 (0.004832) | 0.549526 / 0.226044 (0.323481) | 5.500623 / 2.268929 (3.231694) | 2.546980 / 55.444624 (-52.897644) | 2.199527 / 6.876477 (-4.676949) | 2.297276 / 2.142072 (0.155203) | 0.801580 / 4.805227 (-4.003648) | 0.151842 / 6.500664 (-6.348822) | 0.067165 / 0.075469 (-0.008305) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.329097 / 1.841788 (-0.512691) | 13.830354 / 8.074308 (5.756046) | 14.155250 / 10.191392 (3.963858) | 0.144517 / 0.680424 (-0.535907) | 0.016738 / 0.534201 (-0.517463) | 0.379337 / 0.579283 (-0.199946) | 0.391382 / 0.434364 (-0.042982) | 0.459153 / 0.540337 (-0.081184) | 0.547287 / 1.386936 (-0.839649) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2efb0289c887ec60d54e0715cd85c111cb45f9ee \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007176 / 0.011353 (-0.004177) | 0.005125 / 0.011008 (-0.005883) | 0.096060 / 0.038508 (0.057552) | 0.033262 / 0.023109 (0.010152) | 0.311461 / 0.275898 (0.035563) | 0.340673 / 0.323480 (0.017193) | 0.005700 / 0.007986 (-0.002286) | 0.005223 / 0.004328 (0.000894) | 0.072812 / 0.004250 (0.068561) | 0.042078 / 0.037052 (0.005025) | 0.320042 / 0.258489 (0.061553) | 0.346539 / 0.293841 (0.052698) | 0.035284 / 0.128546 (-0.093262) | 0.012021 / 0.075646 (-0.063625) | 0.331555 / 0.419271 (-0.087717) | 0.051058 / 0.043533 (0.007525) | 0.303001 / 0.255139 (0.047862) | 0.328431 / 0.283200 (0.045231) | 0.100954 / 0.141683 (-0.040729) | 1.407445 / 1.452155 (-0.044710) | 1.512826 / 1.492716 (0.020110) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216442 / 0.018006 (0.198436) | 0.446298 / 0.000490 (0.445809) | 0.004701 / 
0.000200 (0.004501) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028088 / 0.037411 (-0.009324) | 0.108669 / 0.014526 (0.094144) | 0.119597 / 0.176557 (-0.056960) | 0.178249 / 0.737135 (-0.558886) | 0.123914 / 0.296338 (-0.172424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.413437 / 0.215209 (0.198228) | 4.136602 / 2.077655 (2.058947) | 1.875872 / 1.504120 (0.371752) | 1.680783 / 1.541195 (0.139588) | 1.757059 / 1.468490 (0.288569) | 0.711080 / 4.584777 (-3.873697) | 3.791701 / 3.745712 (0.045989) | 2.111612 / 5.269862 (-3.158250) | 1.351204 / 4.565676 (-3.214473) | 0.086477 / 0.424275 (-0.337798) | 0.012359 / 0.007607 (0.004752) | 0.504984 / 0.226044 (0.278940) | 5.040456 / 2.268929 (2.771527) | 2.266946 / 55.444624 (-53.177679) | 1.957827 / 6.876477 (-4.918650) | 2.120490 / 2.142072 (-0.021583) | 0.856148 / 4.805227 (-3.949079) | 0.172414 / 6.500664 (-6.328250) | 0.066833 / 0.075469 (-0.008636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198163 / 1.841788 (-0.643625) | 14.944930 / 8.074308 (6.870622) | 14.317196 / 10.191392 (4.125804) | 0.166104 / 0.680424 (-0.514320) | 0.017443 / 0.534201 (-0.516758) | 0.423025 / 0.579283 (-0.156258) | 0.437476 / 0.434364 (0.003112) | 0.500156 / 0.540337 (-0.040181) | 0.606226 / 1.386936 (-0.780710) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007417 / 0.011353 (-0.003936) | 0.005143 / 0.011008 (-0.005865) | 0.076401 / 0.038508 (0.037893) | 0.034818 / 0.023109 (0.011709) | 0.339633 / 0.275898 (0.063735) | 0.373839 / 0.323480 (0.050359) | 0.006004 / 0.007986 (-0.001982) | 0.005403 / 0.004328 (0.001075) | 0.074150 / 0.004250 (0.069899) | 0.050489 / 0.037052 (0.013436) | 0.343357 / 0.258489 (0.084868) | 0.377009 / 0.293841 (0.083168) | 0.035921 / 0.128546 (-0.092625) | 0.012197 / 0.075646 (-0.063449) | 0.087992 / 0.419271 (-0.331279) | 0.049452 / 0.043533 (0.005919) | 0.340495 / 0.255139 (0.085356) | 0.360277 / 0.283200 (0.077077) | 0.111114 / 0.141683 (-0.030569) | 1.463888 / 1.452155 (0.011734) | 1.548320 / 1.492716 (0.055604) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228437 / 0.018006 (0.210431) | 0.445120 / 0.000490 (0.444631) | 0.000392 / 0.000200 (0.000192) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029965 / 0.037411 (-0.007446) | 0.113484 / 0.014526 (0.098958) | 0.125249 / 0.176557 (-0.051308) | 0.177201 / 0.737135 (-0.559934) | 0.128750 / 0.296338 (-0.167589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420089 / 0.215209 (0.204880) | 4.195772 / 2.077655 (2.118117) | 2.021539 / 1.504120 (0.517419) | 1.825118 / 1.541195 (0.283924) | 1.904090 / 1.468490 (0.435600) | 0.716276 / 4.584777 (-3.868501) | 3.742257 / 3.745712 (-0.003455) | 3.368880 / 5.269862 (-1.900981) | 1.728285 / 4.565676 (-2.837392) | 0.087656 / 0.424275 (-0.336619) | 0.012263 / 0.007607 (0.004656) | 0.524321 / 0.226044 (0.298277) | 5.217610 / 2.268929 (2.948682) | 2.474670 / 55.444624 (-52.969955) | 2.135452 / 6.876477 (-4.741025) | 2.292578 / 2.142072 (0.150505) | 0.852109 / 4.805227 (-3.953119) | 0.172031 / 6.500664 (-6.328633) | 0.065230 / 0.075469 (-0.010240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.260494 / 1.841788 (-0.581293) | 15.019167 / 8.074308 (6.944859) | 14.647586 / 10.191392 (4.456193) | 0.170578 / 0.680424 (-0.509846) | 0.017619 / 0.534201 (-0.516582) | 0.423116 / 0.579283 (-0.156167) | 0.426680 / 0.434364 (-0.007684) | 0.519563 / 0.540337 (-0.020775) | 0.619335 / 1.386936 (-0.767601) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e210dc20c19b5e6af05df9ca6e82984dfb42465f \"CML watermark\")\n" ]
null
[]
Spark docs
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5796/timeline
Added a "Use with Spark" doc page to document `Dataset.from_spark` following https://github.com/huggingface/datasets/pull/5701 cc @maddiedawson
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5796.diff", "html_url": "https://github.com/huggingface/datasets/pull/5796", "merged_at": "2023-04-27T16:34:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/5796.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5796" }
1,685,451,919
https://api.github.com/repos/huggingface/datasets/issues/5796/comments
PR_kwDODunzps5PORm-
null
5,796
https://api.github.com/repos/huggingface/datasets/issues/5796/events
true
closed
2023-04-26T17:09:32Z
null
https://api.github.com/repos/huggingface/datasets/issues/5795
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5795/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5795/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5795
[]
false
2023-04-26T17:49:03Z
2023-04-26T17:39:12Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010844 / 0.011353 (-0.000509) | 0.007329 / 0.011008 (-0.003680) | 0.133764 / 0.038508 (0.095256) | 0.040213 / 0.023109 (0.017103) | 0.413466 / 0.275898 (0.137568) | 0.452860 / 0.323480 (0.129380) | 0.008109 / 0.007986 (0.000123) | 0.005773 / 0.004328 (0.001444) | 0.109969 / 0.004250 (0.105718) | 0.053001 / 0.037052 (0.015949) | 0.416377 / 0.258489 (0.157888) | 0.477486 / 0.293841 (0.183645) | 0.056556 / 0.128546 (-0.071990) | 0.024322 / 0.075646 (-0.051324) | 0.437750 / 0.419271 (0.018479) | 0.087732 / 0.043533 (0.044199) | 0.421540 / 0.255139 (0.166401) | 0.429143 / 0.283200 (0.145944) | 0.144864 / 0.141683 (0.003181) | 1.882785 / 1.452155 (0.430631) | 1.980721 / 1.492716 (0.488005) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.285497 / 0.018006 (0.267491) | 0.601820 / 0.000490 (0.601331) | 0.005003 / 0.000200 (0.004804) | 0.000122 / 0.000054 (0.000067) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030673 / 0.037411 (-0.006739) | 0.126883 / 0.014526 (0.112357) | 0.137677 / 0.176557 (-0.038880) | 0.211504 / 0.737135 (-0.525632) | 0.144752 / 0.296338 (-0.151587) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665845 / 0.215209 (0.450636) | 6.369040 / 2.077655 (4.291385) | 2.708979 / 
1.504120 (1.204859) | 2.370842 / 1.541195 (0.829647) | 2.445987 / 1.468490 (0.977497) | 1.260806 / 4.584777 (-3.323971) | 5.979216 / 3.745712 (2.233504) | 3.334350 / 5.269862 (-1.935512) | 2.187298 / 4.565676 (-2.378379) | 0.155494 / 0.424275 (-0.268781) | 0.017351 / 0.007607 (0.009744) | 0.853626 / 0.226044 (0.627581) | 8.375001 / 2.268929 (6.106072) | 3.528312 / 55.444624 (-51.916313) | 2.890509 / 6.876477 (-3.985968) | 3.051016 / 2.142072 (0.908944) | 1.529811 / 4.805227 (-3.275416) | 0.273883 / 6.500664 (-6.226781) | 0.086617 / 0.075469 (0.011148) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.648231 / 1.841788 (-0.193557) | 19.487109 / 8.074308 (11.412801) | 23.474621 / 10.191392 (13.283229) | 0.221392 / 0.680424 (-0.459032) | 0.028878 / 0.534201 (-0.505323) | 0.582302 / 0.579283 (0.003019) | 0.615059 / 0.434364 (0.180695) | 0.656082 / 0.540337 (0.115745) | 0.740544 / 1.386936 (-0.646392) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010687 / 0.011353 (-0.000665) | 0.007114 / 0.011008 (-0.003894) | 0.135426 / 0.038508 (0.096918) | 0.041027 / 0.023109 (0.017918) | 0.466441 / 0.275898 (0.190543) | 0.503545 / 0.323480 (0.180065) | 0.009418 / 0.007986 (0.001432) | 0.004976 / 0.004328 (0.000647) | 0.101342 / 0.004250 (0.097092) | 0.058289 / 0.037052 (0.021237) | 0.473715 / 0.258489 (0.215226) | 0.539556 / 0.293841 (0.245715) | 0.063138 / 0.128546 (-0.065408) | 0.020429 / 0.075646 (-0.055217) | 0.124179 / 0.419271 (-0.295093) | 0.066400 / 0.043533 (0.022867) | 0.450793 / 0.255139 (0.195654) | 0.494163 / 0.283200 (0.210964) | 0.131179 / 0.141683 (-0.010504) | 1.876396 / 1.452155 (0.424241) | 1.974148 / 1.492716 (0.481432) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313362 / 0.018006 (0.295356) | 0.602618 / 0.000490 (0.602129) | 0.008279 / 0.000200 (0.008079) | 0.000155 / 0.000054 (0.000101) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| 
metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037250 / 0.037411 (-0.000161) | 0.144151 / 0.014526 (0.129625) | 0.155733 / 0.176557 (-0.020824) | 0.214334 / 0.737135 (-0.522801) | 0.167124 / 0.296338 (-0.129214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.686471 / 0.215209 (0.471262) | 6.749174 / 2.077655 (4.671520) | 3.024941 / 1.504120 (1.520821) | 2.553363 / 1.541195 (1.012168) | 2.679107 / 1.468490 (1.210617) | 1.317212 / 4.584777 (-3.267565) | 5.917575 / 3.745712 (2.171862) | 3.412715 / 5.269862 (-1.857146) | 2.203478 / 4.565676 (-2.362198) | 0.150387 / 0.424275 (-0.273888) | 0.015977 / 0.007607 (0.008370) | 0.862999 / 0.226044 (0.636954) | 8.706459 / 2.268929 (6.437530) | 3.762648 / 55.444624 (-51.681977) | 2.992544 / 6.876477 (-3.883933) | 3.135796 / 2.142072 (0.993724) | 1.504140 / 4.805227 (-3.301088) | 0.268265 / 6.500664 (-6.232399) | 0.083297 / 0.075469 (0.007828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.690193 / 1.841788 (-0.151594) | 19.912854 / 8.074308 (11.838546) | 23.568217 / 10.191392 (13.376825) | 0.285125 / 0.680424 (-0.395299) | 0.030593 / 0.534201 (-0.503608) | 0.565305 / 0.579283 (-0.013978) | 0.659283 / 0.434364 (0.224919) | 0.678864 / 0.540337 (0.138527) | 0.793634 / 1.386936 (-0.593302) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d0edbe3f3258b7e580d1b58c0eea6637b5e22b2 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011615 / 0.011353 (0.000262) | 0.006716 / 0.011008 (-0.004292) | 0.146868 / 0.038508 (0.108360) | 0.037621 / 0.023109 (0.014512) | 0.425563 / 0.275898 (0.149664) | 0.483217 / 0.323480 (0.159737) | 0.007830 / 0.007986 (-0.000156) | 0.005940 / 0.004328 (0.001612) | 0.100771 / 0.004250 (0.096521) | 0.063907 / 0.037052 (0.026854) | 0.422993 / 0.258489 (0.164503) | 0.496514 / 0.293841 (0.202673) | 0.056004 / 0.128546 (-0.072542) | 0.021441 / 0.075646 (-0.054206) | 0.453589 / 0.419271 (0.034317) | 0.067555 / 0.043533 (0.024022) | 0.442490 / 0.255139 (0.187351) | 0.503941 / 0.283200 (0.220742) | 0.134023 / 0.141683 (-0.007660) | 1.886329 / 1.452155 (0.434175) | 2.030867 / 1.492716 (0.538150) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288063 / 0.018006 (0.270057) | 0.627177 / 0.000490 (0.626687) | 0.006335 / 0.000200 (0.006135) | 0.000171 / 0.000054 (0.000116) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032424 / 0.037411 (-0.004987) | 0.132749 / 0.014526 (0.118223) | 0.144727 / 0.176557 (-0.031829) | 0.232577 / 0.737135 (-0.504558) | 0.157315 / 0.296338 (-0.139024) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.623058 / 0.215209 (0.407849) | 6.272447 / 2.077655 (4.194792) | 2.506778 / 1.504120 (1.002658) | 2.203094 / 1.541195 (0.661899) | 2.346972 / 1.468490 (0.878482) | 1.358498 / 4.584777 (-3.226279) | 5.879670 / 3.745712 (2.133958) | 5.818406 / 5.269862 (0.548545) | 3.231936 / 4.565676 (-1.333741) | 0.154013 / 0.424275 (-0.270263) | 0.021541 / 0.007607 (0.013934) | 0.823746 / 0.226044 (0.597702) | 8.140304 / 2.268929 (5.871375) | 3.366911 / 55.444624 (-52.077714) | 2.696856 / 6.876477 (-4.179621) | 2.845743 / 2.142072 (0.703671) | 1.522363 / 4.805227 (-3.282864) | 0.278938 / 6.500664 (-6.221726) | 0.085044 / 0.075469 (0.009575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.681348 / 1.841788 (-0.160440) | 19.686703 / 8.074308 (11.612395) | 22.995655 / 10.191392 (12.804263) | 0.218876 / 0.680424 (-0.461548) | 0.029334 / 0.534201 (-0.504867) | 0.560846 / 0.579283 (-0.018438) | 0.645210 / 0.434364 (0.210846) | 0.697842 / 0.540337 (0.157505) | 0.832875 / 1.386936 (-0.554061) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009509 / 0.011353 (-0.001844) | 0.006471 / 0.011008 (-0.004537) | 0.101477 / 0.038508 (0.062969) | 0.035281 / 0.023109 (0.012171) | 0.470032 / 0.275898 (0.194134) | 0.501475 / 0.323480 (0.177995) | 0.007641 / 0.007986 (-0.000344) | 0.006784 / 0.004328 (0.002455) | 0.096111 / 0.004250 (0.091861) | 0.055199 / 0.037052 (0.018146) | 0.470095 / 0.258489 (0.211606) | 0.530955 / 0.293841 (0.237114) | 0.056161 / 0.128546 (-0.072385) | 0.022055 / 0.075646 (-0.053591) | 0.121585 / 0.419271 (-0.297686) | 0.063736 / 0.043533 (0.020203) | 0.470771 / 0.255139 (0.215632) | 0.490546 / 0.283200 (0.207346) | 0.128825 / 0.141683 (-0.012858) | 1.898639 / 1.452155 (0.446484) | 2.052305 / 1.492716 (0.559589) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.322526 / 0.018006 (0.304520) | 0.628096 / 0.000490 (0.627607) | 0.006837 / 0.000200 (0.006637) | 0.000199 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033830 / 0.037411 (-0.003581) | 0.136217 / 0.014526 (0.121691) | 0.147006 / 0.176557 (-0.029551) | 0.203950 / 0.737135 (-0.533185) | 0.150327 / 0.296338 (-0.146011) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.654287 / 0.215209 (0.439078) | 6.430306 / 2.077655 (4.352651) | 2.881750 / 1.504120 (1.377630) | 2.489505 / 1.541195 (0.948310) | 2.543037 / 1.468490 (1.074547) | 1.226682 / 4.584777 
(-3.358094) | 5.902076 / 3.745712 (2.156364) | 3.335344 / 5.269862 (-1.934518) | 2.156738 / 4.565676 (-2.408939) | 0.151804 / 0.424275 (-0.272472) | 0.015238 / 0.007607 (0.007631) | 0.816364 / 0.226044 (0.590319) | 8.126367 / 2.268929 (5.857438) | 3.653222 / 55.444624 (-51.791402) | 2.886667 / 6.876477 (-3.989809) | 3.120852 / 2.142072 (0.978779) | 1.421423 / 4.805227 (-3.383804) | 0.264590 / 6.500664 (-6.236074) | 0.085716 / 0.075469 (0.010247) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.745258 / 1.841788 (-0.096530) | 19.379253 / 8.074308 (11.304945) | 23.827046 / 10.191392 (13.635654) | 0.267702 / 0.680424 (-0.412722) | 0.030253 / 0.534201 (-0.503948) | 0.542037 / 0.579283 (-0.037246) | 0.655946 / 0.434364 (0.221582) | 0.683525 / 0.540337 (0.143188) | 0.831333 / 1.386936 (-0.555603) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5b011a258329375aa4dc7b414bd4e7b6363c5357 \"CML watermark\")\n" ]
null
[]
Fix spark imports
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5795/timeline
null
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5795.diff", "html_url": "https://github.com/huggingface/datasets/pull/5795", "merged_at": "2023-04-26T17:39:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5795.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5795" }
1,685,414,505
https://api.github.com/repos/huggingface/datasets/issues/5795/comments
PR_kwDODunzps5POJo8
null
5,795
https://api.github.com/repos/huggingface/datasets/issues/5795/events
true
open
2023-04-26T14:55:23Z
null
https://api.github.com/repos/huggingface/datasets/issues/5794
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5794/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5794/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/issues/5794
[]
false
2023-04-26T14:55:23Z
null
null
[]
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
CI ZeroDivisionError
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5794/timeline
Sometimes when running our CI on Windows, we get a ZeroDivisionError: ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - ZeroDivisionError: float division by zero ``` See for example: - https://github.com/huggingface/datasets/actions/runs/4809358266/jobs/8560513110 - https://github.com/huggingface/datasets/actions/runs/4798359836/jobs/8536573688 ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ split = 'test', start_time = 1682516718.8236516, num_samples = 2, num_steps = 1 def speed_metrics(split, start_time, num_samples=None, num_steps=None): """ Measure and return speed performance metrics. This function requires a time snapshot `start_time` before the operation to be measured starts and this function should be run immediately after the operation to be measured has completed. Args: - split: name to prefix metric (like train, eval, test...) - start_time: operation start time - num_samples: number of samples processed """ runtime = time.time() - start_time result = {f"{split}_runtime": round(runtime, 4)} if num_samples is not None: > samples_per_second = num_samples / runtime E ZeroDivisionError: float division by zero C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\transformers\trainer_utils.py:354: ZeroDivisionError ```
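A zero-safe variant of the failing helper is sketched below. This is only an illustration of guarding the division (on Windows, `time.time()` has coarse resolution, so a fast operation can measure a runtime of exactly 0); it is not necessarily the fix that `transformers` adopted.

```python
import time


def speed_metrics(split, start_time, num_samples=None, num_steps=None):
    """Sketch of a zero-safe speed_metrics: skip throughput figures when the
    measured runtime is 0, as can happen on Windows where time.time() has
    roughly 16 ms resolution."""
    runtime = time.time() - start_time
    result = {f"{split}_runtime": round(runtime, 4)}
    if num_samples is not None and runtime > 0:
        result[f"{split}_samples_per_second"] = round(num_samples / runtime, 3)
    if num_steps is not None and runtime > 0:
        result[f"{split}_steps_per_second"] = round(num_steps / runtime, 3)
    return result
```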
https://api.github.com/repos/huggingface/datasets
null
1685196061
https://api.github.com/repos/huggingface/datasets/issues/5794/comments
I_kwDODunzps5kcg0d
null
5794
https://api.github.com/repos/huggingface/datasets/issues/5794/events
false
closed
2023-04-26T10:50:23Z
null
https://api.github.com/repos/huggingface/datasets/issues/5793
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5793/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5793/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4", "events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}", "followers_url": "https://api.github.com/users/jiangwangyi/followers", "following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}", "gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiangwangyi", "id": 39762734, "login": "jiangwangyi", "node_id": "MDQ6VXNlcjM5NzYyNzM0", "organizations_url": "https://api.github.com/users/jiangwangyi/orgs", "received_events_url": "https://api.github.com/users/jiangwangyi/received_events", "repos_url": "https://api.github.com/users/jiangwangyi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions", "type": "User", "url": "https://api.github.com/users/jiangwangyi" }
https://github.com/huggingface/datasets/issues/5793
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2023-06-13T15:57:06Z
2023-06-13T15:57:06Z
null
[ "Hi ! Thanks for reporting, I'm working on it ;)" ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
IterableDataset.with_format("torch") not working
NONE
https://api.github.com/repos/huggingface/datasets/issues/5793/timeline
### Describe the bug Calling the with_format("torch") method on an IterableDataset instance leaves the data format unchanged. ### Steps to reproduce the bug ```python from datasets import IterableDataset def gen(): for i in range(4): yield {"a": [i] * 4} dataset = IterableDataset.from_generator(gen).with_format("torch") next(iter(dataset)) ``` ### Expected behavior `{"a": torch.tensor([0, 0, 0, 0])}` is expected, but `{"a": [0, 0, 0, 0]}` is observed. ### Environment info ```bash platform==ubuntu 22.04.01 python==3.10.9 datasets==2.11.0 ```
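A minimal workaround sketch, assuming datasets 2.11 where `IterableDataset` ignores `with_format("torch")`: do the tensor conversion explicitly in `map` instead (the `gen` function is the reproducer above).

```python
import torch
from datasets import IterableDataset


def gen():
    for i in range(4):
        yield {"a": [i] * 4}


# Explicit conversion in map(), since with_format("torch") is a no-op here;
# map() on an IterableDataset is applied lazily to each yielded example.
dataset = IterableDataset.from_generator(gen).map(
    lambda example: {"a": torch.tensor(example["a"])}
)
print(next(iter(dataset)))  # {'a': tensor([0, 0, 0, 0])}
```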
https://api.github.com/repos/huggingface/datasets
null
1684777320
https://api.github.com/repos/huggingface/datasets/issues/5793/comments
I_kwDODunzps5ka6lo
null
5793
https://api.github.com/repos/huggingface/datasets/issues/5793/events
false
closed
2023-04-25T16:14:18Z
null
https://api.github.com/repos/huggingface/datasets/issues/5791
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5791/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5791/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/31293221?v=4", "events_url": "https://api.github.com/users/sebasmos/events{/privacy}", "followers_url": "https://api.github.com/users/sebasmos/followers", "following_url": "https://api.github.com/users/sebasmos/following{/other_user}", "gists_url": "https://api.github.com/users/sebasmos/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sebasmos", "id": 31293221, "login": "sebasmos", "node_id": "MDQ6VXNlcjMxMjkzMjIx", "organizations_url": "https://api.github.com/users/sebasmos/orgs", "received_events_url": "https://api.github.com/users/sebasmos/received_events", "repos_url": "https://api.github.com/users/sebasmos/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sebasmos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sebasmos/subscriptions", "type": "User", "url": "https://api.github.com/users/sebasmos" }
https://github.com/huggingface/datasets/issues/5791
[]
false
2024-01-15T16:40:33Z
2024-01-15T16:40:16Z
null
[ "The issue with multichannel TIFF images has already been reported in Pillow (https://github.com/python-pillow/Pillow/issues/1888). We can't do much about it on our side.\r\n\r\nStill, to avoid the error, you can bypass the default Pillow decoding and define a custom one as follows:\r\n```python\r\nimport tifffile # pip install tifffile\r\n\r\ndset = dset.cast_column(\"image\", datasets.Image(decode=False))\r\n\r\ndef decode_mutlichannel_tiff(batch):\r\n batch[\"image\"] = [tifffile.imread(image[\"path\"]) for image in batch[\"image\"]]\r\n return batch\r\n\r\ndset.set_transform(decode_mutlichannel_tiff)\r\n```\r\n\r\nRegarding the annotations, in which format are they? In the COCO format? I think this is a bit too specific to have a built-in loader for it.", "This snippet is awesome! I know I probably should have gotten deeper in to the docs to find cast_column and set_transform, but perhaps a link ushering folks to that documentation or even this thread somewhere in https://huggingface.co/docs/datasets/image_load would be helpful? Thanks again for the snippet", "We have a section on custom decoding [here](https://huggingface.co/docs/datasets/process#format-transform) (for the audio case though)", "Btw, we can close this issue as it should be addressed in Pillow rather than here. ", "For sure, if image based stuff becomes a priority I think guiding folks to an image decoder section would be really helpful, but im just one dev :) and I know priorities gotta be balanced so no worries. Thanks again for the snippet, agreed we can close" ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
TIFF/TIF support
NONE
https://api.github.com/repos/huggingface/datasets/issues/5791/timeline
### Feature request I currently have a dataset (with TIFF and JSON files) where I have to do this: `wget path_to_data/images.zip && unzip images.zip` `wget path_to_data/annotations.zip && unzip annotations.zip` Would a contribution that supports these types of files make sense? ### Motivation Instead of using `load_dataset`, I have to use wget, as these files are not supported: annotations as JSON and images as TIFF. Additionally, the PIL decoding in datasets does not read the image channels of TIFF files correctly; multichannel adaptation might be necessary as well (e.g., my data has more than 3 channels). ### Your contribution 1. Support TIFF images with multiple channels 2. Support JSON annotations
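Building on the `tifffile` snippet in the comments above, a hypothetical sketch of loading multichannel TIFFs from local paths while sidestepping PIL decoding entirely (the `paths` list is a placeholder, not part of the original report):

```python
import tifffile  # pip install tifffile
from datasets import Dataset

# Placeholder paths; substitute your own unzipped TIFF files.
paths = ["images/sample_0.tif", "images/sample_1.tif"]

dset = Dataset.from_dict({"path": paths})


def read_tiff(batch):
    # tifffile returns the raw array with all channels intact,
    # whereas PIL struggles with TIFFs that have more than a few channels.
    batch["image"] = [tifffile.imread(p) for p in batch["path"]]
    return batch


dset.set_transform(read_tiff)  # decoding happens lazily on access
```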
https://api.github.com/repos/huggingface/datasets
null
1683473943
https://api.github.com/repos/huggingface/datasets/issues/5791/comments
I_kwDODunzps5kV8YX
null
5791
https://api.github.com/repos/huggingface/datasets/issues/5791/events
false
closed
2023-04-25T13:57:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/5790
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5790/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5790/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5790
[]
false
2023-04-26T13:43:08Z
2023-04-26T13:35:47Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007852 / 0.011353 (-0.003500) | 0.005804 / 0.011008 (-0.005204) | 0.098268 / 0.038508 (0.059760) | 0.036440 / 0.023109 (0.013331) | 0.299952 / 0.275898 (0.024054) | 0.335590 / 0.323480 (0.012111) | 0.006332 / 0.007986 (-0.001653) | 0.004218 / 0.004328 (-0.000110) | 0.074733 / 0.004250 (0.070483) | 0.055252 / 0.037052 (0.018200) | 0.300854 / 0.258489 (0.042365) | 0.353442 / 0.293841 (0.059601) | 0.036447 / 0.128546 (-0.092099) | 0.012638 / 0.075646 (-0.063009) | 0.336680 / 0.419271 (-0.082591) | 0.052436 / 0.043533 (0.008903) | 0.292606 / 0.255139 (0.037467) | 0.319676 / 0.283200 (0.036476) | 0.111137 / 0.141683 (-0.030546) | 1.449569 / 1.452155 (-0.002586) | 1.558110 / 1.492716 (0.065394) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.306043 / 0.018006 (0.288037) | 0.563174 / 0.000490 (0.562684) | 0.032227 / 0.000200 (0.032027) | 0.000491 / 0.000054 (0.000436) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029874 / 0.037411 (-0.007537) | 0.109330 / 0.014526 (0.094805) | 0.122579 / 0.176557 (-0.053978) | 0.181398 / 0.737135 (-0.555737) | 0.127124 / 0.296338 (-0.169215) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417950 / 0.215209 (0.202741) | 4.163883 / 2.077655 (2.086228) | 
1.985209 / 1.504120 (0.481089) | 1.793660 / 1.541195 (0.252465) | 1.895193 / 1.468490 (0.426703) | 0.694331 / 4.584777 (-3.890446) | 3.820170 / 3.745712 (0.074458) | 2.180556 / 5.269862 (-3.089305) | 1.490671 / 4.565676 (-3.075006) | 0.086132 / 0.424275 (-0.338143) | 0.012289 / 0.007607 (0.004682) | 0.511182 / 0.226044 (0.285137) | 5.117855 / 2.268929 (2.848927) | 2.403914 / 55.444624 (-53.040710) | 2.071107 / 6.876477 (-4.805369) | 2.184108 / 2.142072 (0.042036) | 0.835028 / 4.805227 (-3.970199) | 0.167707 / 6.500664 (-6.332957) | 0.066724 / 0.075469 (-0.008746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203921 / 1.841788 (-0.637867) | 15.214676 / 8.074308 (7.140368) | 14.971337 / 10.191392 (4.779945) | 0.170225 / 0.680424 (-0.510199) | 0.017924 / 0.534201 (-0.516277) | 0.428532 / 0.579283 (-0.150751) | 0.449157 / 0.434364 (0.014793) | 0.507723 / 0.540337 (-0.032614) | 0.615331 / 1.386936 (-0.771605) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008172 / 0.011353 (-0.003181) | 0.005405 / 0.011008 (-0.005603) | 0.074684 / 0.038508 (0.036176) | 0.039133 / 0.023109 (0.016024) | 0.342598 / 0.275898 (0.066700) | 0.377752 / 0.323480 (0.054272) | 0.006655 / 0.007986 (-0.001331) | 0.005788 / 0.004328 (0.001459) | 0.074014 / 0.004250 (0.069763) | 0.056225 / 0.037052 (0.019173) | 0.342330 / 0.258489 (0.083841) | 0.381052 / 0.293841 (0.087211) | 0.036574 / 0.128546 (-0.091973) | 0.012472 / 0.075646 (-0.063174) | 0.087574 / 0.419271 (-0.331698) | 0.050178 / 0.043533 (0.006646) | 0.351116 / 0.255139 (0.095977) | 0.363772 / 0.283200 (0.080572) | 0.118313 / 0.141683 (-0.023370) | 1.436691 / 1.452155 (-0.015463) | 1.551397 / 1.492716 (0.058680) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265201 / 0.018006 (0.247195) | 0.561855 / 0.000490 (0.561366) | 0.000463 / 0.000200 (0.000263) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030540 / 0.037411 (-0.006871) | 0.118815 / 0.014526 (0.104289) | 0.127689 / 0.176557 (-0.048868) | 0.176211 / 0.737135 (-0.560924) | 0.133130 / 0.296338 (-0.163208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416318 / 0.215209 (0.201109) | 4.146806 / 2.077655 (2.069151) | 1.983437 / 1.504120 (0.479317) | 1.799733 / 1.541195 (0.258539) | 1.889026 / 1.468490 (0.420536) | 0.723330 / 4.584777 (-3.861447) | 3.817795 / 3.745712 (0.072083) | 2.158449 / 5.269862 (-3.111413) | 1.377348 / 4.565676 (-3.188328) | 0.088504 / 0.424275 (-0.335771) | 0.012560 / 0.007607 (0.004953) | 0.530382 / 0.226044 (0.304337) | 5.308529 / 2.268929 (3.039600) | 2.469655 / 55.444624 (-52.974970) | 2.136209 / 6.876477 (-4.740267) | 2.322997 / 2.142072 (0.180924) | 0.861396 / 4.805227 (-3.943831) | 0.172747 / 6.500664 (-6.327917) | 0.067617 / 0.075469 (-0.007852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.263225 / 1.841788 (-0.578563) | 15.878025 / 8.074308 (7.803717) | 14.815627 / 10.191392 (4.624235) | 0.148722 / 0.680424 (-0.531702) | 0.018071 / 0.534201 (-0.516130) | 0.428389 / 0.579283 (-0.150894) | 0.428635 / 0.434364 (-0.005729) | 0.496953 / 0.540337 (-0.043385) | 0.592783 / 1.386936 (-0.794153) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d2e5568dc7a47f9a99678d2889bd2e3c33afdd00 \"CML watermark\")\n" ]
null
[]
Allow running CI on push to ci-branch
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5790/timeline
This PR allows running the CI on push to a branch named "ci-*", without needing to open a PR. - This will allow running CI tests without opening a PR, e.g., for future `huggingface-hub` releases or future dependency releases (like `fsspec`, `pandas`, ...). Note that to build the documentation, we already allow it on push to a branch named "doc-builder*". See: - #5788 CC: @Wauplin
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5790.diff", "html_url": "https://github.com/huggingface/datasets/pull/5790", "merged_at": "2023-04-26T13:35:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/5790.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5790" }
1683229126
https://api.github.com/repos/huggingface/datasets/issues/5790/comments
PR_kwDODunzps5PG0mJ
null
5790
https://api.github.com/repos/huggingface/datasets/issues/5790/events
true
open
2023-04-25T07:40:02Z
null
https://api.github.com/repos/huggingface/datasets/issues/5789
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5789/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5789/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/issues/5789
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-25T07:40:03Z
null
null
[]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Support streaming datasets that use jsonlines
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5789/timeline
Extend support for streaming datasets that use `jsonlines.open`. Currently, if `jsonlines` is installed, `datasets` raises a `FileNotFoundError`: ``` FileNotFoundError: [Errno 2] No such file or directory: 'https://...' ``` See: - https://huggingface.co/datasets/masakhane/afriqa/discussions/1
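The failure is consistent with how streaming support typically works in `datasets`: the builtin `open` used inside loading scripts is patched so URLs are streamed, while `jsonlines.open` resolves paths through its own module and never sees that patch. A hedged workaround sketch for a loading script, parsing JSON Lines with the patched builtin instead:

```python
import json


def _generate_examples(filepath):
    # Use the builtin open(), which datasets patches in streaming mode,
    # rather than jsonlines.open(), which bypasses the patch and looks for
    # the URL on the local filesystem (hence the FileNotFoundError).
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            if line.strip():
                yield idx, json.loads(line)
```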
https://api.github.com/repos/huggingface/datasets
null
1682611179
https://api.github.com/repos/huggingface/datasets/issues/5789/comments
I_kwDODunzps5kSpvr
null
5789
https://api.github.com/repos/huggingface/datasets/issues/5789/events
false
closed
2023-04-24T12:13:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/5788
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5788/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5788/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Wauplin", "id": 11801849, "login": "Wauplin", "node_id": "MDQ6VXNlcjExODAxODQ5", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "repos_url": "https://api.github.com/users/Wauplin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "type": "User", "url": "https://api.github.com/users/Wauplin" }
https://github.com/huggingface/datasets/pull/5788
[]
false
2023-04-25T14:32:56Z
2023-04-25T14:25:30Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007343 / 0.011353 (-0.004010) | 0.005145 / 0.011008 (-0.005863) | 0.099820 / 0.038508 (0.061312) | 0.033487 / 0.023109 (0.010378) | 0.313069 / 0.275898 (0.037171) | 0.335420 / 0.323480 (0.011940) | 0.005959 / 0.007986 (-0.002027) | 0.005373 / 0.004328 (0.001044) | 0.076568 / 0.004250 (0.072317) | 0.048702 / 0.037052 (0.011650) | 0.322957 / 0.258489 (0.064468) | 0.363044 / 0.293841 (0.069203) | 0.035070 / 0.128546 (-0.093476) | 0.012029 / 0.075646 (-0.063618) | 0.334664 / 0.419271 (-0.084607) | 0.050549 / 0.043533 (0.007017) | 0.310113 / 0.255139 (0.054974) | 0.324405 / 0.283200 (0.041205) | 0.097596 / 0.141683 (-0.044087) | 1.440741 / 1.452155 (-0.011414) | 1.531194 / 1.492716 (0.038478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220799 / 0.018006 (0.202793) | 0.438158 / 0.000490 (0.437668) | 0.007737 / 0.000200 (0.007537) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026888 / 0.037411 (-0.010523) | 0.106281 / 0.014526 (0.091755) | 0.117419 / 0.176557 (-0.059138) | 0.179144 / 0.737135 (-0.557992) | 0.122477 / 0.296338 (-0.173861) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412667 / 0.215209 (0.197458) | 4.108784 / 2.077655 (2.031129) | 
1.834300 / 1.504120 (0.330180) | 1.627256 / 1.541195 (0.086061) | 1.691036 / 1.468490 (0.222546) | 0.713405 / 4.584777 (-3.871372) | 3.839262 / 3.745712 (0.093550) | 2.108453 / 5.269862 (-3.161408) | 1.340740 / 4.565676 (-3.224936) | 0.087776 / 0.424275 (-0.336499) | 0.012730 / 0.007607 (0.005123) | 0.505323 / 0.226044 (0.279279) | 5.085176 / 2.268929 (2.816247) | 2.307165 / 55.444624 (-53.137459) | 1.936771 / 6.876477 (-4.939706) | 2.097391 / 2.142072 (-0.044681) | 0.856215 / 4.805227 (-3.949012) | 0.171826 / 6.500664 (-6.328838) | 0.066603 / 0.075469 (-0.008866) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202126 / 1.841788 (-0.639661) | 15.173598 / 8.074308 (7.099290) | 15.012645 / 10.191392 (4.821253) | 0.162187 / 0.680424 (-0.518237) | 0.017462 / 0.534201 (-0.516739) | 0.423895 / 0.579283 (-0.155388) | 0.432010 / 0.434364 (-0.002354) | 0.503234 / 0.540337 (-0.037104) | 0.598948 / 1.386936 (-0.787988) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007099 / 0.011353 (-0.004254) | 0.005167 / 0.011008 (-0.005841) | 0.075551 / 0.038508 (0.037043) | 0.033050 / 0.023109 (0.009940) | 0.339629 / 0.275898 (0.063731) | 0.380486 / 0.323480 (0.057006) | 0.005776 / 0.007986 (-0.002209) | 0.004029 / 0.004328 (-0.000299) | 0.075074 / 0.004250 (0.070823) | 0.046709 / 0.037052 (0.009656) | 0.340203 / 0.258489 (0.081714) | 0.380849 / 0.293841 (0.087008) | 0.035027 / 0.128546 (-0.093519) | 0.012226 / 0.075646 (-0.063420) | 0.087525 / 0.419271 (-0.331747) | 0.049361 / 0.043533 (0.005828) | 0.341854 / 0.255139 (0.086715) | 0.359590 / 0.283200 (0.076390) | 0.100102 / 0.141683 (-0.041581) | 1.482759 / 1.452155 (0.030605) | 1.569905 / 1.492716 (0.077189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213615 / 0.018006 (0.195609) | 0.441117 / 0.000490 (0.440628) | 0.004932 / 0.000200 (0.004732) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031313 / 0.037411 (-0.006098) | 0.110191 / 0.014526 (0.095665) | 0.125320 / 0.176557 (-0.051237) | 0.177658 / 0.737135 (-0.559477) | 0.127928 / 0.296338 (-0.168410) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426952 / 0.215209 (0.211743) | 4.247731 / 2.077655 (2.170076) | 2.107318 / 1.504120 (0.603198) | 1.843845 / 1.541195 (0.302650) | 1.894822 / 1.468490 (0.426332) | 0.696232 / 4.584777 (-3.888545) | 3.826516 / 3.745712 (0.080804) | 2.126688 / 5.269862 (-3.143174) | 1.327062 / 4.565676 (-3.238615) | 0.085693 / 0.424275 (-0.338582) | 0.012226 / 0.007607 (0.004619) | 0.521904 / 0.226044 (0.295859) | 5.219798 / 2.268929 (2.950869) | 2.524908 / 55.444624 (-52.919716) | 2.212078 / 6.876477 (-4.664399) | 2.373944 / 2.142072 (0.231871) | 0.833846 / 4.805227 (-3.971381) | 0.169639 / 6.500664 (-6.331025) | 0.064538 / 0.075469 (-0.010931) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254930 / 1.841788 (-0.586858) | 15.585277 / 8.074308 (7.510969) | 14.762857 / 10.191392 (4.571465) | 0.146959 / 0.680424 (-0.533465) | 0.017451 / 0.534201 (-0.516750) | 0.424469 / 0.579283 (-0.154814) | 0.422359 / 0.434364 (-0.012004) | 0.489930 / 0.540337 (-0.050408) | 0.595856 / 1.386936 (-0.791080) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#213c72f52ae52b662f967d3218f66c70a3043048 \"CML watermark\")\n", "@albertvillanova thanks for the review. As you prefer for the github CI config. I just took it from @lhoestq's branch when testing hfh==0.14.0. I think it's still relevant for next releases. 
In any case, I let you handle merging the PR :)", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008371 / 0.011353 (-0.002982) | 0.005210 / 0.011008 (-0.005798) | 0.105639 / 0.038508 (0.067131) | 0.045903 / 0.023109 (0.022794) | 0.391231 / 0.275898 (0.115333) | 0.438824 / 0.323480 (0.115345) | 0.006270 / 0.007986 (-0.001715) | 0.005950 / 0.004328 (0.001621) | 0.079685 / 0.004250 (0.075434) | 0.052121 / 0.037052 (0.015069) | 0.387787 / 0.258489 (0.129298) | 0.434322 / 0.293841 (0.140481) | 0.032598 / 0.128546 (-0.095948) | 0.012126 / 0.075646 (-0.063520) | 0.359658 / 0.419271 (-0.059613) | 0.046686 / 0.043533 (0.003154) | 0.391973 / 0.255139 (0.136834) | 0.421149 / 0.283200 (0.137949) | 0.105920 / 0.141683 (-0.035763) | 1.483008 / 1.452155 (0.030854) | 1.617010 / 1.492716 (0.124294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199111 / 0.018006 (0.181105) | 0.407995 / 0.000490 (0.407505) | 0.006706 / 0.000200 (0.006506) | 0.000229 / 0.000054 (0.000175) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030247 / 0.037411 (-0.007164) | 0.115977 / 0.014526 (0.101451) | 0.118112 / 0.176557 (-0.058444) | 0.182710 / 0.737135 (-0.554426) | 0.122483 / 0.296338 (-0.173855) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430455 / 0.215209 (0.215246) | 4.314298 / 2.077655 (2.236643) | 1.898124 / 1.504120 (0.394005) | 
1.734909 / 1.541195 (0.193715) | 1.802400 / 1.468490 (0.333910) | 0.717237 / 4.584777 (-3.867539) | 4.004705 / 3.745712 (0.258993) | 2.138901 / 5.269862 (-3.130960) | 1.254037 / 4.565676 (-3.311640) | 0.085594 / 0.424275 (-0.338681) | 0.013774 / 0.007607 (0.006166) | 0.535218 / 0.226044 (0.309174) | 5.373730 / 2.268929 (3.104801) | 2.371194 / 55.444624 (-53.073430) | 2.111206 / 6.876477 (-4.765270) | 2.225137 / 2.142072 (0.083064) | 0.838325 / 4.805227 (-3.966902) | 0.159176 / 6.500664 (-6.341488) | 0.072285 / 0.075469 (-0.003184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352232 / 1.841788 (-0.489555) | 16.926722 / 8.074308 (8.852414) | 16.709531 / 10.191392 (6.518139) | 0.159249 / 0.680424 (-0.521175) | 0.017667 / 0.534201 (-0.516534) | 0.426894 / 0.579283 (-0.152390) | 0.539903 / 0.434364 (0.105539) | 0.537471 / 0.540337 (-0.002866) | 0.619592 / 1.386936 (-0.767344) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008354 / 0.011353 (-0.002999) | 0.005366 / 0.011008 (-0.005642) | 0.080961 / 0.038508 (0.042453) | 0.046574 / 0.023109 (0.023465) | 0.345949 / 0.275898 (0.070051) | 0.394041 / 0.323480 (0.070562) | 0.006209 / 0.007986 (-0.001777) | 0.005980 / 0.004328 (0.001651) | 0.076235 / 0.004250 (0.071984) | 0.051833 / 0.037052 (0.014780) | 0.348786 / 0.258489 (0.090297) | 0.397421 / 0.293841 (0.103580) | 0.033026 / 0.128546 (-0.095520) | 0.012217 / 0.075646 (-0.063429) | 0.087439 / 0.419271 (-0.331832) | 0.045488 / 0.043533 (0.001955) | 0.352160 / 0.255139 (0.097021) | 0.379079 / 0.283200 (0.095879) | 0.116111 / 0.141683 (-0.025572) | 1.470177 / 1.452155 (0.018022) | 1.587499 / 1.492716 (0.094783) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296149 / 0.018006 (0.278143) | 0.592362 / 0.000490 (0.591872) | 0.000492 / 0.000200 (0.000292) | 0.000064 / 0.000054 (0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | 
shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036599 / 0.037411 (-0.000813) | 0.113768 / 0.014526 (0.099242) | 0.116198 / 0.176557 (-0.060358) | 0.180329 / 0.737135 (-0.556806) | 0.123942 / 0.296338 (-0.172396) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.452445 / 0.215209 (0.237236) | 4.504330 / 2.077655 (2.426675) | 2.275645 / 1.504120 (0.771525) | 2.107765 / 1.541195 (0.566571) | 2.086363 / 1.468490 (0.617873) | 0.723721 / 4.584777 (-3.861056) | 3.825330 / 3.745712 (0.079618) | 2.162743 / 5.269862 (-3.107119) | 1.255953 / 4.565676 (-3.309724) | 0.085860 / 0.424275 (-0.338415) | 0.013790 / 0.007607 (0.006183) | 0.560257 / 0.226044 (0.334213) | 5.618180 / 2.268929 (3.349251) | 2.625423 / 55.444624 (-52.819202) | 2.374381 / 6.876477 (-4.502095) | 2.496560 / 2.142072 (0.354488) | 0.841120 / 4.805227 (-3.964107) | 0.161541 / 6.500664 (-6.339123) | 0.075270 / 0.075469 (-0.000199) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.432916 / 1.841788 (-0.408872) | 14.858534 / 8.074308 (6.784226) | 14.973521 / 10.191392 (4.782129) | 0.148312 / 0.680424 (-0.532112) | 0.016811 / 0.534201 (-0.517390) | 0.382623 / 0.579283 (-0.196660) | 0.389767 / 0.434364 (-0.044596) | 0.449657 / 0.540337 (-0.090680) | 0.533723 / 1.386936 (-0.853214) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f8344350f15265a585188ac986ae49a8ed8289fe \"CML watermark\")\n", "I agree it is good to have a way to run the CI on push, without needing to open a PR.\r\n\r\nBut I think the branch name should be more generic (and this is not specific to this PR). 
See:\r\n- #5790 ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007208 / 0.011353 (-0.004145) | 0.005600 / 0.011008 (-0.005408) | 0.096129 / 0.038508 (0.057621) | 0.027834 / 0.023109 (0.004725) | 0.295106 / 0.275898 (0.019208) | 0.323983 / 0.323480 (0.000503) | 0.005164 / 0.007986 (-0.002822) | 0.003962 / 0.004328 (-0.000366) | 0.078339 / 0.004250 (0.074089) | 0.036974 / 0.037052 (-0.000078) | 0.310315 / 0.258489 (0.051826) | 0.338036 / 0.293841 (0.044195) | 0.042124 / 0.128546 (-0.086422) | 0.015886 / 0.075646 (-0.059760) | 0.337961 / 0.419271 (-0.081310) | 0.051507 / 0.043533 (0.007974) | 0.297505 / 0.255139 (0.042366) | 0.310728 / 0.283200 (0.027528) | 0.086312 / 0.141683 (-0.055371) | 1.356923 / 1.452155 (-0.095232) | 1.429366 / 1.492716 (-0.063350) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205495 / 0.018006 (0.187489) | 0.460639 / 0.000490 (0.460149) | 0.003996 / 0.000200 (0.003796) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021970 / 0.037411 (-0.015442) | 0.090283 / 0.014526 (0.075757) | 0.098579 / 0.176557 (-0.077978) | 0.160437 / 0.737135 (-0.576699) | 0.102738 / 0.296338 (-0.193600) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494474 / 0.215209 (0.279265) | 4.967453 / 2.077655 (2.889799) | 2.045852 / 1.504120 (0.541732) | 1.858022 / 1.541195 (0.316827) | 
1.771874 / 1.468490 (0.303384) | 1.186368 / 4.584777 (-3.398408) | 4.974762 / 3.745712 (1.229050) | 2.616225 / 5.269862 (-2.653636) | 1.702971 / 4.565676 (-2.862705) | 0.124929 / 0.424275 (-0.299346) | 0.011774 / 0.007607 (0.004167) | 0.569643 / 0.226044 (0.343598) | 5.793114 / 2.268929 (3.524186) | 2.441561 / 55.444624 (-53.003064) | 1.862233 / 6.876477 (-5.014243) | 1.931142 / 2.142072 (-0.210931) | 1.148915 / 4.805227 (-3.656313) | 0.203914 / 6.500664 (-6.296750) | 0.062468 / 0.075469 (-0.013001) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188708 / 1.841788 (-0.653080) | 13.710830 / 8.074308 (5.636522) | 15.695153 / 10.191392 (5.503761) | 0.171467 / 0.680424 (-0.508957) | 0.024509 / 0.534201 (-0.509692) | 0.450270 / 0.579283 (-0.129014) | 0.500712 / 0.434364 (0.066348) | 0.488632 / 0.540337 (-0.051706) | 0.574893 / 1.386936 (-0.812043) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007254 / 0.011353 (-0.004099) | 0.006199 / 0.011008 (-0.004809) | 0.072079 / 0.038508 (0.033571) | 0.026909 / 0.023109 (0.003800) | 0.355538 / 0.275898 (0.079640) | 0.358625 / 0.323480 (0.035145) | 0.005564 / 0.007986 (-0.002421) | 0.005278 / 0.004328 (0.000950) | 0.076469 / 0.004250 (0.072219) | 0.038269 / 0.037052 (0.001216) | 0.355214 / 0.258489 (0.096725) | 0.383219 / 0.293841 (0.089378) | 0.046516 / 0.128546 (-0.082030) | 0.015393 / 0.075646 (-0.060254) | 0.088506 / 0.419271 (-0.330765) | 0.050326 / 0.043533 (0.006793) | 0.327265 / 0.255139 (0.072126) | 0.370176 / 0.283200 (0.086976) | 0.102438 / 0.141683 (-0.039245) | 1.378969 / 1.452155 (-0.073186) | 1.441998 / 1.492716 (-0.050719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209044 / 0.018006 (0.191038) | 0.455733 / 0.000490 (0.455243) | 0.005856 / 0.000200 (0.005656) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025336 / 0.037411 (-0.012075) | 0.097449 / 0.014526 (0.082923) | 0.106301 / 0.176557 (-0.070255) | 0.153053 / 0.737135 (-0.584082) | 0.107938 / 0.296338 (-0.188401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.491070 / 0.215209 (0.275861) | 5.049637 / 2.077655 (2.971982) | 2.064709 / 1.504120 (0.560589) | 1.782266 / 1.541195 (0.241072) | 1.798570 / 1.468490 (0.330080) | 0.988886 / 4.584777 (-3.595891) | 4.690324 / 3.745712 (0.944612) | 4.317355 / 5.269862 (-0.952507) | 2.347596 / 4.565676 (-2.218081) | 0.117249 / 0.424275 (-0.307026) | 0.011614 / 0.007607 (0.004007) | 0.630033 / 0.226044 (0.403988) | 6.140108 / 2.268929 (3.871180) | 2.638080 / 55.444624 (-52.806545) | 2.133017 / 6.876477 (-4.743459) | 2.123392 / 2.142072 (-0.018680) | 1.178056 / 4.805227 (-3.627171) | 0.209465 / 6.500664 (-6.291199) | 0.063234 / 0.075469 (-0.012235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238089 / 1.841788 (-0.603699) | 14.066866 / 8.074308 (5.992558) | 16.225480 / 10.191392 (6.034088) | 0.206466 / 0.680424 (-0.473958) | 0.027279 / 0.534201 (-0.506922) | 0.443006 / 0.579283 (-0.136277) | 0.509512 / 0.434364 (0.075148) | 0.479075 / 0.540337 (-0.061263) | 0.573546 / 1.386936 (-0.813390) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c6015a070c66a5bbd84603d415ccc57cb668b44b \"CML watermark\")\n" ]
null
[]
Prepare tests for hfh 0.14
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5788/timeline
Related to the coming release of `huggingface_hub==0.14.0`, which will break some internal tests. This PR fixes those tests. Let's double-check the CI, but I expect the fixed tests to run fine with both `hfh<=0.13.4` and `hfh==0.14`. Worst-case scenario, existing PRs will have to be rebased once this fix is merged. See the related [discussion](https://huggingface.slack.com/archives/C02V5EA0A95/p1682337463368609?thread_ts=1681994202.635609&cid=C02V5EA0A95) (private Slack). cc @lhoestq
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5788.diff", "html_url": "https://github.com/huggingface/datasets/pull/5788", "merged_at": "2023-04-25T14:25:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/5788.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5788" }
1,681,136,256
https://api.github.com/repos/huggingface/datasets/issues/5788/comments
PR_kwDODunzps5O_v4B
null
5,788
https://api.github.com/repos/huggingface/datasets/issues/5788/events
true
closed
2023-04-24T10:44:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/5787
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5787/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5787/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5787
[]
false
2023-04-27T13:06:01Z
2023-04-27T12:57:28Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think you can revert the last commit - it should fail if data_files={} IMO", "The validation of non-empty data_files is addressed in this PR:\r\n- #5802", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008622 / 0.011353 (-0.002730) | 0.005970 / 0.011008 (-0.005038) | 0.117797 / 0.038508 (0.079289) | 0.040955 / 0.023109 (0.017846) | 0.419538 / 0.275898 (0.143640) | 0.455816 / 0.323480 (0.132336) | 0.006481 / 0.007986 (-0.001505) | 0.004507 / 0.004328 (0.000178) | 0.089073 / 0.004250 (0.084822) | 0.052389 / 0.037052 (0.015337) | 0.420053 / 0.258489 (0.161564) | 0.466886 / 0.293841 (0.173045) | 0.042660 / 0.128546 (-0.085886) | 0.014673 / 0.075646 (-0.060973) | 0.411229 / 0.419271 (-0.008042) | 0.076993 / 0.043533 (0.033460) | 0.431693 / 0.255139 (0.176554) | 0.446283 / 0.283200 (0.163084) | 0.131408 / 0.141683 (-0.010275) | 1.820339 / 1.452155 (0.368184) | 1.952946 / 1.492716 (0.460230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246543 / 0.018006 (0.228537) | 0.489806 / 0.000490 (0.489317) | 0.013999 / 0.000200 (0.013800) | 0.000323 / 0.000054 (0.000269) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032541 / 0.037411 (-0.004870) | 0.130569 / 0.014526 (0.116043) | 0.139630 / 0.176557 (-0.036926) | 0.217018 / 0.737135 (-0.520118) | 0.147914 / 0.296338 (-0.148425) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.494767 / 0.215209 (0.279558) | 4.949313 / 2.077655 (2.871658) | 2.277023 / 1.504120 (0.772903) | 2.036677 / 1.541195 (0.495482) | 2.064461 / 1.468490 (0.595970) | 0.842484 / 4.584777 (-3.742293) | 4.720646 / 3.745712 (0.974934) | 4.025673 / 5.269862 (-1.244189) | 2.198606 / 4.565676 (-2.367070) | 0.103042 / 0.424275 (-0.321233) | 0.014794 / 0.007607 (0.007187) | 0.617867 / 0.226044 (0.391822) | 6.197146 / 2.268929 (3.928218) | 2.804927 / 55.444624 (-52.639697) | 2.426420 / 6.876477 (-4.450057) | 2.515182 / 2.142072 (0.373109) | 1.008098 / 4.805227 (-3.797129) | 0.204982 / 6.500664 (-6.295682) | 0.078643 / 0.075469 (0.003174) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.490790 / 1.841788 (-0.350997) | 17.268042 / 8.074308 (9.193734) | 17.129647 / 10.191392 (6.938255) | 0.170351 / 0.680424 (-0.510073) | 0.021317 / 0.534201 (-0.512884) | 0.517068 / 0.579283 (-0.062215) | 0.500200 / 0.434364 (0.065836) | 0.641974 / 0.540337 (0.101637) | 0.763984 / 1.386936 (-0.622952) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008358 / 0.011353 (-0.002995) | 0.005710 / 0.011008 (-0.005298) | 0.091077 / 0.038508 (0.052569) | 0.040413 / 0.023109 (0.017303) | 0.416634 / 0.275898 (0.140736) | 0.451122 / 0.323480 (0.127642) | 0.006417 / 0.007986 (-0.001569) | 0.004360 / 0.004328 (0.000032) | 0.089543 / 0.004250 (0.085292) | 0.051137 / 0.037052 (0.014085) | 0.420228 / 0.258489 (0.161739) | 0.458649 / 0.293841 (0.164808) | 0.041828 / 0.128546 (-0.086718) | 0.014268 / 0.075646 (-0.061379) | 0.105301 / 0.419271 (-0.313970) | 0.058931 / 0.043533 (0.015398) | 0.413445 / 0.255139 (0.158306) | 0.443882 / 0.283200 (0.160682) | 0.124946 / 0.141683 (-0.016737) | 1.842259 / 1.452155 (0.390104) | 1.948162 / 1.492716 (0.455445) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.235799 / 0.018006 (0.217792) | 0.487667 / 0.000490 (0.487177) | 0.001112 / 0.000200 (0.000912) | 0.000094 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034233 / 0.037411 (-0.003178) | 0.136593 / 0.014526 (0.122068) | 0.145598 / 0.176557 (-0.030959) | 0.206545 / 0.737135 (-0.530590) | 0.150781 / 0.296338 (-0.145558) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522345 / 0.215209 (0.307136) | 5.192092 / 2.077655 (3.114438) | 2.543182 / 1.504120 (1.039062) | 2.285212 / 1.541195 (0.744018) | 2.312803 / 1.468490 (0.844313) | 0.859334 / 4.584777 (-3.725443) | 4.620235 / 3.745712 (0.874523) | 3.964060 / 5.269862 (-1.305802) | 2.046347 / 4.565676 (-2.519330) | 0.105284 / 0.424275 (-0.318991) | 0.015051 / 0.007607 (0.007444) | 0.646530 / 0.226044 (0.420485) | 6.386396 / 2.268929 (4.117467) | 3.131833 / 55.444624 (-52.312791) | 2.761898 / 6.876477 (-4.114579) | 2.833216 / 2.142072 (0.691143) | 1.026024 / 4.805227 (-3.779204) | 0.206776 / 6.500664 (-6.293888) | 0.078845 / 0.075469 (0.003376) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.580851 / 1.841788 (-0.260937) | 17.826213 / 8.074308 (9.751905) | 16.929460 / 10.191392 (6.738068) | 0.232483 / 0.680424 (-0.447941) | 0.021123 / 0.534201 (-0.513078) | 0.522196 / 0.579283 (-0.057087) | 0.503495 / 0.434364 (0.069131) | 0.622777 / 0.540337 (0.082440) | 0.753272 / 1.386936 (-0.633664) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3f9dfbd93707665132abc862b14bb9b50597b739 \"CML watermark\")\n" ]
null
[]
Fix inferring module for unsupported data files
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5787/timeline
This PR raises a FileNotFoundError instead of the previous uninformative TypeError: ``` FileNotFoundError: No (supported) data files or dataset script found in <dataset_name> ``` Fix #5785.
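A minimal sketch of the guard this PR describes (the function name and signature below are illustrative, not the actual `datasets` internals):

```python
# Hedged sketch: names are illustrative, not the real datasets code.
def infer_module_for_data_files(data_files, dataset_name):
    if not data_files:
        # Previously this path iterated over None and surfaced as
        # "TypeError: 'NoneType' object is not iterable".
        raise FileNotFoundError(
            f"No (supported) data files or dataset script found in {dataset_name}"
        )
    # ... otherwise infer the builder module (csv, json, parquet, ...)
```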
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5787.diff", "html_url": "https://github.com/huggingface/datasets/pull/5787", "merged_at": "2023-04-27T12:57:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/5787.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5787" }
1,680,965,959
https://api.github.com/repos/huggingface/datasets/issues/5787/comments
PR_kwDODunzps5O_KNU
null
5,787
https://api.github.com/repos/huggingface/datasets/issues/5787/events
true
closed
2023-04-24T10:38:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/5786
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5786/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5786/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/44556846?v=4", "events_url": "https://api.github.com/users/HugoLaurencon/events{/privacy}", "followers_url": "https://api.github.com/users/HugoLaurencon/followers", "following_url": "https://api.github.com/users/HugoLaurencon/following{/other_user}", "gists_url": "https://api.github.com/users/HugoLaurencon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HugoLaurencon", "id": 44556846, "login": "HugoLaurencon", "node_id": "MDQ6VXNlcjQ0NTU2ODQ2", "organizations_url": "https://api.github.com/users/HugoLaurencon/orgs", "received_events_url": "https://api.github.com/users/HugoLaurencon/received_events", "repos_url": "https://api.github.com/users/HugoLaurencon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HugoLaurencon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HugoLaurencon/subscriptions", "type": "User", "url": "https://api.github.com/users/HugoLaurencon" }
https://github.com/huggingface/datasets/issues/5786
[]
false
2023-05-30T09:56:30Z
2023-04-24T10:43:58Z
null
[ "Hi ! PyTorch may hang when calling `load_state_dict()` in a subprocess. To fix that, set the multiprocessing start method to \"spawn\". Since `datasets` uses `multiprocess`, you should do:\r\n\r\n```python\r\n# Required to avoid issues with pytorch (otherwise hangs during load_state_dict in multiprocessing)\r\nimport multiprocess.context as ctx\r\nctx._force_start_method('spawn')\r\n```\r\n\r\nAlso make sure to run your main code in `if __name__ == \"__main__\":` to avoid issues with python multiprocesing", "Thanks!", "@lhoestq Hello, I also encountered this problem but maybe with another reason. Here is my code:\r\n```python\r\ntokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir, model_max_length=training_args.model_max_length)\r\ndata = load_dataset(\"json\", data_files=data_args.train_file, cache_dir=data_args.data_cache_dir)\r\ndef func(samples):\r\n # main operation\r\n for sentence_value in samples:\r\n sentence_ids = tokenizer.encode(sentence_value, add_special_tokens=False, max_length=tokenizer.model_max_length, truncation=True)\r\n ... ...\r\ntrain_data = data[\"train\"].shuffle().map(func, num_proc=os.cpu_count())\r\n```\r\nIt hangs after the progress reaches 100%. Could you help me point out the reason?", "@SkyAndCloud your issue doesn't seem related to the original post - could you open a new issue and provide more details ? (size of the dataset, number of cpus, how much time it took to run, `datasets` version)", "@lhoestq Hi, I just solved this problem. Because the input is extremely long and the tokenizer requests a large amount of memory, which leads to a OOM error and may eventually causes the hang. I didn't filter those too-long sentences because I thought `tokenizer` would stop once the length exceeds the `max_length`. However, it actually firstly complete the tokenization of entire sentence and then truncate it." ]
completed
[]
Multiprocessing in a `filter` or `map` function with a Pytorch model
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5786/timeline
### Describe the bug I am trying to use a PyTorch model, loaded on CPU, with multiple processes via a `.map` or a `.filter` method. Usually, when dealing with models that are non-picklable, creating a class whose `__call__` method is the `map` function and adding `__reduce__` helps to solve the problem. However, here, the command hangs without throwing an error. ### Steps to reproduce the bug ``` from datasets import Dataset import torch from torch import nn from torchvision import models class FilterFunction: #__slots__ = ("path_model", "model") # Doesn't change anything uncommented def __init__(self, path_model): self.path_model = path_model model = models.resnet50() model.fc = nn.Sequential( nn.Linear(2048, 512), nn.ReLU(), nn.Dropout(0.2), nn.Linear(512, 10), nn.LogSoftmax(dim=1) ) model.load_state_dict(torch.load(path_model, map_location=torch.device("cpu"))) model.eval() self.model = model def __call__(self, batch): return [True] * len(batch["id"]) # Comment this to have an error def __reduce__(self): return (self.__class__, (self.path_model,)) dataset = Dataset.from_dict({"id": [0, 1, 2, 4]}) # Download (100 MB) at https://github.com/emiliantolo/pytorch_nsfw_model/raw/master/ResNet50_nsfw_model.pth path_model = "/fsx/hugo/nsfw_image/ResNet50_nsfw_model.pth" filter_function = FilterFunction(path_model=path_model) # Works filtered_dataset = dataset.filter(filter_function, num_proc=1, batched=True, batch_size=2) # Doesn't work filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2) ``` ### Expected behavior The command `filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)` should work and not hang. ### Environment info Datasets: 2.11.0 Pyarrow: 11.0.0 Ubuntu
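For reference, a short sketch applying the workaround from the thread (it reuses `dataset` and `filter_function` from the snippet above; the `_force_start_method` call is the one suggested by the maintainer):

```python
# Force the "spawn" start method so load_state_dict doesn't hang in workers
# (datasets uses `multiprocess` internally, hence multiprocess.context).
import multiprocess.context as ctx

ctx._force_start_method("spawn")

if __name__ == "__main__":  # required with the "spawn" start method
    filtered_dataset = dataset.filter(
        filter_function, num_proc=2, batched=True, batch_size=2
    )
```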
https://api.github.com/repos/huggingface/datasets
null
1,680,957,070
https://api.github.com/repos/huggingface/datasets/issues/5786/comments
I_kwDODunzps5kMV6O
null
5,786
https://api.github.com/repos/huggingface/datasets/issues/5786/events
false
closed
2023-04-24T10:38:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/5785
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5785/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5785/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/issues/5785
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-27T12:57:30Z
2023-04-27T12:57:30Z
null
[]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
Unsupported data files raise TypeError: 'NoneType' object is not iterable
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5785/timeline
Currently, we raise a TypeError for unsupported data files: ``` TypeError: 'NoneType' object is not iterable ``` See: - https://github.com/huggingface/datasets-server/issues/1073 We should give a more informative error message.
https://api.github.com/repos/huggingface/datasets
null
1,680,956,964
https://api.github.com/repos/huggingface/datasets/issues/5785/comments
I_kwDODunzps5kMV4k
null
5,785
https://api.github.com/repos/huggingface/datasets/issues/5785/events
false
closed
2023-04-24T10:34:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/5784
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5784/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5784/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5784
[]
false
2023-04-26T16:04:42Z
2023-04-26T15:54:44Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008959 / 0.011353 (-0.002394) | 0.005804 / 0.011008 (-0.005204) | 0.112663 / 0.038508 (0.074155) | 0.043406 / 0.023109 (0.020297) | 0.348582 / 0.275898 (0.072684) | 0.382332 / 0.323480 (0.058852) | 0.007469 / 0.007986 (-0.000517) | 0.006211 / 0.004328 (0.001883) | 0.086576 / 0.004250 (0.082326) | 0.059223 / 0.037052 (0.022170) | 0.361051 / 0.258489 (0.102562) | 0.411359 / 0.293841 (0.117518) | 0.043640 / 0.128546 (-0.084906) | 0.014239 / 0.075646 (-0.061408) | 0.389729 / 0.419271 (-0.029542) | 0.072319 / 0.043533 (0.028786) | 0.351025 / 0.255139 (0.095886) | 0.371893 / 0.283200 (0.088693) | 0.125994 / 0.141683 (-0.015688) | 1.675249 / 1.452155 (0.223094) | 1.808740 / 1.492716 (0.316024) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255172 / 0.018006 (0.237166) | 0.536003 / 0.000490 (0.535514) | 0.000365 / 0.000200 (0.000165) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031989 / 0.037411 (-0.005423) | 0.126854 / 0.014526 (0.112328) | 0.142458 / 0.176557 (-0.034098) | 0.207821 / 0.737135 (-0.529314) | 0.145610 / 0.296338 (-0.150728) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468924 / 0.215209 (0.253715) | 4.696677 / 2.077655 (2.619023) | 2.183133 
/ 1.504120 (0.679013) | 1.994219 / 1.541195 (0.453024) | 2.101375 / 1.468490 (0.632885) | 0.827168 / 4.584777 (-3.757609) | 4.710167 / 3.745712 (0.964455) | 2.377062 / 5.269862 (-2.892800) | 1.712245 / 4.565676 (-2.853431) | 0.100620 / 0.424275 (-0.323655) | 0.014302 / 0.007607 (0.006695) | 0.590813 / 0.226044 (0.364769) | 5.871991 / 2.268929 (3.603063) | 2.722229 / 55.444624 (-52.722395) | 2.323585 / 6.876477 (-4.552892) | 2.503289 / 2.142072 (0.361217) | 0.983644 / 4.805227 (-3.821583) | 0.193942 / 6.500664 (-6.306722) | 0.076493 / 0.075469 (0.001024) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.463107 / 1.841788 (-0.378681) | 17.876918 / 8.074308 (9.802610) | 16.755740 / 10.191392 (6.564348) | 0.167556 / 0.680424 (-0.512868) | 0.020514 / 0.534201 (-0.513687) | 0.508385 / 0.579283 (-0.070898) | 0.505873 / 0.434364 (0.071509) | 0.603630 / 0.540337 (0.063293) | 0.708856 / 1.386936 (-0.678080) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008504 / 0.011353 (-0.002849) | 0.005894 / 0.011008 (-0.005114) | 0.085523 / 0.038508 (0.047015) | 0.038780 / 0.023109 (0.015671) | 0.402869 / 0.275898 (0.126971) | 0.423819 / 0.323480 (0.100339) | 0.006427 / 0.007986 (-0.001559) | 0.004598 / 0.004328 (0.000269) | 0.079807 / 0.004250 (0.075556) | 0.050852 / 0.037052 (0.013799) | 0.403232 / 0.258489 (0.144743) | 0.452489 / 0.293841 (0.158648) | 0.041501 / 0.128546 (-0.087045) | 0.014996 / 0.075646 (-0.060650) | 0.101548 / 0.419271 (-0.317724) | 0.056993 / 0.043533 (0.013461) | 0.403153 / 0.255139 (0.148014) | 0.424587 / 0.283200 (0.141388) | 0.114507 / 0.141683 (-0.027176) | 1.707098 / 1.452155 (0.254943) | 1.799008 / 1.492716 (0.306291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288003 / 0.018006 (0.269996) | 0.496526 / 0.000490 (0.496036) | 0.010923 / 0.000200 (0.010723) | 0.000159 / 0.000054 (0.000105) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033948 / 0.037411 (-0.003463) | 0.142343 / 0.014526 (0.127817) | 0.143862 / 0.176557 (-0.032695) | 0.202655 / 0.737135 (-0.534480) | 0.151177 / 0.296338 (-0.145162) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508003 / 0.215209 (0.292794) | 5.320394 / 2.077655 (3.242740) | 2.409854 / 1.504120 (0.905734) | 2.190656 / 1.541195 (0.649462) | 2.272171 / 1.468490 (0.803681) | 0.809492 / 4.584777 (-3.775285) | 4.554412 / 3.745712 (0.808699) | 4.413643 / 5.269862 (-0.856218) | 2.374034 / 4.565676 (-2.191642) | 0.099458 / 0.424275 (-0.324817) | 0.014553 / 0.007607 (0.006946) | 0.613916 / 0.226044 (0.387871) | 6.121430 / 2.268929 (3.852502) | 2.945661 / 55.444624 (-52.498964) | 2.595247 / 6.876477 (-4.281230) | 2.734047 / 2.142072 (0.591975) | 0.952217 / 4.805227 (-3.853010) | 0.196933 / 6.500664 (-6.303731) | 0.073391 / 0.075469 (-0.002078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.475666 / 1.841788 (-0.366122) | 18.564281 / 8.074308 (10.489973) | 16.865259 / 10.191392 (6.673867) | 0.166494 / 0.680424 (-0.513930) | 0.020655 / 0.534201 (-0.513546) | 0.495120 / 0.579283 (-0.084163) | 0.502602 / 0.434364 (0.068238) | 0.622448 / 0.540337 (0.082110) | 0.721036 / 1.386936 (-0.665900) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#40c204c777793d64e8bb8ce357e9c07b3b303e41 \"CML watermark\")\n", "Whoops mario you're off this week sorry. 
I'm taking the liberty to merge this one", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009079 / 0.011353 (-0.002274) | 0.005960 / 0.011008 (-0.005049) | 0.116530 / 0.038508 (0.078022) | 0.046649 / 0.023109 (0.023540) | 0.391906 / 0.275898 (0.116008) | 0.438892 / 0.323480 (0.115412) | 0.007134 / 0.007986 (-0.000851) | 0.004997 / 0.004328 (0.000668) | 0.085947 / 0.004250 (0.081697) | 0.059814 / 0.037052 (0.022762) | 0.396423 / 0.258489 (0.137934) | 0.455941 / 0.293841 (0.162100) | 0.042535 / 0.128546 (-0.086011) | 0.014667 / 0.075646 (-0.060980) | 0.402023 / 0.419271 (-0.017249) | 0.060381 / 0.043533 (0.016848) | 0.393829 / 0.255139 (0.138690) | 0.426557 / 0.283200 (0.143358) | 0.131519 / 0.141683 (-0.010163) | 1.758098 / 1.452155 (0.305943) | 1.848194 / 1.492716 (0.355478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236405 / 0.018006 (0.218399) | 0.611442 / 0.000490 (0.610952) | 0.005143 / 0.000200 (0.004943) | 0.000146 / 0.000054 (0.000092) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034317 / 0.037411 (-0.003094) | 0.182485 / 0.014526 (0.167959) | 0.183149 / 0.176557 (0.006592) | 0.293592 / 0.737135 (-0.443543) | 0.197137 / 0.296338 (-0.099202) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.475690 / 0.215209 (0.260481) | 4.757344 / 2.077655 (2.679690) | 2.184079 / 1.504120 (0.679959) | 1.956599 / 
1.541195 (0.415404) | 2.043041 / 1.468490 (0.574551) | 0.817602 / 4.584777 (-3.767175) | 6.432267 / 3.745712 (2.686555) | 5.999402 / 5.269862 (0.729541) | 3.095970 / 4.565676 (-1.469706) | 0.181589 / 0.424275 (-0.242686) | 0.023286 / 0.007607 (0.015679) | 1.090318 / 0.226044 (0.864274) | 7.919330 / 2.268929 (5.650401) | 2.702821 / 55.444624 (-52.741804) | 2.375442 / 6.876477 (-4.501034) | 2.543075 / 2.142072 (0.401003) | 1.011763 / 4.805227 (-3.793464) | 0.203676 / 6.500664 (-6.296988) | 0.080075 / 0.075469 (0.004606) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.875420 / 1.841788 (0.033632) | 23.059278 / 8.074308 (14.984970) | 19.250807 / 10.191392 (9.059415) | 0.323678 / 0.680424 (-0.356746) | 0.028682 / 0.534201 (-0.505519) | 0.698231 / 0.579283 (0.118948) | 0.668129 / 0.434364 (0.233765) | 0.831218 / 0.540337 (0.290880) | 0.941191 / 1.386936 (-0.445745) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013122 / 0.011353 (0.001769) | 0.006123 / 0.011008 (-0.004886) | 0.090493 / 0.038508 (0.051985) | 0.070660 / 0.023109 (0.047551) | 0.413486 / 0.275898 (0.137588) | 0.450364 / 0.323480 (0.126884) | 0.010288 / 0.007986 (0.002302) | 0.006590 / 0.004328 (0.002261) | 0.087174 / 0.004250 (0.082923) | 0.077304 / 0.037052 (0.040252) | 0.428480 / 0.258489 (0.169991) | 0.459872 / 0.293841 (0.166032) | 0.060477 / 0.128546 (-0.068069) | 0.014859 / 0.075646 (-0.060788) | 0.103915 / 0.419271 (-0.315356) | 0.087466 / 0.043533 (0.043933) | 0.418644 / 0.255139 (0.163505) | 0.433409 / 0.283200 (0.150209) | 0.166716 / 0.141683 (0.025033) | 1.712068 / 1.452155 (0.259914) | 1.827869 / 1.492716 (0.335153) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.372491 / 0.018006 (0.354484) | 0.493426 / 0.000490 (0.492937) | 0.005497 / 0.000200 (0.005297) | 0.000129 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | 
sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036531 / 0.037411 (-0.000880) | 0.142152 / 0.014526 (0.127626) | 0.148183 / 0.176557 (-0.028373) | 0.212918 / 0.737135 (-0.524217) | 0.154092 / 0.296338 (-0.142246) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.551733 / 0.215209 (0.336524) | 5.421498 / 2.077655 (3.343843) | 2.418848 / 1.504120 (0.914728) | 2.213185 / 1.541195 (0.671991) | 2.294881 / 1.468490 (0.826391) | 0.827031 / 4.584777 (-3.757746) | 6.365622 / 3.745712 (2.619910) | 4.927996 / 5.269862 (-0.341866) | 2.756133 / 4.565676 (-1.809544) | 0.101474 / 0.424275 (-0.322801) | 0.014523 / 0.007607 (0.006916) | 0.619082 / 0.226044 (0.393037) | 6.200132 / 2.268929 (3.931204) | 3.015590 / 55.444624 (-52.429034) | 2.711181 / 6.876477 (-4.165296) | 2.857157 / 2.142072 (0.715084) | 0.993329 / 4.805227 (-3.811898) | 0.203364 / 6.500664 (-6.297301) | 0.079167 / 0.075469 (0.003698) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.709881 / 1.841788 (-0.131907) | 24.867536 / 8.074308 (16.793228) | 21.755361 / 10.191392 (11.563969) | 0.295837 / 0.680424 (-0.384586) | 0.031934 / 0.534201 (-0.502267) | 0.709994 / 0.579283 (0.130711) | 0.779656 / 0.434364 (0.345293) | 0.780669 / 0.540337 (0.240331) | 0.712808 / 1.386936 (-0.674128) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cf4a1951bdca7175adac9c8b85550e89dcceb6fa \"CML watermark\")\n" ]
null
[]
Raise subprocesses traceback when interrupting
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5784/timeline
When a subprocess hangs in `filter` or `map`, one should be able to get the subprocess' traceback when interrupting the main process. Right now it shows nothing. To do so, I `.get()` the subprocesses' async results even when the main process is stopped with e.g. `KeyboardInterrupt`. I added a timeout in case the subprocess is hanging or has crashed.
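A minimal standalone sketch of that pattern (illustrative only, using the stdlib `multiprocessing` rather than the actual `datasets` internals):

```python
import multiprocessing


def collect_results(async_results, timeout=0.05):
    """Get pool results, surfacing subprocess tracebacks on interruption."""
    try:
        return [res.get() for res in async_results]
    except KeyboardInterrupt:
        # Still .get() each result: if a worker crashed, this re-raises its
        # traceback in the main process; the timeout avoids blocking forever
        # on a worker that is merely hanging.
        for res in async_results:
            try:
                res.get(timeout=timeout)
            except multiprocessing.TimeoutError:
                pass
        raise
```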
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5784.diff", "html_url": "https://github.com/huggingface/datasets/pull/5784", "merged_at": "2023-04-26T15:54:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/5784.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5784" }
1,680,950,726
https://api.github.com/repos/huggingface/datasets/issues/5784/comments
PR_kwDODunzps5O_G9S
null
5,784
https://api.github.com/repos/huggingface/datasets/issues/5784/events
true
open
2023-04-22T19:12:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/5783
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5783/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5783/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5066268?v=4", "events_url": "https://api.github.com/users/nishanthcgit/events{/privacy}", "followers_url": "https://api.github.com/users/nishanthcgit/followers", "following_url": "https://api.github.com/users/nishanthcgit/following{/other_user}", "gists_url": "https://api.github.com/users/nishanthcgit/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nishanthcgit", "id": 5066268, "login": "nishanthcgit", "node_id": "MDQ6VXNlcjUwNjYyNjg=", "organizations_url": "https://api.github.com/users/nishanthcgit/orgs", "received_events_url": "https://api.github.com/users/nishanthcgit/received_events", "repos_url": "https://api.github.com/users/nishanthcgit/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nishanthcgit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nishanthcgit/subscriptions", "type": "User", "url": "https://api.github.com/users/nishanthcgit" }
https://github.com/huggingface/datasets/issues/5783
[]
false
2023-09-22T06:44:07Z
null
null
[ "Hi! This looks like an Arrow bug, but it can be avoided by reducing the `writer_batch_size`.\r\n\r\n(`ds = ds.map(get_text_caption, writer_batch_size=100)` in Colab runs without issues)\r\n", "@mariosasko I ran into this problem with load_dataset. What should I do", "@AisingioroHao0 You can also pass the `writer_batch_size` parameter to `load_dataset`, e.g., `load_dataset(\"mnist\", writer_batch_size=100)`", "@mariosasko How do I determine the optimal size of write_batch_size? My training is sometimes fast and sometimes slow. Is it because write_batch_size is too small? Each batch of the current dataloader should be the same size. I preprocessed the dataset using map", "@aihao2000 It's unlikely `writer_batch_size` is the problem. You can use the following code to profile the training loop (e.g., on a subset of data) and find slow parts:\r\n```python\r\nimport cProfile, pstats\r\n\r\nwith cProfile.Profile() as profiler:\r\n ... # training loop code\r\n\r\nstats = pstats.Stats(profiler).sort_stats(\"cumtime\")\r\nstats.print_stats()\r\n```\r\n", "@nishanthcgit ok,thanks.Recently I found dataset.with_transform to be faster and more stable with multiple processes", "@mariosasko Is the larger the num_proc of load_dataset within the number of cpu cores, the better? Then the num_proc of data_loader is the number of cpu cores/number of training processes" ]
null
[]
Offset overflow while doing regex on a text column
NONE
https://api.github.com/repos/huggingface/datasets/issues/5783/timeline
### Describe the bug `ArrowInvalid: offset overflow while concatenating arrays` Same error as [here](https://github.com/huggingface/datasets/issues/615) ### Steps to reproduce the bug (the dataset is a few GB in size, so try Colab) ``` import datasets import re ds = datasets.load_dataset('nishanthc/dnd_map_dataset_v0.1', split='train') def get_text_caption(example): regex_pattern = r'\s\d+x\d+|,\sLQ|,\sgrid|\.\w+$' example['text_caption'] = re.sub(regex_pattern, '', example['picture_text']) return example ds = ds.map(get_text_caption) ``` I am trying to apply a regex to remove certain patterns from a text column; I'm not sure why this error is showing up. ### Expected behavior The dataset should have a new column with the processed text. ### Environment info Datasets version - 2.11.0
https://api.github.com/repos/huggingface/datasets
null
1,679,664,393
https://api.github.com/repos/huggingface/datasets/issues/5783/comments
I_kwDODunzps5kHaUJ
null
5,783
https://api.github.com/repos/huggingface/datasets/issues/5783/events
false
closed
2023-04-22T17:09:25Z
null
https://api.github.com/repos/huggingface/datasets/issues/5782
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5782/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5782/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/129098876?v=4", "events_url": "https://api.github.com/users/BoringDonut/events{/privacy}", "followers_url": "https://api.github.com/users/BoringDonut/followers", "following_url": "https://api.github.com/users/BoringDonut/following{/other_user}", "gists_url": "https://api.github.com/users/BoringDonut/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BoringDonut", "id": 129098876, "login": "BoringDonut", "node_id": "U_kgDOB7HkfA", "organizations_url": "https://api.github.com/users/BoringDonut/orgs", "received_events_url": "https://api.github.com/users/BoringDonut/received_events", "repos_url": "https://api.github.com/users/BoringDonut/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BoringDonut/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BoringDonut/subscriptions", "type": "User", "url": "https://api.github.com/users/BoringDonut" }
https://github.com/huggingface/datasets/issues/5782
[]
false
2023-05-10T20:23:04Z
2023-05-10T20:23:04Z
null
[ "Hi! \r\n\r\nYou can use `set_transform`/`with_transform` to define a custom decoding for audio formats not supported by `soundfile`:\r\n```python\r\naudio_dataset_amr = Dataset.from_dict({\"audio\": [\"audio_samples/audio.amr\"]})\r\n\r\ndef decode_audio(batch):\r\n batch[\"audio\"] = [read_ffmpeg(audio_path) for audio_path in batch[\"audio\"]]\r\n return batch\r\n\r\naudio_dataset_amr.set_transform(decode_amr) \r\n```\r\n\r\nSupporting multiple backends is more work to maintain, but we could consider this if we get more requests such as this one.", "Could it be put somewhere as an example tip or something?", "Considering the number of times a custom decoding transform has been suggested as a solution, an example in the [docs](https://huggingface.co/docs/datasets/process#format-transform) would be nice.\r\n\r\ncc @stevhliu " ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Support for various audio-loading backends instead of always relying on SoundFile
NONE
https://api.github.com/repos/huggingface/datasets/issues/5782/timeline
### Feature request Introduce an option to select from a variety of audio-loading backends rather than solely relying on the SoundFile library. For instance, if the ffmpeg library is installed, it can serve as a fallback loading option. ### Motivation - The SoundFile library, used in [features/audio.py](https://github.com/huggingface/datasets/blob/649d5a3315f9e7666713b6affe318ee00c7163a0/src/datasets/features/audio.py#L185), supports only a [limited number of audio formats](https://pysoundfile.readthedocs.io/en/latest/index.html?highlight=supported#soundfile.available_formats). - However, current methods for creating audio datasets permit the inclusion of audio files in formats not supported by SoundFile. - As a result, developers may potentially create a dataset they cannot read back. In my most recent project, I dealt with phone call recordings in `.amr` or `.gsm` formats and was genuinely surprised when I couldn't read the dataset I had just packaged a minute prior. Nonetheless, I can still accurately read these files using the librosa library, which employs the audioread library that internally leverages ffmpeg to read such files. Example: ```python audio_dataset_amr = Dataset.from_dict({"audio": ["audio_samples/audio.amr"]}).cast_column("audio", Audio()) audio_dataset_amr.save_to_disk("audio_dataset_amr") audio_dataset_amr = Dataset.load_from_disk("audio_dataset_amr") print(audio_dataset_amr[0]) ``` Results in: ``` Traceback (most recent call last): ... raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name)) soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f316323e4d0>: Format not recognised. ``` While I acknowledge that support for these rare file types may not be a priority, I believe it's quite unfortunate that it's possible to create an unreadable dataset in this manner. ### Your contribution I've created a [simple demo repository](https://github.com/BoringDonut/hf-datasets-ffmpeg-audio) that highlights the mentioned issue. It demonstrates how to create an .amr dataset that results in an error when attempting to read it just a few lines later. Additionally, I've made a [fork with a rudimentary solution](https://github.com/BoringDonut/datasets/blob/fea73a8fbbc8876467c7e6422c9360546c6372d8/src/datasets/features/audio.py#L189) that utilizes ffmpeg to load files not supported by SoundFile. Here you can see GitHub Actions fail to read the `.amr` dataset with the current version of `datasets`, but succeed with the patched version: - https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063785 - https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063829 As evident from the GitHub Actions runs above, this solution resolves the previously mentioned problem. I'd be happy to create a proper pull request and provide runtime benchmarks and tests if you could offer some guidance on the following: - Where should I incorporate the ffmpeg (or other backends) code? For example, should I create a new file or simply add a function within the Audio class? - Is it feasible to pass the audio-loading function as an argument within the current architecture? This would be useful if I know in advance that I'll be reading files not supported by SoundFile. A few more notes: - In theory, it's possible to load audio using librosa/audioread since librosa is already expected to be installed.
However, librosa [will soon discontinue audioread support](https://github.com/librosa/librosa/blob/aacb4c134002903ae56bbd4b4a330519a5abacc0/librosa/core/audio.py#L227). Moreover, using audioread on its own seems inconvenient because it requires a file [path as input](https://github.com/beetbox/audioread/blob/ff9535df934c48038af7be9617fdebb12078cc07/audioread/__init__.py#L108) and cannot work with bytes already loaded into memory or an open file descriptor (as mentioned in the [librosa docs](https://librosa.org/doc/main/generated/librosa.load.html#librosa.load), only the SoundFile backend supports an open file descriptor as an input).
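For reference, a hedged sketch of what an ffmpeg-based fallback reader could look like (similar in spirit to the `read_ffmpeg` helper mentioned in the maintainer's suggested transform); it assumes the `ffmpeg` CLI is installed and is not part of `datasets`:

```python
import subprocess

import numpy as np


def read_ffmpeg(path, sampling_rate=16000):
    # Decode any ffmpeg-supported format (e.g. .amr, .gsm) to mono float32
    # PCM on stdout, then wrap it like an Audio feature dict.
    cmd = [
        "ffmpeg", "-i", path,
        "-f", "f32le", "-ac", "1", "-ar", str(sampling_rate),
        "pipe:1",
    ]
    out = subprocess.run(cmd, capture_output=True, check=True).stdout
    return {
        "path": path,
        "array": np.frombuffer(out, dtype=np.float32),
        "sampling_rate": sampling_rate,
    }
```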
https://api.github.com/repos/huggingface/datasets
null
1,679,622,367
https://api.github.com/repos/huggingface/datasets/issues/5782/comments
I_kwDODunzps5kHQDf
null
5,782
https://api.github.com/repos/huggingface/datasets/issues/5782/events
false
closed
2023-04-22T15:10:44Z
null
https://api.github.com/repos/huggingface/datasets/issues/5781
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5781/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5781/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/61463108?v=4", "events_url": "https://api.github.com/users/gjyoungjr/events{/privacy}", "followers_url": "https://api.github.com/users/gjyoungjr/followers", "following_url": "https://api.github.com/users/gjyoungjr/following{/other_user}", "gists_url": "https://api.github.com/users/gjyoungjr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gjyoungjr", "id": 61463108, "login": "gjyoungjr", "node_id": "MDQ6VXNlcjYxNDYzMTA4", "organizations_url": "https://api.github.com/users/gjyoungjr/orgs", "received_events_url": "https://api.github.com/users/gjyoungjr/received_events", "repos_url": "https://api.github.com/users/gjyoungjr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gjyoungjr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gjyoungjr/subscriptions", "type": "User", "url": "https://api.github.com/users/gjyoungjr" }
https://github.com/huggingface/datasets/issues/5781
[]
false
2023-05-02T23:41:25Z
2023-05-02T23:41:25Z
null
[ "It looks like an issue with your installation of scipy, can you try reinstalling it ?", "Sorry for the late reply, but that worked @lhoestq . Thanks for the assist." ]
completed
[]
Error using `load_datasets`
NONE
https://api.github.com/repos/huggingface/datasets/issues/5781/timeline
### Describe the bug I tried to load a dataset using the `datasets` library in a conda jupyter notebook and got the below error. ``` ImportError: dlopen(/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib Referenced from: <65B094A2-59D7-31AC-A966-4DB9E11D2A15> /Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so Reason: tried: '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache) ``` ### Steps to reproduce the bug Run the `load_datasets` function ### Expected behavior I expected the dataset to be loaded into my notebook. ### Environment info name: review_sense channels: - apple - conda-forge dependencies: - python=3.8 - pip>=19.0 - jupyter - tensorflow-deps #- scikit-learn #- scipy - pandas - pandas-datareader - matplotlib - pillow - tqdm - requests - h5py - pyyaml - flask - boto3 - ipykernel - seaborn - pip: - tensorflow-macos==2.9 - tensorflow-metal==0.5.0 - bayesian-optimization - gym - kaggle - huggingface_hub - datasets - numpy - huggingface
https://api.github.com/repos/huggingface/datasets
null
1,679,580,460
https://api.github.com/repos/huggingface/datasets/issues/5781/comments
I_kwDODunzps5kHF0s
null
5,781
https://api.github.com/repos/huggingface/datasets/issues/5781/events
false
closed
2023-04-22T06:22:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/5780
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5780/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5780/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4", "events_url": "https://api.github.com/users/v-yunbin/events{/privacy}", "followers_url": "https://api.github.com/users/v-yunbin/followers", "following_url": "https://api.github.com/users/v-yunbin/following{/other_user}", "gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/v-yunbin", "id": 38179632, "login": "v-yunbin", "node_id": "MDQ6VXNlcjM4MTc5NjMy", "organizations_url": "https://api.github.com/users/v-yunbin/orgs", "received_events_url": "https://api.github.com/users/v-yunbin/received_events", "repos_url": "https://api.github.com/users/v-yunbin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions", "type": "User", "url": "https://api.github.com/users/v-yunbin" }
https://github.com/huggingface/datasets/issues/5780
[]
false
2023-04-23T08:49:18Z
2023-04-23T08:49:18Z
null
[]
completed
[]
TypeError: 'NoneType' object does not support item assignment
NONE
https://api.github.com/repos/huggingface/datasets/issues/5780/timeline
command: ``` def load_datasets(formats, data_dir=datadir, data_files=datafile): dataset = load_dataset(formats, data_dir=datadir, data_files=datafile, split=split, streaming=True, **kwargs) return dataset raw_datasets = DatasetDict() raw_datasets["train"] = load_datasets("csv", args.datadir, "train.csv", split=train_split) raw_datasets["test"] = load_datasets("csv", args.datadir, "dev.csv", split=test_split) raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000)) ``` error: ``` main() File "peft_adalora_whisper_large_training.py", line 502, in main raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000)) File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/datasets/dataset_dict.py", line 2015, in cast_column info.features[column] = feature TypeError: 'NoneType' object does not support item assignment ```
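The traceback shows that `info.features` is `None` for the streaming CSV dataset when `cast_column` is called. A possible workaround (an assumption, not confirmed in this thread) is to pass explicit features at load time so the dataset's schema is known; the column names here are hypothetical:

```python
from datasets import Audio, Features, Value, load_dataset

# Hypothetical schema for the CSV files from the report.
features = Features({
    "id": Value("string"),
    "audio": Value("string"),  # path column, to be cast to Audio below
    "text": Value("string"),
})

ds = load_dataset("csv", data_files="train.csv", split="train",
                  streaming=True, features=features)
ds = ds.remove_columns(["id"])
ds = ds.cast_column("audio", Audio(sampling_rate=16000))
```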
https://api.github.com/repos/huggingface/datasets
null
1,679,367,149
https://api.github.com/repos/huggingface/datasets/issues/5780/comments
I_kwDODunzps5kGRvt
null
5,780
https://api.github.com/repos/huggingface/datasets/issues/5780/events
false
closed
2023-04-21T15:04:28Z
null
https://api.github.com/repos/huggingface/datasets/issues/5779
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5779/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5779/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5779
[]
false
2023-04-26T12:20:01Z
2023-04-26T12:11:15Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007490 / 0.011353 (-0.003862) | 0.004957 / 0.011008 (-0.006051) | 0.096952 / 0.038508 (0.058444) | 0.034125 / 0.023109 (0.011016) | 0.301926 / 0.275898 (0.026028) | 0.330538 / 0.323480 (0.007058) | 0.005999 / 0.007986 (-0.001987) | 0.003948 / 0.004328 (-0.000380) | 0.073024 / 0.004250 (0.068773) | 0.050020 / 0.037052 (0.012967) | 0.299987 / 0.258489 (0.041498) | 0.336077 / 0.293841 (0.042237) | 0.035781 / 0.128546 (-0.092765) | 0.012159 / 0.075646 (-0.063487) | 0.333311 / 0.419271 (-0.085960) | 0.059925 / 0.043533 (0.016392) | 0.297772 / 0.255139 (0.042633) | 0.313447 / 0.283200 (0.030247) | 0.100991 / 0.141683 (-0.040692) | 1.472182 / 1.452155 (0.020027) | 1.553010 / 1.492716 (0.060294) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214222 / 0.018006 (0.196216) | 0.441579 / 0.000490 (0.441090) | 0.001030 / 0.000200 (0.000830) | 0.000194 / 0.000054 (0.000140) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026149 / 0.037411 (-0.011262) | 0.107324 / 0.014526 (0.092798) | 0.113390 / 0.176557 (-0.063167) | 0.170282 / 0.737135 (-0.566854) | 0.120601 / 0.296338 (-0.175737) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411795 / 0.215209 (0.196585) | 4.091412 / 2.077655 (2.013757) | 
1.819597 / 1.504120 (0.315477) | 1.623413 / 1.541195 (0.082218) | 1.658959 / 1.468490 (0.190469) | 0.697671 / 4.584777 (-3.887106) | 3.868855 / 3.745712 (0.123143) | 3.220448 / 5.269862 (-2.049414) | 1.796472 / 4.565676 (-2.769204) | 0.085817 / 0.424275 (-0.338458) | 0.012422 / 0.007607 (0.004815) | 0.520302 / 0.226044 (0.294258) | 5.062477 / 2.268929 (2.793548) | 2.275065 / 55.444624 (-53.169560) | 1.936717 / 6.876477 (-4.939759) | 2.069924 / 2.142072 (-0.072148) | 0.838964 / 4.805227 (-3.966264) | 0.170632 / 6.500664 (-6.330032) | 0.066011 / 0.075469 (-0.009458) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190673 / 1.841788 (-0.651114) | 14.679478 / 8.074308 (6.605169) | 14.099743 / 10.191392 (3.908351) | 0.142556 / 0.680424 (-0.537868) | 0.017601 / 0.534201 (-0.516600) | 0.421301 / 0.579283 (-0.157982) | 0.418035 / 0.434364 (-0.016329) | 0.503799 / 0.540337 (-0.036539) | 0.588809 / 1.386936 (-0.798127) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007556 / 0.011353 (-0.003797) | 0.005283 / 0.011008 (-0.005725) | 0.075616 / 0.038508 (0.037107) | 0.034127 / 0.023109 (0.011018) | 0.345145 / 0.275898 (0.069247) | 0.377490 / 0.323480 (0.054010) | 0.006532 / 0.007986 (-0.001454) | 0.004145 / 0.004328 (-0.000183) | 0.074724 / 0.004250 (0.070473) | 0.048658 / 0.037052 (0.011605) | 0.339989 / 0.258489 (0.081500) | 0.398240 / 0.293841 (0.104399) | 0.037433 / 0.128546 (-0.091114) | 0.012410 / 0.075646 (-0.063237) | 0.088110 / 0.419271 (-0.331162) | 0.050635 / 0.043533 (0.007103) | 0.351878 / 0.255139 (0.096739) | 0.365707 / 0.283200 (0.082508) | 0.104342 / 0.141683 (-0.037341) | 1.438009 / 1.452155 (-0.014145) | 1.533616 / 1.492716 (0.040900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225570 / 0.018006 (0.207563) | 0.442482 / 0.000490 (0.441992) | 0.000402 / 0.000200 (0.000202) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030348 / 0.037411 (-0.007063) | 0.111402 / 0.014526 (0.096877) | 0.123365 / 0.176557 (-0.053192) | 0.175604 / 0.737135 (-0.561531) | 0.128458 / 0.296338 (-0.167881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426054 / 0.215209 (0.210845) | 4.255050 / 2.077655 (2.177395) | 2.039568 / 1.504120 (0.535448) | 1.856842 / 1.541195 (0.315647) | 1.923792 / 1.468490 (0.455301) | 0.701023 / 4.584777 (-3.883754) | 3.746632 / 3.745712 (0.000920) | 2.055563 / 5.269862 (-3.214298) | 1.308068 / 4.565676 (-3.257608) | 0.085524 / 0.424275 (-0.338751) | 0.012103 / 0.007607 (0.004496) | 0.522929 / 0.226044 (0.296885) | 5.258133 / 2.268929 (2.989205) | 2.458440 / 55.444624 (-52.986185) | 2.141681 / 6.876477 (-4.734796) | 2.258667 / 2.142072 (0.116595) | 0.842533 / 4.805227 (-3.962694) | 0.168089 / 6.500664 (-6.332575) | 0.063707 / 0.075469 (-0.011762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.312252 / 1.841788 (-0.529536) | 14.939185 / 8.074308 (6.864877) | 14.479845 / 10.191392 (4.288453) | 0.162557 / 0.680424 (-0.517867) | 0.017660 / 0.534201 (-0.516541) | 0.423261 / 0.579283 (-0.156023) | 0.417693 / 0.434364 (-0.016671) | 0.495440 / 0.540337 (-0.044897) | 0.589932 / 1.386936 (-0.797004) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4e3c86574155961097b367d5cddda5bd13c42b09 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008796 / 0.011353 (-0.002557) | 0.005828 / 0.011008 (-0.005180) | 0.118629 / 0.038508 (0.080121) | 0.042435 / 0.023109 (0.019326) | 0.383780 / 0.275898 (0.107882) | 0.420344 / 0.323480 (0.096864) | 0.006855 / 0.007986 (-0.001130) | 0.006290 / 0.004328 (0.001962) | 0.087160 / 0.004250 (0.082910) | 0.057568 / 0.037052 (0.020516) | 0.378761 / 0.258489 (0.120272) | 0.426496 / 0.293841 (0.132655) | 0.041772 / 0.128546 (-0.086774) | 0.014226 / 0.075646 (-0.061420) | 0.400097 / 0.419271 (-0.019174) | 0.060402 / 0.043533 (0.016870) | 0.381955 / 0.255139 (0.126816) | 0.399110 / 0.283200 (0.115911) | 0.124608 / 0.141683 (-0.017075) | 1.737856 / 1.452155 (0.285702) | 1.829034 / 1.492716 (0.336318) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219941 / 0.018006 (0.201934) | 0.497156 / 0.000490 (0.496666) | 0.005094 / 0.000200 (0.004894) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032144 / 0.037411 (-0.005268) | 0.131782 / 0.014526 (0.117256) | 0.141543 / 0.176557 (-0.035014) | 0.211419 / 0.737135 (-0.525716) | 0.147338 / 0.296338 (-0.149001) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478345 / 0.215209 (0.263136) | 4.749506 / 2.077655 (2.671851) | 2.195794 / 1.504120 (0.691674) | 1.978126 / 1.541195 (0.436932) | 2.059941 / 1.468490 (0.591451) | 0.821959 / 4.584777 (-3.762818) | 5.737479 / 3.745712 (1.991767) | 2.507125 / 5.269862 (-2.762737) | 2.051772 / 4.565676 (-2.513905) | 0.100619 / 0.424275 (-0.323656) | 0.014437 / 0.007607 (0.006830) | 0.599484 / 0.226044 (0.373440) | 5.977579 / 2.268929 (3.708651) | 2.708143 / 55.444624 (-52.736482) | 2.320279 / 6.876477 (-4.556198) | 2.510172 / 2.142072 (0.368100) | 1.006279 / 4.805227 (-3.798948) | 0.199812 / 6.500664 (-6.300853) | 0.077967 / 0.075469 (0.002498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.510171 / 1.841788 (-0.331616) | 21.099446 / 8.074308 (13.025138) | 17.634225 / 10.191392 (7.442833) | 0.223506 / 0.680424 (-0.456918) | 0.023845 / 0.534201 (-0.510356) | 0.613489 / 0.579283 (0.034206) | 0.685735 / 0.434364 (0.251371) | 0.652485 / 0.540337 (0.112148) 
| 0.734756 / 1.386936 (-0.652180) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008444 / 0.011353 (-0.002909) | 0.005789 / 0.011008 (-0.005220) | 0.088297 / 0.038508 (0.049789) | 0.040847 / 0.023109 (0.017737) | 0.411748 / 0.275898 (0.135850) | 0.452320 / 0.323480 (0.128841) | 0.006689 / 0.007986 (-0.001296) | 0.006029 / 0.004328 (0.001701) | 0.086080 / 0.004250 (0.081830) | 0.053310 / 0.037052 (0.016257) | 0.402568 / 0.258489 (0.144079) | 0.459047 / 0.293841 (0.165206) | 0.041203 / 0.128546 (-0.087343) | 0.014216 / 0.075646 (-0.061431) | 0.102729 / 0.419271 (-0.316543) | 0.057170 / 0.043533 (0.013637) | 0.407137 / 0.255139 (0.151998) | 0.429703 / 0.283200 (0.146503) | 0.123528 / 0.141683 (-0.018155) | 1.690026 / 1.452155 (0.237872) | 1.797793 / 1.492716 (0.305077) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264581 / 0.018006 (0.246575) | 0.498981 / 0.000490 (0.498492) | 0.000462 / 0.000200 (0.000262) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034613 / 0.037411 (-0.002798) | 0.136596 / 0.014526 (0.122070) | 0.142183 / 0.176557 (-0.034374) | 0.201816 / 0.737135 (-0.535320) | 0.148843 / 0.296338 (-0.147496) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.506708 / 0.215209 (0.291499) | 5.042829 / 2.077655 (2.965175) | 2.448414 / 1.504120 (0.944295) | 2.213251 / 1.541195 (0.672056) | 2.255805 / 1.468490 
(0.787315) | 0.829929 / 4.584777 (-3.754848) | 5.145717 / 3.745712 (1.400004) | 2.493947 / 5.269862 (-2.775915) | 1.676171 / 4.565676 (-2.889506) | 0.102097 / 0.424275 (-0.322178) | 0.014545 / 0.007607 (0.006938) | 0.635473 / 0.226044 (0.409429) | 6.306767 / 2.268929 (4.037839) | 3.050284 / 55.444624 (-52.394341) | 2.653175 / 6.876477 (-4.223302) | 2.850569 / 2.142072 (0.708496) | 1.355280 / 4.805227 (-3.449947) | 0.248112 / 6.500664 (-6.252552) | 0.091993 / 0.075469 (0.016524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.837509 / 1.841788 (-0.004279) | 21.268838 / 8.074308 (13.194530) | 17.338053 / 10.191392 (7.146660) | 0.232263 / 0.680424 (-0.448161) | 0.029093 / 0.534201 (-0.505108) | 0.651056 / 0.579283 (0.071773) | 0.617623 / 0.434364 (0.183259) | 0.773921 / 0.540337 (0.233584) | 0.705118 / 1.386936 (-0.681818) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#35846fd54fa16aa72ff344d15c98b5e08c5effe0 \"CML watermark\")\n" ]
null
[]
Call fs.makedirs in save_to_disk
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5779/timeline
We need to call `fs.makedirs` when saving a dataset using `save_to_disk`, because some fs implementations have actual directories (S3 and others don't) Close https://github.com/huggingface/datasets/issues/5775
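A minimal sketch of the idea behind the fix (illustrative; see the PR diff for the actual change): resolve the filesystem from the output path and create the destination directory before writing, which is harmless on stores without real directories. The SFTP parameters are taken from the linked issue's example:

```python
import fsspec

# Resolve the filesystem and the in-filesystem path from the URL.
fs, path = fsspec.core.url_to_fs("sftp:///tmp/my_dataset",
                                 host="localhost", username="admin")

# Safe even if the directory already exists, and a no-op on object stores.
fs.makedirs(path, exist_ok=True)
```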
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5779.diff", "html_url": "https://github.com/huggingface/datasets/pull/5779", "merged_at": "2023-04-26T12:11:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/5779.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5779" }
1,678,669,865
https://api.github.com/repos/huggingface/datasets/issues/5779/comments
PR_kwDODunzps5O3sHp
null
5,779
https://api.github.com/repos/huggingface/datasets/issues/5779/events
true
closed
2023-04-21T08:38:12Z
null
https://api.github.com/repos/huggingface/datasets/issues/5778
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5778/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5778/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/902005?v=4", "events_url": "https://api.github.com/users/liujuncn/events{/privacy}", "followers_url": "https://api.github.com/users/liujuncn/followers", "following_url": "https://api.github.com/users/liujuncn/following{/other_user}", "gists_url": "https://api.github.com/users/liujuncn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/liujuncn", "id": 902005, "login": "liujuncn", "node_id": "MDQ6VXNlcjkwMjAwNQ==", "organizations_url": "https://api.github.com/users/liujuncn/orgs", "received_events_url": "https://api.github.com/users/liujuncn/received_events", "repos_url": "https://api.github.com/users/liujuncn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/liujuncn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liujuncn/subscriptions", "type": "User", "url": "https://api.github.com/users/liujuncn" }
https://github.com/huggingface/datasets/issues/5778
[]
false
2023-07-24T15:15:14Z
2023-07-24T15:15:14Z
null
[ "Hi ! Passing `data_files=\"path/test.json\"` is equivalent to `data_files={\"train\": [\"path/test.json\"]}`, that's why you end up with a train split. If you don't pass `data_files=`, then split names are inferred from the data files names" ]
completed
[]
SchrΓΆdinger's dataset_dict
NONE
https://api.github.com/repos/huggingface/datasets/issues/5778/timeline
### Describe the bug If you use load_dataset('json', data_files="path/test.json"), it will return DatasetDict({train:...}). And if you use load_dataset("path"), it will return DatasetDict({test:...}). Why can't the output behavior be unified? ### Steps to reproduce the bug As described above. ### Expected behavior Consistent, predictable output. ### Environment info '2.11.0'
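A small illustration of the behavior explained in the comment above: an explicit `data_files` mapping controls the split name, otherwise it defaults to "train" (the file paths are hypothetical):

```python
from datasets import load_dataset

# Equivalent to data_files="path/test.json": the split defaults to "train".
ds_default = load_dataset("json", data_files={"train": ["path/test.json"]})

# To keep the "test" split name, map it explicitly instead:
ds_explicit = load_dataset("json", data_files={"test": ["path/test.json"]})
```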
https://api.github.com/repos/huggingface/datasets
null
1,678,125,951
https://api.github.com/repos/huggingface/datasets/issues/5778/comments
I_kwDODunzps5kBit_
null
5,778
https://api.github.com/repos/huggingface/datasets/issues/5778/events
false
closed
2023-04-21T02:08:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/5777
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5777/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5777/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/34688597?v=4", "events_url": "https://api.github.com/users/jason-brian-anderson/events{/privacy}", "followers_url": "https://api.github.com/users/jason-brian-anderson/followers", "following_url": "https://api.github.com/users/jason-brian-anderson/following{/other_user}", "gists_url": "https://api.github.com/users/jason-brian-anderson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jason-brian-anderson", "id": 34688597, "login": "jason-brian-anderson", "node_id": "MDQ6VXNlcjM0Njg4NTk3", "organizations_url": "https://api.github.com/users/jason-brian-anderson/orgs", "received_events_url": "https://api.github.com/users/jason-brian-anderson/received_events", "repos_url": "https://api.github.com/users/jason-brian-anderson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jason-brian-anderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jason-brian-anderson/subscriptions", "type": "User", "url": "https://api.github.com/users/jason-brian-anderson" }
https://github.com/huggingface/datasets/issues/5777
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-06-05T05:49:52Z
2023-05-11T11:51:56Z
null
[ "Note:\r\nI listed the datasets and grepped around to find what appears to be an alternative source for this:\r\n\r\nraw_datasets = load_dataset(\"espejelomar/code_search_net_python_10000_examples\", \"python\")", "Thanks for reporting, @jason-brian-anderson.\r\n\r\nYes, this is a known issue: the [CodeSearchNet](https://github.com/github/CodeSearchNet) repo has been archived (Apr 11, 2023) and their source data files are no longer accessible in their S3: e.g. https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip gives 403 Forbidden error. See:\r\n- https://huggingface.co/datasets/code_search_net/discussions/3\r\n\r\nWe have contacted one of the authors of the dataset to find a solution. I'll keep you informed.\r\n\r\nCC: @hamelsmu", "cc: @julianeagu", "This issue is fixed because we are hosting the CodeSearchNet data files in the Hugging Face Hub. See: https://huggingface.co/datasets/code_search_net/discussions/7", "I learned that @mallamanis has uploaded the dataset [here as well](https://zenodo.org/record/7908468) ", "Thanks @hamelsmu for the Zenodo link. I am adding it to the dataset card on the Hugging Face Hub, so that the community knows about this \"official\" source data. I guess this link is not well known, because some community members already hosted an \"unofficial\" version of the data on Zenodo as well: https://zenodo.org/record/7857872\r\n\r\n" ]
completed
[]
datasets.load_dataset("code_search_net", "python") : NotADirectoryError: [Errno 20] Not a directory
NONE
https://api.github.com/repos/huggingface/datasets/issues/5777/timeline
### Describe the bug While checking out the [tokenizer tutorial](https://huggingface.co/course/chapter6/2?fw=pt), I noticed an error while initially downloading the Python dataset used in the examples. The [Colab with the error is here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter6/section2.ipynb#scrollTo=hGb69Yo3eV8S) ``` from datasets import load_dataset import os os.environ["HF_DATASETS_CACHE"] = "/workspace" # This can take a few minutes to load, so grab a coffee or tea while you wait! raw_datasets = load_dataset("code_search_net", "python") ``` yields: ``` File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:524, in xlistdir(path, use_auth_token) 522 main_hop, *rest_hops = _as_str(path).split("::") 523 if is_local_path(main_hop): --> 524 return os.listdir(path) 525 else: 526 # globbing inside a zip in a private repo requires authentication 527 if not rest_hops and (main_hop.startswith("http://") or main_hop.startswith("https://")): NotADirectoryError: [Errno 20] Not a directory: '/workspace/downloads/25ceeb4c25ab737d688bd56ea92bfbb1f199fe572470456cf2d675479f342ac7/python/final/jsonl/train' ``` I was able to reproduce this error both in the Colab and in my own pytorch/pytorch container pulled from the official Docker Hub PyTorch image, so I think it may be a server-side issue. ### Steps to reproduce the bug Steps to reproduce the issue: 1. run `raw_datasets = load_dataset("code_search_net", "python")` ### Expected behavior I expect the code not to raise an exception during the dataset pull. ### Environment info I tried the default HF_DATASETS_CACHE both on Colab and in my local container. I then pointed HF_DATASETS_CACHE to large-capacity local storage, and the problem was consistent across all 3 scenarios.
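The dataset name below is taken verbatim from the first comment in this thread; it is a community mirror, so treat it as an unofficial alternative that was usable while the canonical S3 files were unavailable:

```python
from datasets import load_dataset

raw_datasets = load_dataset("espejelomar/code_search_net_python_10000_examples", "python")
```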
https://api.github.com/repos/huggingface/datasets
null
1,677,655,969
https://api.github.com/repos/huggingface/datasets/issues/5777/comments
I_kwDODunzps5j_v-h
null
5,777
https://api.github.com/repos/huggingface/datasets/issues/5777/events
false
open
2023-04-20T17:15:49Z
null
https://api.github.com/repos/huggingface/datasets/issues/5776
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5776/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5776/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/issues/5776
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
false
2023-04-20T17:15:49Z
null
null
[]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Use Pandas' `read_json` in the JSON builder
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5776/timeline
Instead of PyArrow's `read_json`, we should use `pd.read_json` in the JSON builder for consistency with the CSV and SQL builders (e.g., to address https://github.com/huggingface/datasets/issues/5725). In Pandas 2.0, to get the same performance, we can set the `engine` to "pyarrow". The issue is that Colab still doesn't install Pandas 2.0 by default, so I think it's best to wait for this to be resolved on their side to avoid downgrading decoding performance in scenarios when Pandas 2.0 is not installed.
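A minimal sketch of the proposal, assuming a JSON Lines input file (the pyarrow engine in Pandas only supports `lines=True`):

```python
import pandas as pd

# engine="pyarrow" requires pandas >= 2.0; older versions reject the keyword.
df = pd.read_json("data.jsonl", lines=True, engine="pyarrow")
```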
https://api.github.com/repos/huggingface/datasets
null
1,677,116,100
https://api.github.com/repos/huggingface/datasets/issues/5776/comments
I_kwDODunzps5j9sLE
null
5,776
https://api.github.com/repos/huggingface/datasets/issues/5776/events
false
closed
2023-04-20T16:58:01Z
null
https://api.github.com/repos/huggingface/datasets/issues/5775
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5775/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5775/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29817738?v=4", "events_url": "https://api.github.com/users/Zoupers/events{/privacy}", "followers_url": "https://api.github.com/users/Zoupers/followers", "following_url": "https://api.github.com/users/Zoupers/following{/other_user}", "gists_url": "https://api.github.com/users/Zoupers/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Zoupers", "id": 29817738, "login": "Zoupers", "node_id": "MDQ6VXNlcjI5ODE3NzM4", "organizations_url": "https://api.github.com/users/Zoupers/orgs", "received_events_url": "https://api.github.com/users/Zoupers/received_events", "repos_url": "https://api.github.com/users/Zoupers/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Zoupers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zoupers/subscriptions", "type": "User", "url": "https://api.github.com/users/Zoupers" }
https://github.com/huggingface/datasets/issues/5775
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2023-04-26T12:11:36Z
2023-04-26T12:11:17Z
null
[ "We just fixed this on `main` and will do a new release soon :)" ]
completed
[]
ArrowDataset.save_to_disk lost some logic of remote
NONE
https://api.github.com/repos/huggingface/datasets/issues/5775/timeline
### Describe the bug https://github.com/huggingface/datasets/blob/e7ce0ac60c7efc10886471932854903a7c19f172/src/datasets/arrow_dataset.py#L1371 Here is the bug point, when I want to save from a `DatasetDict` class and the items of the instance is like `[('train', Dataset({features: ..., num_rows: ...}))]` , there is no guarantee that there exists a directory name `train` under `dataset_dict_path`. ### Steps to reproduce the bug 1. Mock a DatasetDict with items like what I said. 2. using save_to_disk with storage_options, u can use local sftp. code may like below ```python from datasets import load_dataset dataset = load_dataset(...) dataset.save_to_disk('sftp:///tmp', storage_options={'host': 'localhost', 'username': 'admin'}) ``` I suppose u can reproduce the bug by these steps. ### Expected behavior Should create the folder if it does not exists, just like we do locally. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-6.2.10-arch1-1-x86_64-with-glibc2.35 - Python version: 3.10.9 - Huggingface_hub version: 0.13.2 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
https://api.github.com/repos/huggingface/datasets
null
1,677,089,901
https://api.github.com/repos/huggingface/datasets/issues/5775/comments
I_kwDODunzps5j9lxt
null
5,775
https://api.github.com/repos/huggingface/datasets/issues/5775/events
false
closed
2023-04-20T13:21:32Z
null
https://api.github.com/repos/huggingface/datasets/issues/5774
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5774/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5774/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/5774
[]
false
2023-04-20T13:34:26Z
2023-04-20T13:24:28Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010336 / 0.011353 (-0.001017) | 0.007085 / 0.011008 (-0.003924) | 0.135577 / 0.038508 (0.097069) | 0.038301 / 0.023109 (0.015192) | 0.427919 / 0.275898 (0.152021) | 0.461451 / 0.323480 (0.137971) | 0.008929 / 0.007986 (0.000944) | 0.005260 / 0.004328 (0.000931) | 0.103481 / 0.004250 (0.099231) | 0.054885 / 0.037052 (0.017833) | 0.434956 / 0.258489 (0.176467) | 0.466915 / 0.293841 (0.173074) | 0.052403 / 0.128546 (-0.076144) | 0.021128 / 0.075646 (-0.054518) | 0.466847 / 0.419271 (0.047576) | 0.085096 / 0.043533 (0.041563) | 0.439935 / 0.255139 (0.184796) | 0.453613 / 0.283200 (0.170413) | 0.123913 / 0.141683 (-0.017769) | 1.930114 / 1.452155 (0.477959) | 2.052083 / 1.492716 (0.559366) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280612 / 0.018006 (0.262606) | 0.583937 / 0.000490 (0.583447) | 0.004542 / 0.000200 (0.004342) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035901 / 0.037411 (-0.001510) | 0.160357 / 0.014526 (0.145831) | 0.141661 / 0.176557 (-0.034896) | 0.234915 / 0.737135 (-0.502220) | 0.164110 / 0.296338 (-0.132228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659901 / 0.215209 (0.444692) | 6.529102 / 2.077655 (4.451447) | 2.635324 / 
1.504120 (1.131204) | 2.275777 / 1.541195 (0.734583) | 2.343205 / 1.468490 (0.874715) | 1.241310 / 4.584777 (-3.343467) | 5.683784 / 3.745712 (1.938072) | 3.377162 / 5.269862 (-1.892700) | 2.176404 / 4.565676 (-2.389273) | 0.144303 / 0.424275 (-0.279972) | 0.016352 / 0.007607 (0.008745) | 0.817383 / 0.226044 (0.591339) | 8.148356 / 2.268929 (5.879428) | 3.489277 / 55.444624 (-51.955347) | 2.848086 / 6.876477 (-4.028391) | 2.973304 / 2.142072 (0.831232) | 1.517821 / 4.805227 (-3.287407) | 0.278794 / 6.500664 (-6.221870) | 0.096385 / 0.075469 (0.020916) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631693 / 1.841788 (-0.210095) | 19.564716 / 8.074308 (11.490408) | 23.583081 / 10.191392 (13.391689) | 0.252363 / 0.680424 (-0.428061) | 0.027644 / 0.534201 (-0.506557) | 0.579634 / 0.579283 (0.000351) | 0.645702 / 0.434364 (0.211338) | 0.667302 / 0.540337 (0.126965) | 0.766425 / 1.386936 (-0.620511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011186 / 0.011353 (-0.000167) | 0.007327 / 0.011008 (-0.003681) | 0.105441 / 0.038508 (0.066933) | 0.040293 / 0.023109 (0.017184) | 0.480557 / 0.275898 (0.204659) | 0.522049 / 0.323480 (0.198569) | 0.007779 / 0.007986 (-0.000207) | 0.007338 / 0.004328 (0.003009) | 0.104744 / 0.004250 (0.100494) | 0.059463 / 0.037052 (0.022411) | 0.494055 / 0.258489 (0.235566) | 0.534340 / 0.293841 (0.240499) | 0.062800 / 0.128546 (-0.065746) | 0.020687 / 0.075646 (-0.054959) | 0.135833 / 0.419271 (-0.283439) | 0.087472 / 0.043533 (0.043939) | 0.465019 / 0.255139 (0.209880) | 0.526713 / 0.283200 (0.243513) | 0.131424 / 0.141683 (-0.010259) | 1.884759 / 1.452155 (0.432605) | 2.015817 / 1.492716 (0.523101) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237032 / 0.018006 (0.219026) | 0.605209 / 0.000490 (0.604719) | 0.006653 / 0.000200 (0.006453) | 0.000264 / 0.000054 (0.000210) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034982 / 0.037411 (-0.002430) | 0.141409 / 0.014526 (0.126883) | 0.151635 / 0.176557 (-0.024922) | 0.217298 / 0.737135 (-0.519837) | 0.171945 / 0.296338 (-0.124393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678596 / 0.215209 (0.463387) | 6.802432 / 2.077655 (4.724777) | 3.021617 / 1.504120 (1.517497) | 2.722508 / 1.541195 (1.181313) | 2.728194 / 1.468490 (1.259704) | 1.245863 / 4.584777 (-3.338914) | 5.762676 / 3.745712 (2.016963) | 5.497855 / 5.269862 (0.227994) | 2.855764 / 4.565676 (-1.709912) | 0.157359 / 0.424275 (-0.266916) | 0.015562 / 0.007607 (0.007955) | 0.865559 / 0.226044 (0.639515) | 8.553052 / 2.268929 (6.284123) | 3.905544 / 55.444624 (-51.539081) | 3.272528 / 6.876477 (-3.603949) | 3.399481 / 2.142072 (1.257408) | 1.540155 / 4.805227 (-3.265072) | 0.275871 / 6.500664 (-6.224793) | 0.092346 / 0.075469 (0.016877) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.753646 / 1.841788 (-0.088142) | 20.074050 / 8.074308 (11.999742) | 23.920391 / 10.191392 (13.728999) | 0.257161 / 0.680424 (-0.423263) | 0.027805 / 0.534201 (-0.506396) | 0.565605 / 0.579283 (-0.013678) | 0.643277 / 0.434364 (0.208914) | 0.633504 / 0.540337 (0.093167) | 0.754317 / 1.386936 (-0.632619) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d34c7968ea1a3fe7d4fa7cdf23673e0354f69ac \"CML watermark\")\n" ]
null
[]
Fix style
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5774/timeline
Fix C419 issues
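For context, C419 is the flake8-comprehensions rule that flags an unnecessary list comprehension inside `any()`/`all()`; a generator expression avoids materializing the list and lets the call short-circuit. A minimal before/after:

```python
values = [-1, 0, 2]

flagged = any([x > 0 for x in values])  # C419: builds the full list first
fixed = any(x > 0 for x in values)      # short-circuits on the first match

assert flagged == fixed
```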
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5774.diff", "html_url": "https://github.com/huggingface/datasets/pull/5774", "merged_at": "2023-04-20T13:24:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/5774.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5774" }
1,676,716,662
https://api.github.com/repos/huggingface/datasets/issues/5774/comments
PR_kwDODunzps5OxIMe
null
5,774
https://api.github.com/repos/huggingface/datasets/issues/5774/events
true
open
2023-04-20T04:37:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/5773
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5773/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5773/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4", "events_url": "https://api.github.com/users/v-yunbin/events{/privacy}", "followers_url": "https://api.github.com/users/v-yunbin/followers", "following_url": "https://api.github.com/users/v-yunbin/following{/other_user}", "gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/v-yunbin", "id": 38179632, "login": "v-yunbin", "node_id": "MDQ6VXNlcjM4MTc5NjMy", "organizations_url": "https://api.github.com/users/v-yunbin/orgs", "received_events_url": "https://api.github.com/users/v-yunbin/received_events", "repos_url": "https://api.github.com/users/v-yunbin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions", "type": "User", "url": "https://api.github.com/users/v-yunbin" }
https://github.com/huggingface/datasets/issues/5773
[]
false
2023-07-19T20:33:13Z
null
null
[ "Thanks for reporting, @v-yunbin.\r\n\r\nCould you please give more details, the steps to reproduce the bug, the complete error back trace and the environment information (`datasets-cli env`)?", "this is a detail error info from transformers:\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 177, in <module>\r\n whisper_finetune(traindir,devdir,outdir)\r\n File \"finetune.py\", line 161, in whisper_finetune\r\n trainer = Seq2SeqTrainer(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer_seq2seq.py\", line 56, in __init__\r\n super().__init__(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer.py\", line 567, in __init__\r\n raise ValueError(\r\nValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.\r\n```\r\n", "How did you create `train_dataset`? The `datasets` library does not appear in your stack trace.\r\n\r\nWe need more information in order to reproduce the issue...", "```\r\ndef asr_dataset(traindir,devdir):\r\n we_voice = IterableDatasetDict()\r\n #we_voice[\"train\"] = load_from_disk(traindir,streaming=True)\r\n #we_voice[\"test\"]= load_from_disk(devdir,streaming=True)\r\n we_voice[\"train\"] = load_dataset(\"csv\",data_files=os.path.join(traindir,\"train.csv\"),split=\"train\",streaming=True)\r\n #print(load_dataset(\"csv\",data_files=os.path.join(traindir,\"train.csv\"),split=\"train\"))\r\n we_voice[\"test\"] = load_dataset(\"csv\",data_files=os.path.join(devdir,\"dev.csv\"), split=\"train\",streaming=True)\r\n we_voice = we_voice.remove_columns([\"id\"])\r\n we_voice = we_voice.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n return we_voice\r\n\r\n```", "As you are using iterable datasets (`streaming=True`), their length is not defined.\r\n\r\nYou should:\r\n- Either use non-iterable datasets, which have a defined length: use `DatasetDict` and not passing `streaming=True`\r\n- Or pass `args.max_steps` to the `Trainer`", "I don't know how to give a reasonable args.max_steps...........................", "Then you should not use streaming.", "@albertvillanova I think @v-yunbin, myself, and others might be slightly confused about max_steps and how it differs from num_train_epochs.", "@lkurlandski A **step** is referring to optimizer's update after back propagation, and it's associated with a batch of data. For example, if a dataset contains 64 examples and you have an overall batch size of 4, then an epoch will have 64/4=16 batches. Therefore, in one epoch, you will have 16 optimizer **steps**." ]
null
[]
train_dataset does not implement __len__
NONE
https://api.github.com/repos/huggingface/datasets/issues/5773/timeline
When training with data preprocessed by the datasets library, I get the following error, which prevents me from setting the number of epochs: `ValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.`
https://api.github.com/repos/huggingface/datasets
null
1,675,984,633
https://api.github.com/repos/huggingface/datasets/issues/5773/comments
I_kwDODunzps5j5X75
null
5,773
https://api.github.com/repos/huggingface/datasets/issues/5773/events
false
closed
2023-04-19T14:32:57Z
null
https://api.github.com/repos/huggingface/datasets/issues/5772
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5772/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5772/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5772
[]
false
2023-04-21T06:45:13Z
2023-04-21T06:35:27Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009262 / 0.011353 (-0.002091) | 0.006157 / 0.011008 (-0.004851) | 0.125960 / 0.038508 (0.087451) | 0.036213 / 0.023109 (0.013104) | 0.399331 / 0.275898 (0.123433) | 0.453597 / 0.323480 (0.130117) | 0.006990 / 0.007986 (-0.000995) | 0.007320 / 0.004328 (0.002991) | 0.100321 / 0.004250 (0.096070) | 0.048870 / 0.037052 (0.011818) | 0.396284 / 0.258489 (0.137795) | 0.475619 / 0.293841 (0.181778) | 0.052329 / 0.128546 (-0.076217) | 0.019564 / 0.075646 (-0.056083) | 0.430942 / 0.419271 (0.011670) | 0.063224 / 0.043533 (0.019692) | 0.391717 / 0.255139 (0.136578) | 0.448342 / 0.283200 (0.165142) | 0.114055 / 0.141683 (-0.027628) | 1.793204 / 1.452155 (0.341049) | 1.895151 / 1.492716 (0.402435) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283699 / 0.018006 (0.265693) | 0.597194 / 0.000490 (0.596704) | 0.007143 / 0.000200 (0.006944) | 0.000602 / 0.000054 (0.000548) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034761 / 0.037411 (-0.002651) | 0.124555 / 0.014526 (0.110030) | 0.149126 / 0.176557 (-0.027430) | 0.220335 / 0.737135 (-0.516801) | 0.153109 / 0.296338 (-0.143229) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620210 / 0.215209 (0.405001) | 6.229937 / 2.077655 (4.152282) | 2.615203 
/ 1.504120 (1.111083) | 2.239337 / 1.541195 (0.698143) | 2.262138 / 1.468490 (0.793648) | 1.196498 / 4.584777 (-3.388279) | 5.609932 / 3.745712 (1.864220) | 3.031347 / 5.269862 (-2.238515) | 2.025530 / 4.565676 (-2.540146) | 0.139828 / 0.424275 (-0.284447) | 0.015476 / 0.007607 (0.007869) | 0.768964 / 0.226044 (0.542920) | 7.728677 / 2.268929 (5.459748) | 3.336407 / 55.444624 (-52.108217) | 2.700055 / 6.876477 (-4.176422) | 2.765223 / 2.142072 (0.623151) | 1.409073 / 4.805227 (-3.396155) | 0.246849 / 6.500664 (-6.253815) | 0.081231 / 0.075469 (0.005762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.593836 / 1.841788 (-0.247952) | 18.020525 / 8.074308 (9.946216) | 21.766822 / 10.191392 (11.575430) | 0.258615 / 0.680424 (-0.421809) | 0.026895 / 0.534201 (-0.507306) | 0.529823 / 0.579283 (-0.049460) | 0.623470 / 0.434364 (0.189106) | 0.628171 / 0.540337 (0.087833) | 0.745249 / 1.386936 (-0.641687) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008624 / 0.011353 (-0.002729) | 0.006317 / 0.011008 (-0.004691) | 0.097315 / 0.038508 (0.058807) | 0.035217 / 0.023109 (0.012108) | 0.440197 / 0.275898 (0.164299) | 0.473863 / 0.323480 (0.150383) | 0.006722 / 0.007986 (-0.001264) | 0.006444 / 0.004328 (0.002116) | 0.102056 / 0.004250 (0.097806) | 0.047142 / 0.037052 (0.010089) | 0.452476 / 0.258489 (0.193986) | 0.487619 / 0.293841 (0.193778) | 0.052456 / 0.128546 (-0.076090) | 0.018735 / 0.075646 (-0.056911) | 0.114656 / 0.419271 (-0.304616) | 0.062577 / 0.043533 (0.019044) | 0.444471 / 0.255139 (0.189332) | 0.494264 / 0.283200 (0.211065) | 0.117112 / 0.141683 (-0.024571) | 1.848965 / 1.452155 (0.396810) | 1.984008 / 1.492716 (0.491292) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290494 / 0.018006 (0.272488) | 0.588415 / 0.000490 (0.587925) | 0.000459 / 0.000200 (0.000259) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032873 / 0.037411 (-0.004538) | 0.131139 / 0.014526 (0.116614) | 0.140268 / 0.176557 (-0.036289) | 0.204561 / 0.737135 (-0.532574) | 0.147443 / 0.296338 (-0.148895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.636899 / 0.215209 (0.421690) | 6.236139 / 2.077655 (4.158484) | 2.801468 / 1.504120 (1.297348) | 2.398808 / 1.541195 (0.857613) | 2.493150 / 1.468490 (1.024659) | 1.228845 / 4.584777 (-3.355932) | 5.675874 / 3.745712 (1.930162) | 3.084939 / 5.269862 (-2.184922) | 2.061310 / 4.565676 (-2.504367) | 0.142285 / 0.424275 (-0.281990) | 0.014972 / 0.007607 (0.007365) | 0.786599 / 0.226044 (0.560555) | 7.876036 / 2.268929 (5.607107) | 3.476136 / 55.444624 (-51.968489) | 2.847922 / 6.876477 (-4.028555) | 3.040326 / 2.142072 (0.898253) | 1.448538 / 4.805227 (-3.356690) | 0.257230 / 6.500664 (-6.243434) | 0.085137 / 0.075469 (0.009668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.668173 / 1.841788 (-0.173615) | 18.668520 / 8.074308 (10.594212) | 20.535542 / 10.191392 (10.344150) | 0.244580 / 0.680424 (-0.435844) | 0.026364 / 0.534201 (-0.507837) | 0.531753 / 0.579283 (-0.047530) | 0.616578 / 0.434364 (0.182214) | 0.618906 / 0.540337 (0.078569) | 0.738785 / 1.386936 (-0.648151) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7265cafa3103d77d6d52aa897088faefcd96659 \"CML watermark\")\n" ]
null
[]
Fix JSON builder when missing keys in first row
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5772/timeline
Until now, the JSON builder only considered the keys present in the first element of the list: - Either explicitly: by passing index 0 in `dataset[0].keys()` - Or implicitly: `pa.Table.from_pylist(dataset)`, where "schema (default None): If not passed, will be inferred from the first row of the mapping values" This PR fixes the bug by considering the union of the keys present in all the rows. Fix #5726.
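A minimal sketch of the union-of-keys idea using hypothetical rows (an illustration of the fix's principle, not the code from the PR):

```python
import pyarrow as pa

rows = [{"a": 1}, {"a": 2, "b": "x"}]  # "b" is missing from the first row

# Inferring the schema from the first row alone would drop column "b".
# Taking the union of keys across all rows keeps it, with nulls filled in.
keys = list({key: None for row in rows for key in row})  # preserves first-seen order
columns = {key: [row.get(key) for row in rows] for key in keys}

table = pa.table(columns)
print(table.column_names)             # ['a', 'b']
print(table.column("b").to_pylist())  # [None, 'x']
```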
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5772.diff", "html_url": "https://github.com/huggingface/datasets/pull/5772", "merged_at": "2023-04-21T06:35:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/5772.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5772" }
1,675,033,510
https://api.github.com/repos/huggingface/datasets/issues/5772/comments
PR_kwDODunzps5OreXV
null
5,772
https://api.github.com/repos/huggingface/datasets/issues/5772/events
true
closed
2023-04-19T12:43:53Z
null
https://api.github.com/repos/huggingface/datasets/issues/5771
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5771/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5771/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eli-osherovich", "id": 2437102, "login": "eli-osherovich", "node_id": "MDQ6VXNlcjI0MzcxMDI=", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "type": "User", "url": "https://api.github.com/users/eli-osherovich" }
https://github.com/huggingface/datasets/issues/5771
[]
false
2023-05-07T17:47:41Z
2023-05-07T17:47:41Z
null
[ "A duplicate of https://github.com/huggingface/datasets/issues/5281" ]
completed
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Support cloud storage for loading datasets
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5771/timeline
### Feature request It seems that the current implementation supports cloud storage only for `load_from_disk`. It would be nice if similar functionality existed in `load_dataset`. ### Motivation The motivation is pretty clear -- let users work with datasets located in the cloud. ### Your contribution I can help implement this.
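For reference, a minimal sketch of the existing `load_from_disk` path with an fsspec-backed filesystem; the bucket path is a placeholder, and `storage_options` availability depends on the installed `datasets` version:

```python
from datasets import load_from_disk

# Hypothetical S3 path; requires s3fs to be installed.
# load_dataset() has no equivalent entry point yet, which is the gap
# this feature request is about.
ds = load_from_disk(
    "s3://my-bucket/my-dataset",
    storage_options={"anon": False},  # forwarded to the fsspec filesystem
)
```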
https://api.github.com/repos/huggingface/datasets
null
1,674,828,380
https://api.github.com/repos/huggingface/datasets/issues/5771/comments
I_kwDODunzps5j09pc
null
5,771
https://api.github.com/repos/huggingface/datasets/issues/5771/events
false
closed
2023-04-18T17:47:53Z
null
https://api.github.com/repos/huggingface/datasets/issues/5770
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5770/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5770/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maddiedawson", "id": 106995444, "login": "maddiedawson", "node_id": "U_kgDOBmCe9A", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "repos_url": "https://api.github.com/users/maddiedawson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "type": "User", "url": "https://api.github.com/users/maddiedawson" }
https://github.com/huggingface/datasets/pull/5770
[]
false
2023-05-17T14:07:32Z
2023-05-17T14:00:38Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi again @lhoestq this is ready for review! Not sure I have permission to add people to the reviewers list...", "Cool ! I think you can define `IterableDataset.from_spark` instead of adding `streaming=` in `Dataset.from_spark`, it can be more intuitive IMO :)", "Thanks for reviewing! I moved the streaming behavior to IterableDataset.from_spark", "Thanks Quentin! I'll flesh out the docs in a follow-up PR", "Friendly ping @lhoestq ", "Thanks @lhoestq ! I fixed the partition order thing and added more unit tests.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006165 / 0.011353 (-0.005188) | 0.004497 / 0.011008 (-0.006511) | 0.099142 / 0.038508 (0.060634) | 0.027479 / 0.023109 (0.004369) | 0.352491 / 0.275898 (0.076593) | 0.402993 / 0.323480 (0.079513) | 0.004885 / 0.007986 (-0.003100) | 0.003315 / 0.004328 (-0.001013) | 0.075787 / 0.004250 (0.071537) | 0.035320 / 0.037052 (-0.001732) | 0.368401 / 0.258489 (0.109912) | 0.409090 / 0.293841 (0.115249) | 0.030125 / 0.128546 (-0.098421) | 0.011670 / 0.075646 (-0.063976) | 0.324381 / 0.419271 (-0.094890) | 0.050815 / 0.043533 (0.007283) | 0.352598 / 0.255139 (0.097460) | 0.389189 / 0.283200 (0.105989) | 0.092873 / 0.141683 (-0.048810) | 1.485140 / 1.452155 (0.032986) | 1.545586 / 1.492716 (0.052869) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199522 / 0.018006 (0.181516) | 0.404576 / 0.000490 (0.404087) | 0.003322 / 0.000200 (0.003122) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022945 / 0.037411 (-0.014466) | 0.095512 / 0.014526 (0.080987) | 0.103077 / 0.176557 (-0.073480) | 0.163918 / 0.737135 (-0.573217) | 0.105560 / 0.296338 (-0.190779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | 
read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417360 / 0.215209 (0.202151) | 4.161693 / 2.077655 (2.084039) | 1.851941 / 1.504120 (0.347821) | 1.649872 / 1.541195 (0.108677) | 1.682099 / 1.468490 (0.213609) | 0.693187 / 4.584777 (-3.891590) | 3.462528 / 3.745712 (-0.283184) | 1.839893 / 5.269862 (-3.429968) | 1.155945 / 4.565676 (-3.409731) | 0.082611 / 0.424275 (-0.341664) | 0.012076 / 0.007607 (0.004469) | 0.514325 / 0.226044 (0.288280) | 5.155052 / 2.268929 (2.886123) | 2.307280 / 55.444624 (-53.137345) | 1.966483 / 6.876477 (-4.909994) | 2.018892 / 2.142072 (-0.123181) | 0.803068 / 4.805227 (-4.002159) | 0.152213 / 6.500664 (-6.348451) | 0.066320 / 0.075469 (-0.009149) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218578 / 1.841788 (-0.623209) | 13.563869 / 8.074308 (5.489561) | 13.954596 / 10.191392 (3.763204) | 0.151527 / 0.680424 (-0.528897) | 0.016655 / 0.534201 (-0.517546) | 0.380637 / 0.579283 (-0.198646) | 0.395854 / 0.434364 (-0.038509) | 0.459111 / 0.540337 (-0.081226) | 0.560219 / 1.386936 (-0.826717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006427 / 0.011353 (-0.004926) | 0.004728 / 0.011008 (-0.006280) | 0.080525 / 0.038508 (0.042017) | 0.027294 / 0.023109 (0.004185) | 0.414688 / 0.275898 (0.138790) | 0.449882 / 0.323480 (0.126402) | 0.004771 / 0.007986 (-0.003214) | 0.003402 / 0.004328 (-0.000926) | 0.078748 / 0.004250 (0.074497) | 0.037046 / 0.037052 (-0.000007) | 0.417398 / 0.258489 (0.158909) | 0.462921 / 0.293841 (0.169080) | 0.030364 / 0.128546 (-0.098182) | 0.011810 / 0.075646 (-0.063837) | 0.089787 / 0.419271 (-0.329485) | 0.039806 / 0.043533 (-0.003727) | 0.403401 / 
0.255139 (0.148262) | 0.439477 / 0.283200 (0.156278) | 0.088431 / 0.141683 (-0.053252) | 1.534373 / 1.452155 (0.082219) | 1.592316 / 1.492716 (0.099600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217701 / 0.018006 (0.199695) | 0.384770 / 0.000490 (0.384280) | 0.000437 / 0.000200 (0.000237) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024952 / 0.037411 (-0.012459) | 0.098728 / 0.014526 (0.084202) | 0.106324 / 0.176557 (-0.070233) | 0.155484 / 0.737135 (-0.581651) | 0.109503 / 0.296338 (-0.186836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450639 / 0.215209 (0.235430) | 4.523110 / 2.077655 (2.445455) | 2.224810 / 1.504120 (0.720690) | 2.119516 / 1.541195 (0.578321) | 2.225192 / 1.468490 (0.756702) | 0.695397 / 4.584777 (-3.889380) | 3.433559 / 3.745712 (-0.312153) | 2.633127 / 5.269862 (-2.636735) | 1.448471 / 4.565676 (-3.117206) | 0.082262 / 0.424275 (-0.342013) | 0.012246 / 0.007607 (0.004639) | 0.561243 / 0.226044 (0.335199) | 5.652711 / 2.268929 (3.383782) | 2.689771 / 55.444624 (-52.754853) | 2.359512 / 6.876477 (-4.516965) | 2.471098 / 2.142072 (0.329026) | 0.802955 / 4.805227 (-4.002272) | 0.151142 / 6.500664 (-6.349522) | 0.067494 / 0.075469 (-0.007975) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306879 / 1.841788 (-0.534909) | 14.030775 / 8.074308 (5.956467) | 12.917790 / 10.191392 (2.726398) | 0.141269 / 0.680424 (-0.539155) | 0.016264 / 0.534201 (-0.517937) | 0.411957 / 0.579283 (-0.167326) | 0.393235 / 0.434364 (-0.041129) | 0.505144 / 0.540337 (-0.035193) | 0.590660 / 1.386936 (-0.796276) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7790ebd7072eafff755fb575b392f3efa74069e4 \"CML watermark\")\n" ]
null
[]
Add IterableDataset.from_spark
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5770/timeline
Follow-up from https://github.com/huggingface/datasets/pull/5701 Related issue: https://github.com/huggingface/datasets/issues/5678
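A minimal usage sketch of the new method; the toy DataFrame is illustrative, not taken from the PR:

```python
from datasets import IterableDataset
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([("hello",), ("world",)], ["text"])

# Stream examples straight from the Spark DataFrame, without first
# materializing an intermediate copy on disk (unlike Dataset.from_spark).
ids = IterableDataset.from_spark(df)
for example in ids:
    print(example)  # {'text': 'hello'}, {'text': 'world'}
```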
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5770.diff", "html_url": "https://github.com/huggingface/datasets/pull/5770", "merged_at": "2023-05-17T14:00:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/5770.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5770" }
1,673,581,555
https://api.github.com/repos/huggingface/datasets/issues/5770/comments
PR_kwDODunzps5OmntV
null
5,770
https://api.github.com/repos/huggingface/datasets/issues/5770/events
true
closed
2023-04-18T16:07:40Z
null
https://api.github.com/repos/huggingface/datasets/issues/5769
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5769/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5769/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4", "events_url": "https://api.github.com/users/markovalexander/events{/privacy}", "followers_url": "https://api.github.com/users/markovalexander/followers", "following_url": "https://api.github.com/users/markovalexander/following{/other_user}", "gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/markovalexander", "id": 22663468, "login": "markovalexander", "node_id": "MDQ6VXNlcjIyNjYzNDY4", "organizations_url": "https://api.github.com/users/markovalexander/orgs", "received_events_url": "https://api.github.com/users/markovalexander/received_events", "repos_url": "https://api.github.com/users/markovalexander/repos", "site_admin": false, "starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions", "type": "User", "url": "https://api.github.com/users/markovalexander" }
https://github.com/huggingface/datasets/issues/5769
[]
false
2023-05-04T18:55:57Z
2023-05-04T18:55:57Z
null
[ "Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure you are using `datasets` version 2.11.0?" ]
completed
[]
Tiktoken tokenizers are not picklable
NONE
https://api.github.com/repos/huggingface/datasets/issues/5769/timeline
### Describe the bug Since the tiktoken tokenizer is not picklable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers picklable in `datasets==2.10.0` for caching. For some reason, this logic does not work in dataset processing and raises `TypeError: cannot pickle 'builtins.CoreBPE' object` ### Steps to reproduce the bug ``` from datasets import load_dataset import tiktoken dataset = load_dataset("stas/openwebtext-10k") enc = tiktoken.get_encoding("gpt2") def process(example): ids = enc.encode(example['text']) ids.append(enc.eot_token) out = {'ids': ids, 'len': len(ids)} return out tokenized = dataset.map( process, remove_columns=['text'], desc="tokenizing the OWT splits", num_proc=2, ) ``` ### Expected behavior The dataset starts processing. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.0-1021-oracle-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.4 - PyArrow version: 9.0.0 - Pandas version: 2.0.0
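A common workaround (an assumption on my part, not from the issue thread) is to instantiate the encoder inside the mapped function, so that no unpicklable `CoreBPE` object is captured in the closure shipped to worker processes; a minimal sketch:

```python
from datasets import load_dataset
import tiktoken

dataset = load_dataset("stas/openwebtext-10k")

def process(example):
    # Created inside the function, so only the (picklable) function is
    # sent to each worker; the CoreBPE object never crosses processes.
    # tiktoken caches encodings in a registry, so repeated calls are cheap.
    enc = tiktoken.get_encoding("gpt2")
    ids = enc.encode(example["text"])
    ids.append(enc.eot_token)
    return {"ids": ids, "len": len(ids)}

tokenized = dataset.map(process, remove_columns=["text"], num_proc=2)
```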
https://api.github.com/repos/huggingface/datasets
null
1,673,441,182
https://api.github.com/repos/huggingface/datasets/issues/5769/comments
I_kwDODunzps5jvq-e
null
5,769
https://api.github.com/repos/huggingface/datasets/issues/5769/events
false
closed
2023-04-18T07:10:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/5768
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5768/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5768/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/57412770?v=4", "events_url": "https://api.github.com/users/yaseen157/events{/privacy}", "followers_url": "https://api.github.com/users/yaseen157/followers", "following_url": "https://api.github.com/users/yaseen157/following{/other_user}", "gists_url": "https://api.github.com/users/yaseen157/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yaseen157", "id": 57412770, "login": "yaseen157", "node_id": "MDQ6VXNlcjU3NDEyNzcw", "organizations_url": "https://api.github.com/users/yaseen157/orgs", "received_events_url": "https://api.github.com/users/yaseen157/received_events", "repos_url": "https://api.github.com/users/yaseen157/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yaseen157/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yaseen157/subscriptions", "type": "User", "url": "https://api.github.com/users/yaseen157" }
https://github.com/huggingface/datasets/issues/5768
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-20T10:27:23Z
2023-04-20T10:27:22Z
null
[ "Thanks for reporting, @yaseen157.\r\n\r\nCould you please give the complete error stack trace?", "I am not able to reproduce your issue: the dataset loads perfectly on my local machine and on a Colab notebook: https://colab.research.google.com/drive/1Fbdoa1JdNz8DOdX6gmIsOK1nCT8Abj4O?usp=sharing\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"squad\")\r\nDownloading builder script: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5.27k/5.27k [00:00<00:00, 3.22MB/s]\r\nDownloading metadata: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.36k/2.36k [00:00<00:00, 1.60MB/s]\r\nDownloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7.67k/7.67k [00:00<00:00, 4.58MB/s]\r\nDownloading and preparing dataset squad/plain_text to ...t/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\nDownloading data: 30.3MB [00:00, 91.8MB/s] | 0/2 [00:00<?, ?it/s]\r\nDownloading data: 4.85MB [00:00, 75.3MB/s] \r\nDownloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 2.31it/s]\r\nExtracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 2157.01it/s]\r\nDataset squad downloaded and prepared to .../.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453. 
Subsequent calls will reuse this data.\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 463.95it/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 10570\r\n })\r\n})\r\n```", "I am at a complete loss for what's happening here. A quick summary, I have 3 machines to try this with:\r\n1) My windows 10 laptop\r\n2) Linux machine1, super computer login node\r\n3) Linux machine2, super computer compute node\r\n\r\nLet's define the following as a test script for the machines:\r\n\r\n```\r\nimport traceback\r\nimport datasets\r\nprint(f\"{datasets.__version__=}\")\r\ntry:\r\n ds = datasets.load_dataset(\"squad\")\r\nexcept:\r\n traceback.print_exc()\r\nelse:\r\n print(\"Success!\")\r\n```\r\n\r\nThe Windows laptop enters some sort of traceback recursion loop:\r\n\r\n> datasets.__version__='2.7.1'\r\n> Downloading and preparing dataset squad/plain_text to C:/Users/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<?, ?it/s]\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 236, in prepare\r\n> _fixup_main_from_path(data['init_main_from_path'])\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 287, in _fixup_main_from_path\r\n> main_content = runpy.run_path(main_path,\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 267, in run_path\r\n> code, fname = _get_code_from_file(run_name, path_name)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 237, in _get_code_from_file\r\n> with io.open_code(decoded_path) as f:\r\n> OSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\yr3g17\\\\OneDrive - University of Southampton\\\\Documents\\\\PhD-repository\\\\<input>'\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n**this error traceback is endlessly recursive**\r\n\r\nThis is a 
brand new issue that started today and I didn't even realise was a thing, as I had been using my windows machine to follow tracebacks for the other machines...\r\n\r\nI suspect this issue had something to do with my filepath naming, but I couldn't confirm this when I spent time trying to debug this myself weeks ago, something to do with files being locked and never released. I'm not too concerned about my laptop not working here because I've had so many issues with Microsoft OneDrive and my filesystem.\r\n\r\nLinux machines 1 and 2 were working fine for months, but have all of a sudden stopped working. Trying to run linux machine 1 (login node), I get:\r\n\r\n> datasets.__version__='2.10.1'\r\n> Downloading and preparing dataset json/squad to /home/yr3g17/.cache/hugg\r\ningface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2\r\nb650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n> Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ\r\nβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 4042.70\r\nit/s]\r\n>Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ\r\nβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 1\r\n11.15it/s]\r\n> Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\n and hangs here. This has not happened to me before on the Linux machine. If I forcefully keyboard interrupt, I get:\r\n \r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 2, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/load.py\", line 1782, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/builder.py\", line 793, in download_and_prepare\r\n> with FileLock(lock_path) if is_local else contextlib.nullcontext():\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 320, in __enter__\r\n> self.acquire()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 282, in acquire\r\n> time.sleep(poll_intervall)\r\n\r\nWhich also appears to be file lock related! I resolved this by navigating to my ~/.cache/huggingface/datasets directory and wiping out anything to do with the squad dataset in *.lock files. 
Now I get:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_load(\"squad\")\r\n\r\n```\r\n> Downloading and preparing dataset squad/plain_text to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb\r\n> 2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 44.75it/s]\r\n> Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 8.54it/s]\r\n> Dataset squad downloaded and prepared to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150\r\n> cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n> 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:00<00:00, 19.77it/s]\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 87599\r\n> })\r\n> validation: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 10570\r\n> })\r\n> })\r\n> \r\n\r\nWhich all seems fine right, it's doing what it should be. But now, without ever leaving the IDE, I \"make a subsequent call\" to reuse the data by repeating the command. 
I encounter the following traceback\r\n\r\n`load_dataset(\"squad\")`\r\n\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n> dataset_module = dataset_module_factory(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1151, in dataset_module_factory\r\n> ).get_module()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 631, in get_module\r\n> data_files = DataFilesDict.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n> DataFilesList.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n> data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 369, in resolve_patterns_locally_or_by_urls\r\n> raise FileNotFoundError(error_msg)\r\n> FileNotFoundError: Unable to resolve any data file that matches '['train[-._ 0-9/]**', '**[-._ 0-9/]train[-._ 0-9/]**', 'training[-._ 0-9/]**', '**[-\r\n> ._ 0-9/]training[-._ 0-9/]**']' at /mainfs/home/yr3g17/.cache/huggingface/datasets/squad with any supported extension ['csv', 'tsv', 'json', 'jsonl',\r\n> 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'gr\r\n> ib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', '\r\n> mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', '\r\n> emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'G\r\n> RIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG',\r\n> 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF',\r\n> 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ir\r\n> cam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'O\r\n> GG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']\r\n\r\nIt doesn't even appear like I can reliably repeat this process. I'll nuke squad files in my dataset cache and run the Python code again (which downloads a new copy of the dataset to cache). 
It will either fail (as it just did in the quote above), or it will successfully recall the dataset.\r\n\r\nI repeated this nuking process a few times until calling load_dataset was reliably giving me the correct result (no filelocking issues or tracebacks). I then sent the test script as a job to the supercomputer compute nodes (which do not have internet access and therefore depend on cached data from Linux machine 1 login nodes)\r\n\r\n> Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810\r\n> ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n> Traceback (most recent call last):\r\n> File \"/mainfs/scratch/yr3g17/squad_qanswering/3054408/0/../../main.py\", line 5, in <module>\r\n> dataset = load_dataset(\"squad\")\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nand I have absolutely no idea why the second and third machines are producing different tracebacks. I have previously run these exact scripts successfully on the login and compute nodes of the supercomputer; this issue I'm raising has appeared fairly recently for me. This is where I encounter the TypeError that I opened this issue with, which I was able to trace back (using my laptop before it too started not working) to whatever was dynamically importing \"builder_cls\". That bit of code wasn't importing builder_cls correctly and would effectively make the assignment \"builder_cls=None\", resulting in the TypeError. Does any of this help?", "I'm back on linux machine 1 (login node) now.
After submitting that as a job to machine 2 and it failing with TypeError, linux machine 1 now produces identical traceback to machine 2:\r\n\r\n> (arkroyal) [yr3g17@cyan52 squad_qanswering]$ python\r\n> Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>\r\n> from datasets import load_dataset\r\n> load_dataset(\"squad\")\r\n>\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nI thought it might be useful to provide you with my cache file structure:\r\n\r\n>_home_yr3g17_.cache_huggingface_datasets_casino_default_1.1.0_302c3b1ac78c48091deabe83a11f4003c7b472a4e11a8eb92799653785bd5da1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_imdb_plain_text_1.0.0_2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_squad_plain_text_1.0.0_d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_yelp_review_full_yelp_review_full_1.0.0_e8e18e19d7be9e75642fc66b198abadb116f73599ec89a69ba5dd8d1e57ba0bf.lock\r\n> casino\r\n> downloads\r\n> imdb\r\n> json\r\n> squad\r\n> squad_v2\r\n> yelp_review_full\r\n\r\nThe inside of squad/plain_text/1.0.0/ looks like\r\n\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.incomplete_info.lock\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453_builder.lock\r\n", "I see this is quite a complex use case...\r\n\r\nLet's try multiple things:\r\n- First, update `datasets` and make sure you use the same version in all machines, so that we can easily compare different behaviors.\r\n ```\r\n pip install -U datasets\r\n ```\r\n- Second, wherever you run the `load_dataset(\"squad\")` command, make sure there is not a local directory named \"squad\". The datasets library gives priority to any local file/directory over the datasets on the Hugging Face Hub\r\n - I tell you this, because in one of your trace backs, it seems it refers to a local directory:\r\n ```\r\n Downloading and preparing dataset json/squad to /home/yr3g17/.cache/huggingface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n ```\r\n- Third, to use the \"squad\" dataset from the Hub, you need to have internet connection, so that you can download the \"squad\" Python loading script from the Hub. Do all your machines have internet connection?\r\n - I ask this because of this error message:\r\n ```\r\n Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n ```\r\n- Fourth, to be sure that we avoid any issues with the cache, it is a good idea to remove it and regenerate it. 
Remove `.cache/huggingface/datasets` and also `.cache/huggingface/modules`\r\n- Fifth, as an additional debugging tool, let's be sure we use the latest \"squad\" Python loading script by passing the revision parameter:\r\n ```\r\n ds = load_dataset(\"squad\", revision=\"5fe18c4c680f9922d794e3f4dd673a751c74ee37\")\r\n ```", "Additionally, we just had an infrastructure issue on the Hugging Face Hub at around 11:30 today. That might have contributed to the connectivity issue... It is fixed now.\r\n\r\nhttps://status.huggingface.co/", "Hi again, thanks for your help and insight, Albert Villanova.\r\n\r\nIt's all working now, so thank you for that. For the benefit of anyone else who ends up in this thread, I solved the problem by addressing Albert's advice:\r\n\r\n(1) Both Windows and Linux machine 1 (which have internet access) can now access the SQuAD dataset. The supercomputer login node can only access version 2.7.1, but my Windows laptop is running on datasets 2.11.0 just fine. I suspect it was just a perfect storm alongside the aforementioned \"infrastructure issue\".\r\n\r\n(2) I did have a local directory called squad, because I was using a local copy of evaluate's \"SQuAD\" metric. The supercomputer compute nodes do not have internet access and treat `metric = evaluate.load('<x>')` as a way of loading a metric at the local path `./<x>/<x>.py`, which could've been a related issue as I was storing the metric under `squad/squad.py`. Don't be lazy like me and store the evaluation code under a path with a name that can be misinterpreted.\r\n\r\n(3) I can't give internet access to the supercomputer compute nodes, so local files do just fine here.\r\n\r\n(4) The Windows machine and Linux machine 1 can both access the internet and were getting fresh copies of the dataset from the Hugging Face Hub. Linux machine 2 was working after I cleared the contents of ~/.cache/huggingface/....\r\n\r\nI feel silly now, knowing it was all so simple! Sorry about that Albert, and thanks again for the help. I've not raised a GitHub issue like this before, so I'm not sure if I should close my own issues or if this is something you guys do?", "Thanks for your detailed feedback, which will surely be useful to other community members." ]
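The debugging advice exchanged in this thread condenses into a short recovery recipe. Below is a minimal sketch of it, assuming the default cache layout under `~/.cache/huggingface` and the "squad" script revision quoted in the comments above:

```python
import shutil
from pathlib import Path

from datasets import load_dataset

# Wipe both caches mentioned in the thread: the prepared data files and the
# dynamically imported dataset scripts (a stale module here is what can make
# builder_cls resolve to None).
cache = Path.home() / ".cache" / "huggingface"
for subdir in ("datasets", "modules"):
    shutil.rmtree(cache / subdir, ignore_errors=True)

# Pin the loading script to a known revision so every machine re-downloads
# and imports the same builder class.
ds = load_dataset("squad", revision="5fe18c4c680f9922d794e3f4dd673a751c74ee37")
print(ds["train"][0])
```

On compute nodes without internet access, the refreshed cache would then have to be copied over from a connected login node before submitting jobs.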
completed
[]
load_dataset("squad") doesn't work in 2.7.1 and 2.10.1
NONE
https://api.github.com/repos/huggingface/datasets/issues/5768/timeline
### Describe the bug  There is an issue that seems to be unique to the "squad" dataset, in which it cannot be loaded using standard methods. This issue is most quickly reproduced from the command line, using the HF examples to verify a dataset is loaded properly.   This is not a problem with the "squad_v2" dataset, for example.  ### Steps to reproduce the bug  cmd line  > $ python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"  OR  Python IDE  > from datasets import load_dataset > load_dataset("squad")   ### Expected behavior  I expected to either see the output described here from running the very same command on the command line ([https://huggingface.co/docs/datasets/installation]), or any output that does not raise Python's TypeError.   There is some funky behaviour in the dataset builder portion of the codebase that means it is either trying to import the squad dataset with an incorrect path, or the squad dataset couldn't be downloaded. I'm not really sure what the problem is beyond that. Messing around with caching, I did manage to get it to load the dataset once, and then couldn't repeat this.  ### Environment info  datasets=2.7.1 **or** 2.10.1, python=3.10.8, Linux 3.10.0-1160.36.2.el7.x86_64 **or** Windows 10-64
https://api.github.com/repos/huggingface/datasets
null
1,672,494,561
https://api.github.com/repos/huggingface/datasets/issues/5768/comments
I_kwDODunzps5jsD3h
null
5,768
https://api.github.com/repos/huggingface/datasets/issues/5768/events
false
closed
2023-04-18T06:25:12Z
null
https://api.github.com/repos/huggingface/datasets/issues/5767
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5767/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5767/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sauravtii", "id": 109907638, "login": "sauravtii", "node_id": "U_kgDOBo0Otg", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "repos_url": "https://api.github.com/users/sauravtii/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "type": "User", "url": "https://api.github.com/users/sauravtii" }
https://github.com/huggingface/datasets/issues/5767
[]
false
2023-04-20T16:52:05Z
2023-04-20T16:52:05Z
null
[ "Closing this one in favor of the same issue opened in the `transformers` repo." ]
completed
[]
How to use Distill-BERT with different datasets?
NONE
https://api.github.com/repos/huggingface/datasets/issues/5767/timeline
### Describe the bug  - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>  ### Steps to reproduce the bug  I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained with the IMDB dataset) with a different dataset (e.g. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)?  ### Expected behavior  Distill-BERT should work with different datasets.  ### Environment info  - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 11.0.0
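For reference, the pattern the quicktour describes — pairing a checkpoint with its own tokenizer and then mapping it over any dataset — looks roughly like the sketch below. It assumes the `yhavinga/imdb_dutch` dataset linked in the question, whose text column is named `text`:

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased"
# The tokenizer must come from the same checkpoint as the model so the
# tokenization rules match what the model was pretrained with.
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("yhavinga/imdb_dutch", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)
```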
https://api.github.com/repos/huggingface/datasets
null
1,672,433,979
https://api.github.com/repos/huggingface/datasets/issues/5767/comments
I_kwDODunzps5jr1E7
null
5,767
https://api.github.com/repos/huggingface/datasets/issues/5767/events
false
open
2023-04-17T15:46:41Z
null
https://api.github.com/repos/huggingface/datasets/issues/5766
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5766/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5766/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/37540982?v=4", "events_url": "https://api.github.com/users/jmontalt/events{/privacy}", "followers_url": "https://api.github.com/users/jmontalt/followers", "following_url": "https://api.github.com/users/jmontalt/following{/other_user}", "gists_url": "https://api.github.com/users/jmontalt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jmontalt", "id": 37540982, "login": "jmontalt", "node_id": "MDQ6VXNlcjM3NTQwOTgy", "organizations_url": "https://api.github.com/users/jmontalt/orgs", "received_events_url": "https://api.github.com/users/jmontalt/received_events", "repos_url": "https://api.github.com/users/jmontalt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jmontalt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmontalt/subscriptions", "type": "User", "url": "https://api.github.com/users/jmontalt" }
https://github.com/huggingface/datasets/issues/5766
[]
false
2023-05-03T21:58:43Z
null
null
[ "Hi ! Interesting :) What kind of new types would you like to use ?\r\n\r\nNote that you can already implement your own decoding by using `set_transform` that can decode data on-the-fly when rows are accessed", "An interesting proposal indeed. \r\n\r\nPandas and Polars have the \"extension API\", so doing something similar on our side could be useful, too. However, this requires defining a common interface for the existing feature types before discussing the API/workflow for defining/sharing custom feature types, and this could take some time.\r\n\r\nIt would also be nice if the datasets viewer could render these custom types.", "Thank you for your replies! @lhoestq I have a use case involving whole-slide images in digital pathology. These are very large images (potentially gigapixel scale), so standard image tools are not suitable. Essentially, encoding/decoding can be done from/to [`OpenSlide`](https://openslide.org/api/python/) objects. Though there may be interest in this use case from the digital pathology community, it may not be sufficiently useful to suggest adding the feature type, but there will likely be many other use cases for a generic custom feature type.\r\n\r\nThank you for pointing out `set_transform`! I will make sure to keep this in mind in the future.\r\n\r\n@mariosasko An \"extension API\" sounds like a good idea, though I understand that this needs to be properly defined, and that you will need to discuss it internally. Support from the viewer would be awesome, too, though the generalization to arbitrary types sounds challenging.\r\n\r\nFor now, happy to know that you're considering the feature. Feel free to let me know if I can do anything to support the process." ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Support custom feature types
NONE
https://api.github.com/repos/huggingface/datasets/issues/5766/timeline
### Feature request I think it would be nice to allow registering custom feature types with the πŸ€— Datasets library. For example, allow to do something along the following lines: ``` from datasets.features import register_feature_type # this would be a new function @register_feature_type class CustomFeatureType: def encode_example(self, value): """User-provided logic to encode an example of this feature.""" pass def decode_example(self, value, token_per_repo_id=None): """User-provided logic to decode an example of this feature.""" pass ``` ### Motivation Users of πŸ€— Datasets, such as myself, may want to use the library to load datasets with unsupported feature types (i.e., beyond `ClassLabel`, `Image`, or `Audio`). This would be useful for prototyping new feature types and for feature types that aren't used widely enough to warrant inclusion in πŸ€— Datasets. At the moment, this is only possible by monkey-patching πŸ€— Datasets, which obfuscates the code and is prone to breaking with library updates. It also requires the user to write some custom code which could be easily avoided. ### Your contribution I would be happy to contribute this feature. My proposed solution would involve changing the following call to `globals()` to an explicit feature type registry, which a user-facing `register_feature_type` decorator could update. https://github.com/huggingface/datasets/blob/fd893098627230cc734f6009ad04cf885c979ac4/src/datasets/features/features.py#L1329 I would also provide an abstract base class for custom feature types which users could inherit. This would have at least an `encode_example` method and a `decode_example` method, similar to `Image` or `Audio`. The existing `encode_nested_example` and `decode_nested_example` functions would also need to be updated to correctly call the corresponding functions for the new type.
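As a rough illustration of the proposal (none of these names exist in `datasets` today — the registry, the decorator, and the feature class are all hypothetical):

```python
# Hypothetical sketch of the proposed registry; not part of the library.
_CUSTOM_FEATURE_TYPES: dict = {}

def register_feature_type(cls):
    """Proposed decorator: make a custom feature type resolvable by name."""
    _CUSTOM_FEATURE_TYPES[cls.__name__] = cls
    return cls

def resolve_feature_type(name):
    # Would replace the bare globals()[name] lookup in features.py.
    return _CUSTOM_FEATURE_TYPES[name]

@register_feature_type
class WholeSlideImage:
    def encode_example(self, value):
        return {"path": str(value)}

    def decode_example(self, value, token_per_repo_id=None):
        return value["path"]  # user-provided decoding would go here
```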
https://api.github.com/repos/huggingface/datasets
null
1,671,485,882
https://api.github.com/repos/huggingface/datasets/issues/5766/comments
I_kwDODunzps5joNm6
null
5,766
https://api.github.com/repos/huggingface/datasets/issues/5766/events
false
open
2023-04-17T15:00:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/5765
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5765/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5765/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sauravtii", "id": 109907638, "login": "sauravtii", "node_id": "U_kgDOBo0Otg", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "repos_url": "https://api.github.com/users/sauravtii/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "type": "User", "url": "https://api.github.com/users/sauravtii" }
https://github.com/huggingface/datasets/issues/5765
[]
false
2023-04-25T13:50:45Z
null
null
[ "You need to remove the `text` and `text_en` columns before passing the dataset to the `DataLoader` to avoid this error:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n```\r\n", "Thanks @mariosasko. Now I am getting this error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"client_2.py\", line 138, in <module>\r\n main()\r\n File \"client_2.py\", line 134, in main\r\n fl.client.start_numpy_client(server_address=\"localhost:8080\", client=IMDBClient())\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 208, in start_numpy_client\r\n start_client(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 142, in start_client\r\n client_message, sleep_duration, keep_going = handle(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py\", line 68, in handle\r\n return _fit(client, server_msg.fit_ins), 0, True\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py\", line 157, in _fit\r\n fit_res = client.fit(fit_ins)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 252, in _fit\r\n results = self.numpy_client.fit(parameters, ins.config) # type: ignore\r\n File \"client_2.py\", line 124, in fit\r\n train(net, trainloader, epochs=1)\r\n File \"client_2.py\", line 78, in train\r\n for batch in trainloader:\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 652, in __next__\r\n data = self._next_data()\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 692, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1525, in __getitem__\r\n return self._getitem(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1517, in _getitem\r\n pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 373, in query_table\r\n pa_subtable = _query_table_with_indices_mapping(table, key, indices=indices)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 55, in _query_table_with_indices_mapping\r\n return _query_table(table, key)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 79, in _query_table\r\n return table.fast_slice(key % table.num_rows, 1)\r\nZeroDivisionError: integer division or modulo by zero\r\n```\r\n\r\nThis is my code:\r\n\r\n```\r\nfrom collections import OrderedDict\r\nimport warnings\r\n\r\nimport flwr as fl\r\nimport torch\r\nimport numpy as np\r\n\r\nimport random\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, load_metric\r\n\r\nfrom transformers import AutoTokenizer, DataCollatorWithPadding\r\nfrom transformers import 
AutoModelForSequenceClassification\r\nfrom transformers import AdamW\r\n#from transformers import tokenized_datasets\r\n\r\n\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\r\n# DEVICE = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\r\n\r\nDEVICE = \"cpu\"\r\n\r\nCHECKPOINT = \"distilbert-base-uncased\" # transformer model checkpoint\r\n\r\n\r\ndef load_data():\r\n \"\"\"Load IMDB data (training and eval)\"\"\"\r\n raw_datasets = load_dataset(\"yhavinga/imdb_dutch\")\r\n raw_datasets = raw_datasets.shuffle(seed=42)\r\n\r\n # remove unnecessary data split\r\n del raw_datasets[\"unsupervised\"]\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True)\r\n\r\n # random 100 samples\r\n population = random.sample(range(len(raw_datasets[\"train\"])), 100)\r\n\r\n tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n tokenized_datasets[\"train\"] = tokenized_datasets[\"train\"].select(population)\r\n tokenized_datasets[\"test\"] = tokenized_datasets[\"test\"].select(population)\r\n\r\n # tokenized_datasets = tokenized_datasets.remove_columns(\"text\")\r\n # tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"attention_mask\")\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"input_ids\")\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"label\")\r\n # tokenized_datasets = tokenized_datasets.remove_columns(\"text_en\")\r\n\r\n # tokenized_datasets = tokenized_datasets.remove_columns(raw_datasets[\"train\"].column_names)\r\n \r\n tokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n \r\n data_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n trainloader = DataLoader(\r\n tokenized_datasets[\"train\"],\r\n shuffle=True,\r\n batch_size=32,\r\n collate_fn=data_collator,\r\n )\r\n\r\n testloader = DataLoader(\r\n tokenized_datasets[\"test\"], batch_size=32, collate_fn=data_collator\r\n )\r\n\r\n return trainloader, testloader\r\n\r\n\r\ndef train(net, trainloader, epochs):\r\n optimizer = AdamW(net.parameters(), lr=5e-4)\r\n net.train()\r\n for _ in range(epochs):\r\n for batch in trainloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n outputs = net(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n\r\n\r\ndef test(net, testloader):\r\n metric = load_metric(\"accuracy\")\r\n loss = 0\r\n net.eval()\r\n for batch in testloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n with torch.no_grad():\r\n outputs = net(**batch)\r\n logits = outputs.logits\r\n loss += outputs.loss.item()\r\n predictions = torch.argmax(logits, dim=-1)\r\n metric.add_batch(predictions=predictions, references=batch[\"labels\"])\r\n loss /= len(testloader.dataset)\r\n accuracy = metric.compute()[\"accuracy\"]\r\n return loss, accuracy\r\n\r\n\r\ndef main():\r\n net = AutoModelForSequenceClassification.from_pretrained(\r\n CHECKPOINT, num_labels=2\r\n ).to(DEVICE)\r\n\r\n trainloader, testloader = load_data()\r\n\r\n # Flower client\r\n class IMDBClient(fl.client.NumPyClient):\r\n def get_parameters(self, config):\r\n return [val.cpu().numpy() for _, val in net.state_dict().items()]\r\n\r\n def set_parameters(self, parameters):\r\n params_dict = zip(net.state_dict().keys(), parameters)\r\n state_dict = OrderedDict({k: torch.Tensor(v) for k, v in 
params_dict})\r\n net.load_state_dict(state_dict, strict=True)\r\n\r\n def fit(self, parameters, config):\r\n self.set_parameters(parameters)\r\n print(\"Training Started...\")\r\n train(net, trainloader, epochs=1)\r\n print(\"Training Finished.\")\r\n return self.get_parameters(config={}), len(trainloader), {}\r\n\r\n def evaluate(self, parameters, config):\r\n self.set_parameters(parameters)\r\n loss, accuracy = test(net, testloader)\r\n return float(loss), len(testloader), {\"accuracy\": float(accuracy)}\r\n\r\n # Start client\r\n fl.client.start_numpy_client(server_address=\"localhost:8080\", client=IMDBClient())\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```", "Please also remove/comment these lines:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"attention_mask\")\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"input_ids\")\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"label\")\r\n```", "Thanks @mariosasko .\r\n\r\nNow, I am trying out this [tutorial](https://flower.dev/docs/quickstart-huggingface.html) which basically trains distil-BERT with IMDB dataset (very similar to this [tutorial](https://huggingface.co/docs/transformers/main/tasks/sequence_classification)). But I don't know why my accuracy isn't increasing even after training for a significant amount of time and also by using the entire dataset. Below I have attached `client.py` file:\r\n\r\n`client.py`:\r\n\r\n```\r\nfrom collections import OrderedDict\r\nimport warnings\r\n\r\nimport flwr as fl\r\nimport torch\r\nimport numpy as np\r\n\r\nimport random\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, load_metric\r\n\r\nfrom transformers import AutoTokenizer, DataCollatorWithPadding\r\nfrom transformers import AutoModelForSequenceClassification\r\nfrom transformers import AdamW\r\n\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\r\n\r\nDEVICE = \"cuda:1\"\r\n\r\nCHECKPOINT = \"distilbert-base-uncased\" # transformer model checkpoint\r\n\r\n\r\ndef load_data():\r\n \"\"\"Load IMDB data (training and eval)\"\"\"\r\n raw_datasets = load_dataset(\"imdb\")\r\n raw_datasets = raw_datasets.shuffle(seed=42)\r\n\r\n # remove unnecessary data split\r\n del raw_datasets[\"unsupervised\"]\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True)\r\n\r\n tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"text\")\r\n tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n\r\n data_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n trainloader = DataLoader(\r\n tokenized_datasets[\"train\"],\r\n shuffle=True,\r\n batch_size=32,\r\n collate_fn=data_collator,\r\n )\r\n\r\n testloader = DataLoader(\r\n tokenized_datasets[\"test\"], batch_size=32, collate_fn=data_collator\r\n )\r\n\r\n return trainloader, testloader\r\n\r\n\r\ndef train(net, trainloader, epochs):\r\n optimizer = AdamW(net.parameters(), lr=5e-5)\r\n net.train()\r\n for i in range(epochs):\r\n print(\"Epoch: \", i+1)\r\n j = 1\r\n print(\"####################### The length of the trainloader is: \", len(trainloader)) \r\n for batch in trainloader:\r\n print(\"####################### The batch number is: \", j)\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n outputs = net(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n 
optimizer.step()\r\n optimizer.zero_grad()\r\n j += 1\r\n\r\n\r\ndef test(net, testloader):\r\n metric = load_metric(\"accuracy\")\r\n loss = 0\r\n net.eval()\r\n for batch in testloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n with torch.no_grad():\r\n outputs = net(**batch)\r\n logits = outputs.logits\r\n loss += outputs.loss.item()\r\n predictions = torch.argmax(logits, dim=-1)\r\n metric.add_batch(predictions=predictions, references=batch[\"labels\"])\r\n loss /= len(testloader.dataset)\r\n accuracy = metric.compute()[\"accuracy\"]\r\n return loss, accuracy\r\n\r\n\r\ndef main():\r\n net = AutoModelForSequenceClassification.from_pretrained(\r\n CHECKPOINT, num_labels=2\r\n ).to(DEVICE)\r\n\r\n trainloader, testloader = load_data()\r\n\r\n # Flower client\r\n class IMDBClient(fl.client.NumPyClient):\r\n def get_parameters(self, config):\r\n return [val.cpu().numpy() for _, val in net.state_dict().items()]\r\n\r\n def set_parameters(self, parameters):\r\n params_dict = zip(net.state_dict().keys(), parameters)\r\n state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})\r\n net.load_state_dict(state_dict, strict=True)\r\n\r\n def fit(self, parameters, config):\r\n self.set_parameters(parameters)\r\n print(\"Training Started...\")\r\n train(net, trainloader, epochs=1)\r\n print(\"Training Finished.\")\r\n return self.get_parameters(config={}), len(trainloader), {}\r\n\r\n def evaluate(self, parameters, config):\r\n self.set_parameters(parameters)\r\n loss, accuracy = test(net, testloader)\r\n print({\"loss\": float(loss), \"accuracy\": float(accuracy)})\r\n return float(loss), len(testloader), {\"loss\": float(loss), \"accuracy\": float(accuracy)}\r\n\r\n # Start client\r\n fl.client.start_numpy_client(server_address=\"localhost:5040\", client=IMDBClient())\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nCan I get any help, please?" ]
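Combining the two fixes from this thread, the preprocessing reduces to the sketch below (assuming the `text`/`text_en`/`label` columns of `yhavinga/imdb_dutch` seen in the error messages):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

train = load_dataset("yhavinga/imdb_dutch", split="train").map(tokenize, batched=True)

# Keep input_ids/attention_mask, drop the raw text columns the collator
# cannot pad, and rename the label column to what the model expects.
train = train.remove_columns(["text", "text_en"]).rename_column("label", "labels")

collator = DataCollatorWithPadding(tokenizer=tokenizer)
trainloader = DataLoader(train, shuffle=True, batch_size=32, collate_fn=collator)
```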
null
[]
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text']
NONE
https://api.github.com/repos/huggingface/datasets/issues/5765/timeline
### Describe the bug Following is my code that I am trying to run, but facing an error (have attached the whole error below): My code: ``` from collections import OrderedDict import warnings import flwr as fl import torch import numpy as np import random from torch.utils.data import DataLoader from datasets import load_dataset, load_metric from transformers import AutoTokenizer, DataCollatorWithPadding from transformers import AutoModelForSequenceClassification from transformers import AdamW #from transformers import tokenized_datasets warnings.filterwarnings("ignore", category=UserWarning) # DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") DEVICE = "cpu" CHECKPOINT = "distilbert-base-uncased" # transformer model checkpoint def load_data(): """Load IMDB data (training and eval)""" raw_datasets = load_dataset("yhavinga/imdb_dutch") raw_datasets = raw_datasets.shuffle(seed=42) # remove unnecessary data split del raw_datasets["unsupervised"] tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT) def tokenize_function(examples): return tokenizer(examples["text"], truncation=True) # random 100 samples population = random.sample(range(len(raw_datasets["train"])), 100) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) tokenized_datasets["train"] = tokenized_datasets["train"].select(population) tokenized_datasets["test"] = tokenized_datasets["test"].select(population) # tokenized_datasets = tokenized_datasets.remove_columns("text") # tokenized_datasets = tokenized_datasets.rename_column("label", "labels") tokenized_datasets = tokenized_datasets.remove_columns("attention_mask") tokenized_datasets = tokenized_datasets.remove_columns("input_ids") tokenized_datasets = tokenized_datasets.remove_columns("label") tokenized_datasets = tokenized_datasets.remove_columns("text_en") # tokenized_datasets = tokenized_datasets.remove_columns(raw_datasets["train"].column_names) data_collator = DataCollatorWithPadding(tokenizer=tokenizer) trainloader = DataLoader( tokenized_datasets["train"], shuffle=True, batch_size=32, collate_fn=data_collator, ) testloader = DataLoader( tokenized_datasets["test"], batch_size=32, collate_fn=data_collator ) return trainloader, testloader def train(net, trainloader, epochs): optimizer = AdamW(net.parameters(), lr=5e-4) net.train() for _ in range(epochs): for batch in trainloader: batch = {k: v.to(DEVICE) for k, v in batch.items()} outputs = net(**batch) loss = outputs.loss loss.backward() optimizer.step() optimizer.zero_grad() def test(net, testloader): metric = load_metric("accuracy") loss = 0 net.eval() for batch in testloader: batch = {k: v.to(DEVICE) for k, v in batch.items()} with torch.no_grad(): outputs = net(**batch) logits = outputs.logits loss += outputs.loss.item() predictions = torch.argmax(logits, dim=-1) metric.add_batch(predictions=predictions, references=batch["labels"]) loss /= len(testloader.dataset) accuracy = metric.compute()["accuracy"] return loss, accuracy def main(): net = AutoModelForSequenceClassification.from_pretrained( CHECKPOINT, num_labels=2 ).to(DEVICE) trainloader, testloader = load_data() # Flower client class IMDBClient(fl.client.NumPyClient): def get_parameters(self, config): return [val.cpu().numpy() for _, val in net.state_dict().items()] def set_parameters(self, parameters): params_dict = zip(net.state_dict().keys(), parameters) state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict}) net.load_state_dict(state_dict, strict=True) def fit(self, parameters, config): 
self.set_parameters(parameters) print("Training Started...") train(net, trainloader, epochs=1) print("Training Finished.") return self.get_parameters(config={}), len(trainloader), {} def evaluate(self, parameters, config): self.set_parameters(parameters) loss, accuracy = test(net, testloader) return float(loss), len(testloader), {"accuracy": float(accuracy)} # Start client fl.client.start_numpy_client(server_address="localhost:8080", client=IMDBClient()) if __name__ == "__main__": main() ``` Error: ``` Traceback (most recent call last): File "client_2.py", line 136, in <module> main() File "client_2.py", line 132, in main fl.client.start_numpy_client(server_address="localhost:8080", client=IMDBClient()) File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 208, in start_numpy_client start_client( File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 142, in start_client client_message, sleep_duration, keep_going = handle( File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 68, in handle return _fit(client, server_msg.fit_ins), 0, True File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 157, in _fit fit_res = client.fit(fit_ins) File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 252, in _fit results = self.numpy_client.fit(parameters, ins.config) # type: ignore File "client_2.py", line 122, in fit train(net, trainloader, epochs=1) File "client_2.py", line 76, in train for batch in trainloader: File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 652, in __next__ data = self._next_data() File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 692, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch return self.collate_fn(data) File "/home/saurav/.local/lib/python3.8/site-packages/transformers/data/data_collator.py", line 221, in __call__ batch = self.tokenizer.pad( File "/home/saurav/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2713, in pad raise ValueError( ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text'] ``` ### Steps to reproduce the bug Run the above code. ### Expected behavior Don't know, doing it for the first time. ### Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 11.0.0
https://api.github.com/repos/huggingface/datasets
null
1,671,388,824
https://api.github.com/repos/huggingface/datasets/issues/5765/comments
I_kwDODunzps5jn16Y
null
5,765
https://api.github.com/repos/huggingface/datasets/issues/5765/events
false
closed
2023-04-17T09:08:18Z
null
https://api.github.com/repos/huggingface/datasets/issues/5764
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5764/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5764/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sauravtii", "id": 109907638, "login": "sauravtii", "node_id": "U_kgDOBo0Otg", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "repos_url": "https://api.github.com/users/sauravtii/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "type": "User", "url": "https://api.github.com/users/sauravtii" }
https://github.com/huggingface/datasets/issues/5764
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-18T07:18:20Z
2023-04-18T07:18:20Z
null
[ "Thanks for reporting, @sauravtii.\r\n\r\nUnfortunately, I'm not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"josianem/imdb\")\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25799\r\n })\r\n test: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25000\r\n })\r\n unsupervised: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 50000\r\n })\r\n})\r\n```\r\n\r\nCould you please retry to load the dataset? Maybe there was a temporary connection issue to Dropbox.", "Thanks @albertvillanova. I am facing another issue now\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 738, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\nThis is my code\r\n\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\")\r\n```", "Your connection didn't work and you got an empty dataset (`num_bytes=0, num_examples=0`):\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: \r\n[\r\n {\r\n 'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }\r\n]\r\n```\r\n\r\nCould you please try the link in your browser and see if it works? 
https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n- If it does not work, you should contact the author of the dataset in their Community tab (https://huggingface.co/datasets/josianem/imdb/discussions) and inform them, so that they can host their data elsewhere, for example on the Hugging Face Hub itself.\r\n\r\nIf the link works, you should try to load the dataset again, forcing the re-download of the data files (so that the cache is refreshed with the actual data file) by passing `download_mode=\"force_redownload\"`:\r\n```python\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```", "After pasting the link in the browser, it did start the download, so it seems that the link is working. But even after including the `download_mode` in my code I am facing the same issue:\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 704, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py\", line 79, in _split_generators\r\n archive = dl_manager.download(_DOWNLOAD_URL)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 197, in map_nested\r\n return function(data_struct)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 289, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 606, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n```\r\n\r\nMy code:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```", "I have tried again to reproduce your issue without success: the dataset loads perfectly, both on my local machine and in a Colab notebook.\r\n- See: https://colab.research.google.com/drive/1dky3T0XGFuldggy22NNQQN-UqOFqvnuY?usp=sharing\r\n\r\nI think the cause might be that you are using a very old version of `datasets`. Please, could you update it and retry?\r\n```\r\npip install -U datasets\r\n```", "That worked!! 
Thanks @albertvillanova : )\r\n\r\n```\r\nDownloading builder script: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4.20k/4.20k [00:00<00:00, 6.69MB/s]\r\nDownloading metadata: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.60k/2.60k [00:00<00:00, 3.41MB/s]\r\nDownloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7.52k/7.52k [00:00<00:00, 12.6MB/s]\r\nDownloading and preparing dataset imdb/plain_text to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f...\r\nDownloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 301M/301M [01:32<00:00, 3.25MB/s]\r\nDataset imdb downloaded and prepared to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f. Subsequent calls will reuse this data.\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 794.83it/s]\r\n```\r\n\r\nThe code I used:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n\r\n```\r\n\r\nBut when I remove `download_mode=\"force_redownload\"` I get the same error. Any guess on that?", "That is because the cache got the \"empty\" download file the first time you tried and got the connection error.\r\n\r\nThen, once you no longer get the connection error, you need to refresh the cache by passing `download_mode=\"force_redownload\"`." ]
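The resolution above fits in a couple of lines. The enum form in the sketch below is equivalent to the `"force_redownload"` string; a plain call without `download_mode` reuses whatever is cached, including an empty file left behind by a failed download:

```python
from datasets import DownloadMode, load_dataset

# Force a fresh download so the cache entry poisoned by the earlier
# connection error is replaced with the real archive.
ds = load_dataset("josianem/imdb", download_mode=DownloadMode.FORCE_REDOWNLOAD)
```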
completed
[]
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
NONE
https://api.github.com/repos/huggingface/datasets/issues/5764/timeline
### Describe the bug I want to use this (https://huggingface.co/datasets/josianem/imdb) dataset therefore I am trying to load it using the following code: ``` dataset = load_dataset("josianem/imdb") ``` The dataset is not getting loaded and gives the error message as the following: ``` Traceback (most recent call last): File "sample.py", line 3, in <module> dataset = load_dataset("josianem/imdb") File "/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare self._download_and_prepare( File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 704, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py", line 79, in _split_generators archive = dl_manager.download(_DOWNLOAD_URL) File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in map_nested return function(data_struct) File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 289, in cached_path output_path = get_from_cache( File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 606, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1 ``` ### Steps to reproduce the bug You can reproduce the error by using the following code: ``` from datasets import load_dataset, load_metric dataset = load_dataset("josianem/imdb") ``` ### Expected behavior The dataset should get loaded (I am using this dataset for the first time so not much aware of the exact behavior). ### Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 11.0.0
https://api.github.com/repos/huggingface/datasets
null
1,670,740,198
https://api.github.com/repos/huggingface/datasets/issues/5764/comments
I_kwDODunzps5jlXjm
null
5,764
https://api.github.com/repos/huggingface/datasets/issues/5764/events
false
closed
2023-04-17T06:03:44Z
null
https://api.github.com/repos/huggingface/datasets/issues/5763
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5763/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5763/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1967608?v=4", "events_url": "https://api.github.com/users/csris/events{/privacy}", "followers_url": "https://api.github.com/users/csris/followers", "following_url": "https://api.github.com/users/csris/following{/other_user}", "gists_url": "https://api.github.com/users/csris/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/csris", "id": 1967608, "login": "csris", "node_id": "MDQ6VXNlcjE5Njc2MDg=", "organizations_url": "https://api.github.com/users/csris/orgs", "received_events_url": "https://api.github.com/users/csris/received_events", "repos_url": "https://api.github.com/users/csris/repos", "site_admin": false, "starred_url": "https://api.github.com/users/csris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/csris/subscriptions", "type": "User", "url": "https://api.github.com/users/csris" }
https://github.com/huggingface/datasets/pull/5763
[]
false
2023-04-17T15:01:53Z
2023-04-17T14:54:46Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006804 / 0.011353 (-0.004549) | 0.004984 / 0.011008 (-0.006024) | 0.096781 / 0.038508 (0.058273) | 0.033049 / 0.023109 (0.009939) | 0.297681 / 0.275898 (0.021783) | 0.329553 / 0.323480 (0.006073) | 0.005697 / 0.007986 (-0.002289) | 0.004019 / 0.004328 (-0.000310) | 0.072691 / 0.004250 (0.068441) | 0.046921 / 0.037052 (0.009868) | 0.311467 / 0.258489 (0.052978) | 0.337616 / 0.293841 (0.043775) | 0.042400 / 0.128546 (-0.086146) | 0.011919 / 0.075646 (-0.063727) | 0.331390 / 0.419271 (-0.087881) | 0.051004 / 0.043533 (0.007471) | 0.295317 / 0.255139 (0.040178) | 0.316570 / 0.283200 (0.033371) | 0.099283 / 0.141683 (-0.042400) | 1.430583 / 1.452155 (-0.021572) | 1.493550 / 1.492716 (0.000834) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213634 / 0.018006 (0.195628) | 0.432557 / 0.000490 (0.432067) | 0.001586 / 0.000200 (0.001386) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025249 / 0.037411 (-0.012162) | 0.105433 / 0.014526 (0.090908) | 0.113474 / 0.176557 (-0.063082) | 0.168799 / 0.737135 (-0.568336) | 0.119363 / 0.296338 (-0.176975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412450 / 0.215209 (0.197241) | 4.117432 / 2.077655 (2.039777) | 
1.935176 / 1.504120 (0.431056) | 1.745674 / 1.541195 (0.204479) | 1.853872 / 1.468490 (0.385382) | 0.703429 / 4.584777 (-3.881348) | 3.756981 / 3.745712 (0.011269) | 3.730607 / 5.269862 (-1.539255) | 1.839052 / 4.565676 (-2.726624) | 0.087574 / 0.424275 (-0.336701) | 0.012293 / 0.007607 (0.004686) | 0.517234 / 0.226044 (0.291190) | 5.189759 / 2.268929 (2.920831) | 2.418739 / 55.444624 (-53.025885) | 2.081424 / 6.876477 (-4.795053) | 2.204464 / 2.142072 (0.062392) | 0.842768 / 4.805227 (-3.962459) | 0.169014 / 6.500664 (-6.331650) | 0.063711 / 0.075469 (-0.011758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180636 / 1.841788 (-0.661152) | 14.816088 / 8.074308 (6.741779) | 14.290085 / 10.191392 (4.098693) | 0.165267 / 0.680424 (-0.515156) | 0.017290 / 0.534201 (-0.516911) | 0.419678 / 0.579283 (-0.159605) | 0.418164 / 0.434364 (-0.016200) | 0.492210 / 0.540337 (-0.048127) | 0.588528 / 1.386936 (-0.798408) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007144 / 0.011353 (-0.004209) | 0.005223 / 0.011008 (-0.005785) | 0.073583 / 0.038508 (0.035075) | 0.033534 / 0.023109 (0.010425) | 0.339020 / 0.275898 (0.063122) | 0.366546 / 0.323480 (0.043066) | 0.006245 / 0.007986 (-0.001741) | 0.004081 / 0.004328 (-0.000247) | 0.073089 / 0.004250 (0.068839) | 0.047024 / 0.037052 (0.009971) | 0.342540 / 0.258489 (0.084051) | 0.379743 / 0.293841 (0.085902) | 0.037551 / 0.128546 (-0.090995) | 0.012246 / 0.075646 (-0.063400) | 0.084796 / 0.419271 (-0.334476) | 0.052256 / 0.043533 (0.008723) | 0.342675 / 0.255139 (0.087536) | 0.367157 / 0.283200 (0.083957) | 0.102939 / 0.141683 (-0.038744) | 1.409039 / 1.452155 (-0.043115) | 1.526137 / 1.492716 (0.033420) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208143 / 0.018006 (0.190136) | 0.437940 / 0.000490 (0.437450) | 0.000424 / 0.000200 (0.000224) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028321 / 0.037411 (-0.009091) | 0.110417 / 0.014526 (0.095891) | 0.119449 / 0.176557 (-0.057107) | 0.168081 / 0.737135 (-0.569054) | 0.126658 / 0.296338 (-0.169681) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429302 / 0.215209 (0.214093) | 4.270547 / 2.077655 (2.192892) | 2.061323 / 1.504120 (0.557203) | 1.857877 / 1.541195 (0.316682) | 1.873317 / 1.468490 (0.404827) | 0.688750 / 4.584777 (-3.896027) | 3.767951 / 3.745712 (0.022239) | 2.011436 / 5.269862 (-3.258426) | 1.299965 / 4.565676 (-3.265712) | 0.084799 / 0.424275 (-0.339476) | 0.012082 / 0.007607 (0.004475) | 0.521981 / 0.226044 (0.295937) | 5.265333 / 2.268929 (2.996405) | 2.494326 / 55.444624 (-52.950298) | 2.144672 / 6.876477 (-4.731804) | 2.365624 / 2.142072 (0.223551) | 0.839868 / 4.805227 (-3.965359) | 0.166614 / 6.500664 (-6.334050) | 0.063804 / 0.075469 (-0.011665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.264623 / 1.841788 (-0.577164) | 14.946515 / 8.074308 (6.872207) | 14.450115 / 10.191392 (4.258723) | 0.163878 / 0.680424 (-0.516546) | 0.017501 / 0.534201 (-0.516700) | 0.420992 / 0.579283 (-0.158291) | 0.423005 / 0.434364 (-0.011359) | 0.489505 / 0.540337 (-0.050832) | 0.594631 / 1.386936 (-0.792305) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fd893098627230cc734f6009ad04cf885c979ac4 \"CML watermark\")\n" ]
null
[]
fix typo: "mow" -> "now"
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5763/timeline
I noticed a typo as I was reading the datasets documentation. This PR contains a trivial fix changing "mow" to "now."
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5763.diff", "html_url": "https://github.com/huggingface/datasets/pull/5763", "merged_at": "2023-04-17T14:54:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/5763.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5763" }
1,670,476,302
https://api.github.com/repos/huggingface/datasets/issues/5763/comments
PR_kwDODunzps5OcMI7
null
5,763
https://api.github.com/repos/huggingface/datasets/issues/5763/events
true
closed
2023-04-17T03:09:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/5762
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5762/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5762/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4", "events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}", "followers_url": "https://api.github.com/users/surya-narayanan/followers", "following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}", "gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/surya-narayanan", "id": 17240858, "login": "surya-narayanan", "node_id": "MDQ6VXNlcjE3MjQwODU4", "organizations_url": "https://api.github.com/users/surya-narayanan/orgs", "received_events_url": "https://api.github.com/users/surya-narayanan/received_events", "repos_url": "https://api.github.com/users/surya-narayanan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions", "type": "User", "url": "https://api.github.com/users/surya-narayanan" }
https://github.com/huggingface/datasets/issues/5762
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-17T09:37:27Z
2023-04-17T09:37:27Z
null
[ "Thanks for reporting, @surya-narayanan.\r\n\r\nI see you already started a discussion about this on the Community tab of the corresponding dataset: https://huggingface.co/datasets/EleutherAI/the_pile/discussions/10\r\nLet's continue the discussion there!" ]
completed
[]
Not able to load the pile
NONE
https://api.github.com/repos/huggingface/datasets/issues/5762/timeline
### Describe the bug I got this error when trying to load The Pile dataset: ``` TypeError: Couldn't cast array of type struct<file: string, id: string> to {'id': Value(dtype='string', id=None)} ``` ### Steps to reproduce the bug Please visit the following sample notebook https://colab.research.google.com/drive/1JHcjawcHL6QHhi5VcqYd07W2QCEj2nWK#scrollTo=ulJP3eJCI-tB ### Expected behavior The Pile should load successfully. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.13.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
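A hedged workaround while the schema clash is triaged: the traceback suggests one shard's `meta` struct carries an extra `file` field, so loading a single subset at a time can sidestep the cross-shard cast. The config name below is an assumption for illustration only.

```python
from datasets import load_dataset

# Hypothetical workaround: load one subset so mismatched "meta" structs from
# different shards never need casting to a common schema.
# "enron_emails" is an assumed config name, used only for illustration.
subset = load_dataset("EleutherAI/the_pile", "enron_emails", streaming=True)
print(next(iter(subset["train"])))
```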
https://api.github.com/repos/huggingface/datasets
null
1,670,326,470
https://api.github.com/repos/huggingface/datasets/issues/5762/comments
I_kwDODunzps5jjyjG
null
5,762
https://api.github.com/repos/huggingface/datasets/issues/5762/events
false
open
2023-04-16T16:21:55Z
null
https://api.github.com/repos/huggingface/datasets/issues/5761
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5761/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5761/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/69686152?v=4", "events_url": "https://api.github.com/users/blghtr/events{/privacy}", "followers_url": "https://api.github.com/users/blghtr/followers", "following_url": "https://api.github.com/users/blghtr/following{/other_user}", "gists_url": "https://api.github.com/users/blghtr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/blghtr", "id": 69686152, "login": "blghtr", "node_id": "MDQ6VXNlcjY5Njg2MTUy", "organizations_url": "https://api.github.com/users/blghtr/orgs", "received_events_url": "https://api.github.com/users/blghtr/received_events", "repos_url": "https://api.github.com/users/blghtr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/blghtr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blghtr/subscriptions", "type": "User", "url": "https://api.github.com/users/blghtr" }
https://github.com/huggingface/datasets/issues/5761
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-19T11:53:24Z
null
null
[ "Also, when generated from a zip archive, the dataset contains only a few images. In my case, 20 versus 2000+ contained in the archive. The generation from folders works as expected.", "Thanks for reporting, @blghtr.\r\n\r\nYou should include the `metadata.jsonl` in your ZIP archives, at the root level directory.\r\n\r\nI agree that our documentation is not clear enough. Maybe we could improve it.", "You can find a dummy dataset example here: https://huggingface.co/datasets/albertvillanova/tmp-imagefolder-metadata\r\n\r\n```\r\ntmp-imagefolder-metadata/\r\n└── data/\r\n β”œβ”€β”€ train.zip\r\n └── valid.zip\r\n```\r\nwhere, the directory structure within the `train.zip` archive is:\r\n```\r\nmetadata.jsonl\r\ntrain/\r\n β”œβ”€β”€ bharatanatyam/\r\n └── bharatanatyam_original_113.jpg_70c297a2-e2f2-4ed8-b93c-0c03d0809fe2.jpg\r\n └── kathak/\r\n └── kathak_original_10.jpg_2c4a2c3d-47fc-4b33-9c09-38b542826632.jpg\r\n```\r\nand the metadata file contains:\r\n```\r\n{\"file_name\": \"train/bharatanatyam/bharatanatyam_original_113.jpg_70c297a2-e2f2-4ed8-b93c-0c03d0809fe2.jpg\", \"text\": \"first\"}\r\n{\"file_name\": \"train/kathak/kathak_original_10.jpg_2c4a2c3d-47fc-4b33-9c09-38b542826632.jpg\", \"text\": \"second\"}\r\n```" ]
null
[]
One or several metadata.jsonl were found, but not in the same directory or in a parent directory
NONE
https://api.github.com/repos/huggingface/datasets/issues/5761/timeline
### Describe the bug An attempt to generate a dataset from a zip archive using imagefolder and metadata.jsonl does not lead to the expected result. Tried all possible locations of the json file: the file in the archive is ignored (generated dataset contains only images), the file next to the archive like [here](https://huggingface.co/docs/datasets/image_dataset#imagefolder) leads to an error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1610, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1609 _time = time.time() -> 1610 for key, record in generator: 1611 if max_shard_size is not None and writer._num_bytes > max_shard_size: File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\packaged_modules\folder_based_builder\folder_based_builder.py:370, in FolderBasedBuilder._generate_examples(self, files, metadata_files, split_name, add_metadata, add_labels) 369 else: --> 370 raise ValueError( 371 f"One or several metadata.{metadata_ext} were found, but not in the same directory or in a parent directory of {downloaded_dir_file}." 372 ) 373 if metadata_dir is not None and downloaded_metadata_file is not None: ValueError: One or several metadata.jsonl were found, but not in the same directory or in a parent directory of C:\Users\User\.cache\huggingface\datasets\downloads\extracted\f7fb7de25fb28ae63089974524f2d271a39d83888bc456d04aa3b3d45f33e6a6\ff0745a0-a741-4d9e-b228-a93b851adf61.png. The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Cell In[3], line 1 ----> 1 dataset = load_dataset("imagefolder", data_dir=r'C:\Users\User\data') File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\load.py:1791, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 1788 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1790 # Download and prepare data -> 1791 builder_instance.download_and_prepare( 1792 download_config=download_config, 1793 download_mode=download_mode, 1794 verification_mode=verification_mode, 1795 try_from_hf_gcs=try_from_hf_gcs, 1796 num_proc=num_proc, 1797 storage_options=storage_options, 1798 ) 1800 # Build dataset for splits 1801 keep_in_memory = ( 1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1803 ) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:891, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 889 if num_proc is not None: 890 prepare_split_kwargs["num_proc"] = num_proc --> 891 self._download_and_prepare( 892 dl_manager=dl_manager, 893 verification_mode=verification_mode, 894 **prepare_split_kwargs, 895 **download_and_prepare_kwargs, 896 ) 897 # Sync info 898 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1651, in 
GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1651 super()._download_and_prepare( 1652 dl_manager, 1653 verification_mode, 1654 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS 1655 or verification_mode == VerificationMode.ALL_CHECKS, 1656 **prepare_splits_kwargs, 1657 ) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:986, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 982 split_dict.add(split_generator.split_info) 984 try: 985 # Prepare split will record examples associated to the split --> 986 self._prepare_split(split_generator, **prepare_split_kwargs) 987 except OSError as e: 988 raise OSError( 989 "Cannot find data file. " 990 + (self.manual_download_instructions or "") 991 + "\nOriginal error:\n" 992 + str(e) 993 ) from None File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1490, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1488 gen_kwargs = split_generator.gen_kwargs 1489 job_id = 0 -> 1490 for job_id, done, content in self._prepare_split_single( 1491 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1492 ): 1493 if done: 1494 result = content File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1646, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1644 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1645 e = e.__context__ -> 1646 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1648 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug 1. Organize directory structure like in the docs: folder/metadata.jsonl folder/train.zip 2. Run load_dataset("imagefolder", data_dir='folder/metadata.jsonl', split='train') ### Expected behavior Dataset generated with all additional features from metadata.jsonl ### Environment info - `datasets` version: 2.11.0 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.9.0 - Huggingface_hub version: 0.13.4 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
https://api.github.com/repos/huggingface/datasets
null
1,670,034,582
https://api.github.com/repos/huggingface/datasets/issues/5761/comments
I_kwDODunzps5jirSW
null
5,761
https://api.github.com/repos/huggingface/datasets/issues/5761/events
false
open
2023-04-16T16:01:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/5760
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5760/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5760/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4", "events_url": "https://api.github.com/users/vvvm23/events{/privacy}", "followers_url": "https://api.github.com/users/vvvm23/followers", "following_url": "https://api.github.com/users/vvvm23/following{/other_user}", "gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vvvm23", "id": 44398246, "login": "vvvm23", "node_id": "MDQ6VXNlcjQ0Mzk4MjQ2", "organizations_url": "https://api.github.com/users/vvvm23/orgs", "received_events_url": "https://api.github.com/users/vvvm23/received_events", "repos_url": "https://api.github.com/users/vvvm23/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions", "type": "User", "url": "https://api.github.com/users/vvvm23" }
https://github.com/huggingface/datasets/issues/5760
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
false
2023-11-30T12:06:20Z
null
null
[ "Supporting this could be useful (I remember a use-case for this on the Hub). Do you agree @polinaeterna? \r\n\r\nImplementing this should be possible if we iterate over metadata files and build image/audio file paths instead of iterating over image/audio files and looking for the corresponding entries in metadata files.", "I've build a similar feature from scratch and would be interested to combine it with the datasets package.\r\n\r\nMy solution works something like this:\r\nInterpret the first element of each column as a file path. If the path exists and is a file, (try to) load the files for the entire column. Thereby, one isn't restricted to a particular column name, with comes in handy when dealing with multiple file columns.\r\n\r\nI've looked into the code to try to implement this, but didn't find the right places. I'm also open to contribute, but will need some guidance.", "Required here: https://discuss.huggingface.co/t/dataset-repo-requires-arbitrary-python-code-execution/59346/14" ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Multi-image loading in Imagefolder dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/5760/timeline
### Feature request Extend the `imagefolder` dataloading script to support loading multiple images per dataset entry. This only really makes sense if a metadata file is present. Currently you can use the following format (example `metadata.jsonl`): ``` {'file_name': 'path_to_image.png', 'metadata': ...} ... ``` which will return a batch with key `image` and any other metadata. I would propose extending `file_name` to also accept a list of files, which would return a batch with key `images` and any other metadata. ### Motivation This is useful, for example, in segmentation tasks in computer vision models, or in text-to-image models that also accept conditioning signals such as another image, feature map, or similar. Currently, if I want to do this, I would need to write a custom dataset rather than just use `imagefolder`. ### Your contribution Would be open to doing a PR, but also happy for someone else to take it as I am not familiar with the datasets library.
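Until such support lands, a workaround consistent with today's API is to build the dataset manually and cast each path column to `Image`. Column names and file paths below are placeholders.

```python
from datasets import Dataset, Image

# Sketch of a manual multi-image dataset; "image"/"conditioning" and the
# paths are placeholders for illustration.
records = {
    "image": ["imgs/0.png", "imgs/1.png"],
    "conditioning": ["maps/0.png", "maps/1.png"],
    "caption": ["a cat", "a dog"],
}
ds = Dataset.from_dict(records)
# cast_column decodes the path strings as images on access:
ds = ds.cast_column("image", Image()).cast_column("conditioning", Image())
```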
https://api.github.com/repos/huggingface/datasets
null
1,670,028,072
https://api.github.com/repos/huggingface/datasets/issues/5760/comments
I_kwDODunzps5jipso
null
5,760
https://api.github.com/repos/huggingface/datasets/issues/5760/events
false
open
2023-04-16T13:50:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/5759
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5759/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5759/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/72137647?v=4", "events_url": "https://api.github.com/users/LZY-the-boys/events{/privacy}", "followers_url": "https://api.github.com/users/LZY-the-boys/followers", "following_url": "https://api.github.com/users/LZY-the-boys/following{/other_user}", "gists_url": "https://api.github.com/users/LZY-the-boys/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LZY-the-boys", "id": 72137647, "login": "LZY-the-boys", "node_id": "MDQ6VXNlcjcyMTM3NjQ3", "organizations_url": "https://api.github.com/users/LZY-the-boys/orgs", "received_events_url": "https://api.github.com/users/LZY-the-boys/received_events", "repos_url": "https://api.github.com/users/LZY-the-boys/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LZY-the-boys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LZY-the-boys/subscriptions", "type": "User", "url": "https://api.github.com/users/LZY-the-boys" }
https://github.com/huggingface/datasets/issues/5759
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-19T12:04:36Z
null
null
[ "Thanks for reporting, @LZY-the-boys.\r\n\r\nCould you please give more details about what is your intended dataset structure? What are the names of the columns and the value of each row?\r\n\r\nCurrently, the JSON-Lines format is supported:\r\n- Each line correspond to one row of the dataset\r\n- Each line is composed of one JSON object, where the names are the names of the columns, and the values are the values for the row-column pair." ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Can I load in list of list of dict format?
NONE
https://api.github.com/repos/huggingface/datasets/issues/5759/timeline
### Feature request My JSONL dataset has the following format: ``` [{'input':xxx, 'output':xxx},{'input':xxx, 'output':xxx},...] [{'input':xxx, 'output':xxx},{'input':xxx, 'output':xxx},...] ``` When I try to use `datasets.load_dataset('json', data_files=path)` or `datasets.Dataset.from_json`, it raises ``` File "site-packages/datasets/arrow_dataset.py", line 1078, in from_json ).read() File "site-packages/datasets/io/json.py", line 59, in read self.builder.download_and_prepare( File "site-packages/datasets/builder.py", line 872, in download_and_prepare self._download_and_prepare( File "site-packages/datasets/builder.py", line 967, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "site-packages/datasets/builder.py", line 1749, in _prepare_split for job_id, done, content in self._prepare_split_single( File "site-packages/datasets/builder.py", line 1892, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Motivation I want to use features like `Dataset.map` or `Dataset.shuffle`, so I need the in-memory dataset to be in `arrow_dataset.Dataset` format ### Your contribution PR
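A workaround that keeps `map`/`shuffle` available today: flatten each JSON-array line yourself and build the dataset from the resulting rows. The file path is a placeholder.

```python
import json
from datasets import Dataset

# Each line of the file is a JSON *list* of {"input": ..., "output": ...}
# objects, so flatten the lists into one row per object.
rows = []
with open("data.jsonl", encoding="utf-8") as f:  # "data.jsonl" is a placeholder
    for line in f:
        rows.extend(json.loads(line))

ds = Dataset.from_list(rows)
ds = ds.shuffle(seed=42)  # map/shuffle now work as usual
```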
https://api.github.com/repos/huggingface/datasets
null
1,669,977,848
https://api.github.com/repos/huggingface/datasets/issues/5759/comments
I_kwDODunzps5jidb4
null
5,759
https://api.github.com/repos/huggingface/datasets/issues/5759/events
false
closed
2023-04-16T11:56:01Z
null
https://api.github.com/repos/huggingface/datasets/issues/5758
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5758/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5758/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eli-osherovich", "id": 2437102, "login": "eli-osherovich", "node_id": "MDQ6VXNlcjI0MzcxMDI=", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "type": "User", "url": "https://api.github.com/users/eli-osherovich" }
https://github.com/huggingface/datasets/pull/5758
[]
false
2023-04-20T15:37:49Z
2023-04-20T15:30:48Z
null
[ "The CI can be fixed by merging `main` into your branch. Can you do that before we merge ?", "_The documentation is not available anymore as the PR was closed or merged._", "Done.\n\nOn Thu, Apr 20, 2023 at 6:01β€―PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The CI can be fixed by merging main into your branch. Can you do that\n> before we merge ?\n>\n> β€”\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/5758#issuecomment-1516488124>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AASS73QPLA735AMN4PFDYRTXCFFTJANCNFSM6AAAAAAXACBUQU>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n", "Nice thanks !", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007161 / 0.011353 (-0.004192) | 0.005099 / 0.011008 (-0.005909) | 0.099301 / 0.038508 (0.060793) | 0.034144 / 0.023109 (0.011034) | 0.298273 / 0.275898 (0.022375) | 0.329009 / 0.323480 (0.005529) | 0.005486 / 0.007986 (-0.002500) | 0.003887 / 0.004328 (-0.000441) | 0.074769 / 0.004250 (0.070518) | 0.047505 / 0.037052 (0.010453) | 0.306550 / 0.258489 (0.048061) | 0.335380 / 0.293841 (0.041540) | 0.034796 / 0.128546 (-0.093750) | 0.012152 / 0.075646 (-0.063495) | 0.332194 / 0.419271 (-0.087077) | 0.049661 / 0.043533 (0.006128) | 0.296832 / 0.255139 (0.041693) | 0.316417 / 0.283200 (0.033218) | 0.098234 / 0.141683 (-0.043449) | 1.494114 / 1.452155 (0.041959) | 1.566468 / 1.492716 (0.073751) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221309 / 0.018006 (0.203303) | 0.440855 / 0.000490 (0.440365) | 0.003025 / 0.000200 (0.002825) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026594 / 0.037411 (-0.010817) | 0.110406 / 0.014526 (0.095880) | 0.116117 / 0.176557 (-0.060439) | 0.173502 / 0.737135 (-0.563633) | 0.121988 / 0.296338 (-0.174351) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 
| read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403307 / 0.215209 (0.188098) | 4.034146 / 2.077655 (1.956492) | 1.852162 / 1.504120 (0.348042) | 1.675643 / 1.541195 (0.134448) | 1.748851 / 1.468490 (0.280360) | 0.703458 / 4.584777 (-3.881319) | 3.809055 / 3.745712 (0.063343) | 2.118060 / 5.269862 (-3.151801) | 1.338394 / 4.565676 (-3.227282) | 0.086319 / 0.424275 (-0.337956) | 0.012195 / 0.007607 (0.004588) | 0.520814 / 0.226044 (0.294769) | 5.201074 / 2.268929 (2.932145) | 2.418384 / 55.444624 (-53.026240) | 2.085496 / 6.876477 (-4.790980) | 2.245638 / 2.142072 (0.103565) | 0.849042 / 4.805227 (-3.956185) | 0.171912 / 6.500664 (-6.328752) | 0.065691 / 0.075469 (-0.009778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.159985 / 1.841788 (-0.681803) | 14.910867 / 8.074308 (6.836559) | 14.473926 / 10.191392 (4.282534) | 0.181532 / 0.680424 (-0.498891) | 0.017203 / 0.534201 (-0.516998) | 0.420805 / 0.579283 (-0.158479) | 0.426455 / 0.434364 (-0.007909) | 0.497086 / 0.540337 (-0.043251) | 0.593909 / 1.386936 (-0.793027) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007688 / 0.011353 (-0.003665) | 0.005353 / 0.011008 (-0.005656) | 0.076869 / 0.038508 (0.038361) | 0.035030 / 0.023109 (0.011921) | 0.344649 / 0.275898 (0.068751) | 0.387669 / 0.323480 (0.064190) | 0.005913 / 0.007986 (-0.002072) | 0.004107 / 0.004328 (-0.000221) | 0.074111 / 0.004250 (0.069860) | 0.049351 / 0.037052 (0.012299) | 0.346061 / 0.258489 (0.087572) | 0.395499 / 0.293841 (0.101658) | 0.035549 / 0.128546 (-0.092997) | 
0.012340 / 0.075646 (-0.063307) | 0.087031 / 0.419271 (-0.332241) | 0.049088 / 0.043533 (0.005556) | 0.342774 / 0.255139 (0.087635) | 0.362037 / 0.283200 (0.078837) | 0.100329 / 0.141683 (-0.041354) | 1.442349 / 1.452155 (-0.009806) | 1.551079 / 1.492716 (0.058363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228458 / 0.018006 (0.210452) | 0.446190 / 0.000490 (0.445701) | 0.000413 / 0.000200 (0.000213) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029884 / 0.037411 (-0.007527) | 0.117527 / 0.014526 (0.103002) | 0.123221 / 0.176557 (-0.053335) | 0.172290 / 0.737135 (-0.564845) | 0.128682 / 0.296338 (-0.167657) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420905 / 0.215209 (0.205696) | 4.199342 / 2.077655 (2.121687) | 2.007327 / 1.504120 (0.503207) | 1.814732 / 1.541195 (0.273537) | 1.893999 / 1.468490 (0.425509) | 0.712259 / 4.584777 (-3.872518) | 3.843402 / 3.745712 (0.097690) | 3.198514 / 5.269862 (-2.071348) | 1.678732 / 4.565676 (-2.886945) | 0.086435 / 0.424275 (-0.337840) | 0.012233 / 0.007607 (0.004626) | 0.526121 / 0.226044 (0.300077) | 5.190578 / 2.268929 (2.921650) | 2.473259 / 55.444624 (-52.971366) | 2.142795 / 6.876477 (-4.733682) | 2.277594 / 2.142072 (0.135521) | 0.846117 / 4.805227 (-3.959110) | 0.169458 / 6.500664 (-6.331206) | 0.065017 / 0.075469 (-0.010452) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272479 / 1.841788 (-0.569309) | 15.086473 / 8.074308 (7.012165) | 14.659728 / 10.191392 (4.468336) | 0.163915 / 0.680424 (-0.516509) | 0.017561 / 0.534201 (-0.516640) | 0.422074 / 0.579283 (-0.157209) | 0.421963 / 0.434364 (-0.012401) | 0.490321 / 0.540337 (-0.050016) | 0.586854 / 1.386936 (-0.800083) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e7ce0ac60c7efc10886471932854903a7c19f172 \"CML watermark\")\n" ]
null
[]
Fixes #5757
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5758/timeline
Fixes the bug reported in #5757.
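For context, a hedged guess at the shape of such a fix (this is not the actual diff): expand the user's home directory before data files are resolved against the working directory.

```python
from pathlib import Path

# Illustrative only -- not the merged patch. The idea is to expand "~"
# before treating the string as a path relative to the working directory.
def resolve_data_dir(data_dir: str) -> str:
    return str(Path(data_dir).expanduser().resolve())

assert "~" not in resolve_data_dir("~/data/my_dataset")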
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5758.diff", "html_url": "https://github.com/huggingface/datasets/pull/5758", "merged_at": "2023-04-20T15:30:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/5758.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5758" }
1,669,920,923
https://api.github.com/repos/huggingface/datasets/issues/5758/comments
PR_kwDODunzps5OaY9S
null
5,758
https://api.github.com/repos/huggingface/datasets/issues/5758/events
true
closed
2023-04-16T11:48:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/5757
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5757/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5757/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eli-osherovich", "id": 2437102, "login": "eli-osherovich", "node_id": "MDQ6VXNlcjI0MzcxMDI=", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "type": "User", "url": "https://api.github.com/users/eli-osherovich" }
https://github.com/huggingface/datasets/issues/5757
[]
false
2023-04-20T15:30:51Z
2023-04-20T15:30:51Z
null
[]
completed
[]
Tilde (~) is not supported
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5757/timeline
### Describe the bug It seems that `~` is not recognized correctly in local paths. Whenever I try to use it, I get an exception. ### Steps to reproduce the bug ```python load_dataset("imagefolder", data_dir="~/data/my_dataset") ``` This generates the following error: ``` EmptyDatasetError: The directory at /path/to/cwd/~/data/datasets/clementine_tagged_per_cam doesn't contain any data files ``` ### Expected behavior The dataset should load. ### Environment info datasets==2.11.0
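Until the fix is released, a simple workaround is to expand the path before passing it in:

```python
import os
from datasets import load_dataset

# Workaround: expand "~" yourself so the loader receives an absolute path.
data_dir = os.path.expanduser("~/data/my_dataset")  # placeholder path
ds = load_dataset("imagefolder", data_dir=data_dir)
```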
https://api.github.com/repos/huggingface/datasets
null
1,669,910,503
https://api.github.com/repos/huggingface/datasets/issues/5757/comments
I_kwDODunzps5jiM_n
null
5,757
https://api.github.com/repos/huggingface/datasets/issues/5757/events
false
closed
2023-04-16T04:59:47Z
null
https://api.github.com/repos/huggingface/datasets/issues/5756
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5756/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5756/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/21077341?v=4", "events_url": "https://api.github.com/users/rohfle/events{/privacy}", "followers_url": "https://api.github.com/users/rohfle/followers", "following_url": "https://api.github.com/users/rohfle/following{/other_user}", "gists_url": "https://api.github.com/users/rohfle/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rohfle", "id": 21077341, "login": "rohfle", "node_id": "MDQ6VXNlcjIxMDc3MzQx", "organizations_url": "https://api.github.com/users/rohfle/orgs", "received_events_url": "https://api.github.com/users/rohfle/received_events", "repos_url": "https://api.github.com/users/rohfle/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rohfle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rohfle/subscriptions", "type": "User", "url": "https://api.github.com/users/rohfle" }
https://github.com/huggingface/datasets/issues/5756
[]
false
2023-04-18T03:40:56Z
2023-04-18T03:40:56Z
null
[ "Hi! I've merged a PR on the Hub with a fix: https://huggingface.co/datasets/fashion_mnist/discussions/3", "Thanks, this appears to have fixed the issue.\r\n\r\nI've created a PR for the same change in the mnist dataset: https://huggingface.co/datasets/mnist/discussions/3/files" ]
completed
[]
Calling shuffle on an IterableDataset with streaming=True, gives "ValueError: cannot reshape array"
NONE
https://api.github.com/repos/huggingface/datasets/issues/5756/timeline
### Describe the bug When calling shuffle on a IterableDataset with streaming=True, I get the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 937, in __iter__ for key, example in ex_iterable: File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 627, in __iter__ for x in self.ex_iterable: File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 138, in __iter__ yield from self.generate_examples_fn(**kwargs_with_shuffled_shards) File "/home/administrator/.cache/huggingface/modules/datasets_modules/datasets/mnist/fda16c03c4ecfb13f165ba7e29cf38129ce035011519968cdaf74894ce91c9d4/mnist.py", line 111, in _generate_examples images = np.frombuffer(f.read(), dtype=np.uint8).reshape(size, 28, 28) ValueError: cannot reshape array of size 59992 into shape (60000,28,28) ``` Tested with the fashion_mnist and mnist datasets ### Steps to reproduce the bug Code to reproduce ```python from datasets import load_dataset SHUFFLE_SEED = 42 SHUFFLE_BUFFER_SIZE = 10_000 dataset = load_dataset('fashion_mnist', streaming=True).shuffle(seed=SHUFFLE_SEED, buffer_size=SHUFFLE_BUFFER_SIZE) next(iter(dataset['train'])) ``` ### Expected behavior A random item from the dataset and no error ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.13.4 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
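While the dataset script was being fixed on the Hub, one way to keep shuffled streaming working was to materialize first and then convert. This is a sketch, assuming the non-streaming download is acceptable.

```python
from datasets import load_dataset

SHUFFLE_SEED = 42
SHUFFLE_BUFFER_SIZE = 10_000

# Sketch: download normally, then stream from the prepared Arrow data, so the
# loading script never re-reads raw files with shuffled shard arguments.
train = load_dataset("fashion_mnist", split="train").to_iterable_dataset()
train = train.shuffle(seed=SHUFFLE_SEED, buffer_size=SHUFFLE_BUFFER_SIZE)
print(next(iter(train)))
```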
https://api.github.com/repos/huggingface/datasets
null
1,669,678,080
https://api.github.com/repos/huggingface/datasets/issues/5756/comments
I_kwDODunzps5jhUQA
null
5,756
https://api.github.com/repos/huggingface/datasets/issues/5756/events
false
closed
2023-04-14T23:28:54Z
null
https://api.github.com/repos/huggingface/datasets/issues/5755
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5755/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5755/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1405491?v=4", "events_url": "https://api.github.com/users/fivejjs/events{/privacy}", "followers_url": "https://api.github.com/users/fivejjs/followers", "following_url": "https://api.github.com/users/fivejjs/following{/other_user}", "gists_url": "https://api.github.com/users/fivejjs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fivejjs", "id": 1405491, "login": "fivejjs", "node_id": "MDQ6VXNlcjE0MDU0OTE=", "organizations_url": "https://api.github.com/users/fivejjs/orgs", "received_events_url": "https://api.github.com/users/fivejjs/received_events", "repos_url": "https://api.github.com/users/fivejjs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fivejjs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fivejjs/subscriptions", "type": "User", "url": "https://api.github.com/users/fivejjs" }
https://github.com/huggingface/datasets/issues/5755
[]
false
2023-04-14T23:36:19Z
2023-04-14T23:36:19Z
null
[ "update the version. fix" ]
completed
[]
ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils'
NONE
https://api.github.com/repos/huggingface/datasets/issues/5755/timeline
### Describe the bug Has the module moved to a new place? ### Steps to reproduce the bug In the import step: ```python from datasets.utils.deprecation_utils import DeprecatedEnum ``` Error: ``` ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils' ``` ### Expected behavior The import should succeed. ### Environment info python==3.9.16 datasets==1.18.3
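As the follow-up comment notes, upgrading resolves this; a quick sketch for confirming the installed version before importing (the claim that the import works on newer releases is an assumption based on the fix above):

```python
# Sanity check: datasets 1.18.3 predates DeprecatedEnum, so verify the
# installed version first; upgrade with `pip install -U datasets` if needed.
import datasets

print(datasets.__version__)
from datasets.utils.deprecation_utils import DeprecatedEnum  # resolves on newer releases
```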
https://api.github.com/repos/huggingface/datasets
null
1,669,048,438
https://api.github.com/repos/huggingface/datasets/issues/5755/comments
I_kwDODunzps5je6h2
null
5,755
https://api.github.com/repos/huggingface/datasets/issues/5755/events
false
closed
2023-04-14T18:15:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/5754
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5754/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5754/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/5754
[]
false
2023-04-20T15:27:58Z
2023-04-20T15:21:00Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006479 / 0.011353 (-0.004874) | 0.004592 / 0.011008 (-0.006416) | 0.097239 / 0.038508 (0.058731) | 0.028609 / 0.023109 (0.005499) | 0.309225 / 0.275898 (0.033327) | 0.340015 / 0.323480 (0.016535) | 0.004857 / 0.007986 (-0.003129) | 0.004649 / 0.004328 (0.000320) | 0.074770 / 0.004250 (0.070520) | 0.038351 / 0.037052 (0.001299) | 0.313360 / 0.258489 (0.054871) | 0.350256 / 0.293841 (0.056416) | 0.030770 / 0.128546 (-0.097776) | 0.011591 / 0.075646 (-0.064055) | 0.322444 / 0.419271 (-0.096828) | 0.043704 / 0.043533 (0.000171) | 0.311790 / 0.255139 (0.056651) | 0.339183 / 0.283200 (0.055984) | 0.088041 / 0.141683 (-0.053642) | 1.490649 / 1.452155 (0.038494) | 1.561789 / 1.492716 (0.069072) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208984 / 0.018006 (0.190978) | 0.406105 / 0.000490 (0.405616) | 0.003152 / 0.000200 (0.002952) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022622 / 0.037411 (-0.014790) | 0.095819 / 0.014526 (0.081294) | 0.105132 / 0.176557 (-0.071424) | 0.165684 / 0.737135 (-0.571451) | 0.106706 / 0.296338 (-0.189632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426126 / 0.215209 (0.210917) | 4.233864 / 2.077655 (2.156209) | 1.918727 
/ 1.504120 (0.414607) | 1.729905 / 1.541195 (0.188710) | 1.760342 / 1.468490 (0.291852) | 0.695449 / 4.584777 (-3.889328) | 3.413531 / 3.745712 (-0.332181) | 1.904557 / 5.269862 (-3.365305) | 1.270604 / 4.565676 (-3.295072) | 0.083018 / 0.424275 (-0.341257) | 0.012760 / 0.007607 (0.005152) | 0.523991 / 0.226044 (0.297947) | 5.236132 / 2.268929 (2.967204) | 2.360959 / 55.444624 (-53.083665) | 1.996533 / 6.876477 (-4.879943) | 2.072934 / 2.142072 (-0.069138) | 0.804133 / 4.805227 (-4.001094) | 0.150976 / 6.500664 (-6.349688) | 0.065503 / 0.075469 (-0.009966) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211828 / 1.841788 (-0.629960) | 13.657743 / 8.074308 (5.583435) | 13.887148 / 10.191392 (3.695756) | 0.145996 / 0.680424 (-0.534428) | 0.016562 / 0.534201 (-0.517639) | 0.380359 / 0.579283 (-0.198924) | 0.388698 / 0.434364 (-0.045666) | 0.440373 / 0.540337 (-0.099965) | 0.531753 / 1.386936 (-0.855183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006444 / 0.011353 (-0.004909) | 0.004569 / 0.011008 (-0.006439) | 0.076239 / 0.038508 (0.037731) | 0.028462 / 0.023109 (0.005352) | 0.365540 / 0.275898 (0.089642) | 0.398242 / 0.323480 (0.074762) | 0.005785 / 0.007986 (-0.002200) | 0.003346 / 0.004328 (-0.000982) | 0.076296 / 0.004250 (0.072046) | 0.039853 / 0.037052 (0.002800) | 0.367684 / 0.258489 (0.109195) | 0.409570 / 0.293841 (0.115730) | 0.030536 / 0.128546 (-0.098010) | 0.011534 / 0.075646 (-0.064112) | 0.084962 / 0.419271 (-0.334309) | 0.042708 / 0.043533 (-0.000825) | 0.344058 / 0.255139 (0.088919) | 0.389096 / 0.283200 (0.105897) | 0.090559 / 0.141683 (-0.051124) | 1.507101 / 1.452155 (0.054946) | 1.563977 / 1.492716 (0.071260) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228740 / 0.018006 (0.210734) | 0.396890 / 0.000490 (0.396400) | 0.000392 / 0.000200 (0.000192) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025052 / 0.037411 (-0.012360) | 0.099951 / 0.014526 (0.085426) | 0.106847 / 0.176557 (-0.069710) | 0.156666 / 0.737135 (-0.580469) | 0.110344 / 0.296338 (-0.185994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442363 / 0.215209 (0.227154) | 4.429571 / 2.077655 (2.351917) | 2.076501 / 1.504120 (0.572381) | 1.875226 / 1.541195 (0.334031) | 1.909093 / 1.468490 (0.440603) | 0.703047 / 4.584777 (-3.881730) | 3.457036 / 3.745712 (-0.288676) | 2.866648 / 5.269862 (-2.403214) | 1.524430 / 4.565676 (-3.041246) | 0.083687 / 0.424275 (-0.340588) | 0.012251 / 0.007607 (0.004643) | 0.543945 / 0.226044 (0.317901) | 5.440559 / 2.268929 (3.171630) | 2.522924 / 55.444624 (-52.921700) | 2.188770 / 6.876477 (-4.687707) | 2.249632 / 2.142072 (0.107559) | 0.813499 / 4.805227 (-3.991728) | 0.152861 / 6.500664 (-6.347803) | 0.067189 / 0.075469 (-0.008280) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284255 / 1.841788 (-0.557533) | 14.207864 / 8.074308 (6.133556) | 14.279691 / 10.191392 (4.088299) | 0.167027 / 0.680424 (-0.513396) | 0.016455 / 0.534201 (-0.517746) | 0.380798 / 0.579283 (-0.198485) | 0.390013 / 0.434364 (-0.044351) | 0.445493 / 0.540337 (-0.094845) | 0.526278 / 1.386936 (-0.860658) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3fdb46c526b9d070df0eb2d56b0ecacdace7cb9a \"CML watermark\")\n" ]
null
[]
Minor tqdm fixes
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5754/timeline
`GeneratorBasedBuilder`'s TQDM bars were not used as context managers. This PR fixes that (these bars were missed in https://github.com/huggingface/datasets/pull/5560). Also, this PR modifies the single-proc `save_to_disk` to fix the TQDM bar not accumulating progress across shards in the multi-shard setting (again, a bug introduced by me in the linked PR 😎)
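As context for the fix above, here is a minimal sketch of the context-manager pattern it describes — not the actual `datasets` internals; `write_shard` and `save_shards` are hypothetical stand-ins. Opening the bar once and sharing it across shards is what lets progress accumulate instead of resetting per shard.

```python
from tqdm.auto import tqdm

def write_shard(shard):
    # Hypothetical stand-in for a shard writer; yields the size of each
    # batch it has flushed so the caller can advance the bar.
    for batch in shard:
        yield len(batch)

def save_shards(shards, total_examples):
    # One bar for the whole job, opened as a context manager: progress
    # accumulates across shards, and the bar is closed even if a writer raises.
    with tqdm(total=total_examples, unit=" examples", desc="Saving the dataset") as pbar:
        for shard in shards:
            for num_written in write_shard(shard):
                pbar.update(num_written)

save_shards(shards=[[range(10)] * 5, [range(10)] * 5], total_examples=100)
```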
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5754.diff", "html_url": "https://github.com/huggingface/datasets/pull/5754", "merged_at": "2023-04-20T15:21:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/5754.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5754" }
1,668,755,035
https://api.github.com/repos/huggingface/datasets/issues/5754/comments
PR_kwDODunzps5OWozh
null
5,754
https://api.github.com/repos/huggingface/datasets/issues/5754/events
true
closed
2023-04-14T17:32:31Z
null
https://api.github.com/repos/huggingface/datasets/issues/5753
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5753/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5753/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
https://github.com/huggingface/datasets/issues/5753
[]
false
2023-04-14T17:45:52Z
2023-04-14T17:36:37Z
null
[ "Problem with the code snippet! Using global vars and functions was not a good idea with iterable datasets!\r\n\r\nIf we update to:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# now add a new column to our streaming dataset using our hack\r\nname = \"new_column\"\r\ncolumn_1 = [f\"new dataset 1, row {i}\" for i in range(50)]\r\n\r\nnew_features = original_dataset.features.copy()\r\nnew_features[name] = new_features[\"file\"] #Β I know that \"file\" has the right column type to match our new feature\r\n\r\ndef add_column_fn_1(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column_1[idx]}\r\n\r\nmodified_dataset_1 = original_dataset.map(add_column_fn_1, with_indices=True, features=new_features)\r\n\r\n# now create a second modified dataset using the same trick\r\ncolumn_2 = [f\"new dataset 2, row {i}\" for i in range(50)]\r\n\r\ndef add_column_fn_2(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column_2[idx]}\r\n\r\nmodified_dataset_2 = original_dataset.map(add_column_fn_2, with_indices=True, features=new_features)\r\n\r\ninterleaved_dataset = interleave_datasets([modified_dataset_1, modified_dataset_2])\r\n\r\nfor i, sample in enumerate(interleaved_dataset):\r\n print(sample[\"new_column\"])\r\n if i == 10:\r\n break\r\n```\r\nwe get the correct outputs:\r\n```python\r\nnew dataset 1, row 0\r\nnew dataset 2, row 0\r\nnew dataset 1, row 1\r\nnew dataset 2, row 1\r\nnew dataset 1, row 2\r\nnew dataset 2, row 2\r\nnew dataset 1, row 3\r\nnew dataset 2, row 3\r\nnew dataset 1, row 4\r\nnew dataset 2, row 4\r\nnew dataset 1, row 5\r\n```\r\n" ]
completed
[]
[IterableDatasets] Add column followed by interleave datasets gives bogus outputs
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5753/timeline
### Describe the bug If we add a new column to our iterable dataset using the hack described in #5752, when we then interleave datasets the new column is pinned to one value. ### Steps to reproduce the bug What we're going to do here is: 1. Load an iterable dataset in streaming mode (`original_dataset`) 2. Add a new column to this dataset using the hack in #5752 (`modified_dataset_1`) 3. Create another new dataset by adding a column with the same key but different values (`modified_dataset_2`) 4. Interleave our new datasets (`modified_dataset_1` + `modified_dataset_2`) 5. Check the value of our newly added column (`new_column`) ```python from datasets import interleave_datasets, load_dataset # load an iterable dataset original_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) # now add a new column to our streaming dataset using our hack from 5752 name = "new_column" column = [f"new dataset 1, row {i}" for i in range(50)] new_features = original_dataset.features.copy() new_features[name] = new_features["file"] # I know that "file" has the right column type to match our new feature def add_column_fn(example, idx): if name in example: raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.") return {name: column[idx]} modified_dataset_1 = original_dataset.map(add_column_fn, with_indices=True, features=new_features) # now create a second modified dataset using the same trick column = [f"new dataset 2, row {i}" for i in range(50)] def add_column_fn(example, idx): if name in example: raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.") return {name: column[idx]} modified_dataset_2 = original_dataset.map(add_column_fn, with_indices=True, features=new_features) # interleave these datasets interleaved_dataset = interleave_datasets([modified_dataset_1, modified_dataset_2]) # now check what the value of the added column is for i, sample in enumerate(interleaved_dataset): print(sample["new_column"]) if i == 10: break ``` **Print Output:** ``` new dataset 2, row 0 new dataset 2, row 0 new dataset 2, row 1 new dataset 2, row 1 new dataset 2, row 2 new dataset 2, row 2 new dataset 2, row 3 new dataset 2, row 3 new dataset 2, row 4 new dataset 2, row 4 new dataset 2, row 5 ``` We see that we only get outputs from our second dataset. ### Expected behavior We should alternate between datasets 1 and 2, with the row index increasing: ``` new dataset 1, row 0 new dataset 2, row 0 new dataset 1, row 1 new dataset 2, row 1 new dataset 1, row 2 new dataset 2, row 2 ... ``` ### Environment info - datasets version: 2.10.2.dev0 - Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
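For anyone puzzled by the output above: it is consistent with Python's late-binding closures rather than anything specific to `interleave_datasets`. Both `add_column_fn` definitions read the module-level `column` at call time, and because iterable datasets are lazy, those calls only happen during iteration — after `column` has been rebound to the second list. A minimal, `datasets`-free sketch of the pitfall:

```python
column = [f"new dataset 1, row {i}" for i in range(50)]

def fn_1(idx):
    # Reads the *current* global `column`, not the one bound at definition time.
    return column[idx]

column = [f"new dataset 2, row {i}" for i in range(50)]

def fn_2(idx):
    return column[idx]

print(fn_1(0))  # "new dataset 2, row 0" -- both functions see the rebound list
print(fn_2(0))  # "new dataset 2, row 0"
```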
https://api.github.com/repos/huggingface/datasets
null
1,668,659,536
https://api.github.com/repos/huggingface/datasets/issues/5753/comments
I_kwDODunzps5jdblQ
null
5,753
https://api.github.com/repos/huggingface/datasets/issues/5753/events
false
open
2023-04-14T16:39:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/5752
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5752/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5752/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
https://github.com/huggingface/datasets/issues/5752
[]
false
2024-01-18T10:15:20Z
null
null
[ "I believe the issue resides in this line:\r\nhttps://github.com/huggingface/datasets/blob/7c3a9b057c476c40d157bd7a5d57f49066239df0/src/datasets/iterable_dataset.py#L1415\r\n\r\nIf we pass the **new** features of the dataset to the `.map` method we can return the features after adding a column, e.g.:\r\n```python\r\nfrom datasets import load_dataset, Value\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\nprint(original_dataset.features.keys())\r\n\r\n# now add a new column to our streaming dataset using our hack\r\nname = \"new_column\"\r\ncolumn = [\"some random text\" for _ in range(50)]\r\n\r\nnew_features = original_dataset.features.copy()\r\nnew_features[name] = Value(dtype=\"string\", id=None) #Β I know the correct column type for this feature\r\n\r\ndef add_column_fn(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column[idx]}\r\n\r\nmodified_dataset = original_dataset.map(add_column_fn, with_indices=True, features=new_features)\r\n\r\nprint(modified_dataset.features.keys())\r\n```\r\n**Print Output:**\r\n```\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id', 'new_column'])\r\n```\r\n", "It seems that map will also cause this issue\r\n\r\n### Steps to reproduce the bug\r\n```python\r\nfrom datasets import load_dataset\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\nprint(original_dataset.features.keys())\r\n\r\ndef test(data):\r\n return data\r\n\r\nmodified_dataset = original_dataset.map(test)\r\nprint(modified_dataset.features.keys())\r\n```\r\n\r\n### Output\r\n```\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nCell In[5], line 10\r\n 7 return data\r\n 9 modified_dataset = original_dataset.map(test)\r\n---> 10 print(modified_dataset.features.keys())\r\n\r\nAttributeError: 'NoneType' object has no attribute 'keys'\r\n```" ]
null
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
Streaming dataset loses `.features` attribute after `.add_column`
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5752/timeline
### Describe the bug After appending a new column to a streaming dataset using `.add_column`, we can no longer access the list of dataset features using the `.features` attribute. ### Steps to reproduce the bug ```python from datasets import load_dataset original_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) print(original_dataset.features.keys()) # now add a new column to our streaming dataset modified_dataset = original_dataset.add_column("new_column", ["some random text" for _ in range(50)]) print(modified_dataset.features.keys()) ``` **Print Output:** ``` dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id']) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[1], line 8 6 # now add a new column to our streaming dataset 7 modified_dataset = original_dataset.add_column("new_column", ["some random text" for _ in range(50)]) ----> 8 print(modified_dataset.features.keys()) AttributeError: 'NoneType' object has no attribute 'keys' ``` We see that we get the features for the original dataset, but not the modified one with the added column. ### Expected behavior Features should be preserved after adding a new column, i.e. calling: ```python print(modified_dataset.features.keys()) ``` Should return: ``` dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id', 'new_column']) ``` ### Environment info - `datasets` version: 2.10.2.dev0 - Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.13.3 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
https://api.github.com/repos/huggingface/datasets
null
1,668,574,209
https://api.github.com/repos/huggingface/datasets/issues/5752/comments
I_kwDODunzps5jdGwB
null
5,752
https://api.github.com/repos/huggingface/datasets/issues/5752/events
false
closed
2023-04-14T14:13:59Z
null
https://api.github.com/repos/huggingface/datasets/issues/5751
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5751/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5751/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/5751
[]
false
2023-04-20T14:43:20Z
2023-04-20T14:40:34Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010459 / 0.011353 (-0.000894) | 0.007009 / 0.011008 (-0.003999) | 0.153885 / 0.038508 (0.115377) | 0.037308 / 0.023109 (0.014199) | 0.431931 / 0.275898 (0.156033) | 0.452940 / 0.323480 (0.129461) | 0.008572 / 0.007986 (0.000586) | 0.007479 / 0.004328 (0.003150) | 0.093835 / 0.004250 (0.089584) | 0.050172 / 0.037052 (0.013120) | 0.428855 / 0.258489 (0.170366) | 0.517814 / 0.293841 (0.223974) | 0.058558 / 0.128546 (-0.069988) | 0.019550 / 0.075646 (-0.056096) | 0.449837 / 0.419271 (0.030566) | 0.069710 / 0.043533 (0.026177) | 0.444163 / 0.255139 (0.189024) | 0.469003 / 0.283200 (0.185803) | 0.114665 / 0.141683 (-0.027018) | 1.822415 / 1.452155 (0.370261) | 1.956360 / 1.492716 (0.463644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237489 / 0.018006 (0.219483) | 0.556947 / 0.000490 (0.556457) | 0.006988 / 0.000200 (0.006789) | 0.000499 / 0.000054 (0.000444) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037047 / 0.037411 (-0.000364) | 0.133973 / 0.014526 (0.119447) | 0.137072 / 0.176557 (-0.039485) | 0.201520 / 0.737135 (-0.535615) | 0.144177 / 0.296338 (-0.152161) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.694853 / 0.215209 (0.479644) | 6.805746 / 2.077655 (4.728091) | 2.717864 / 
1.504120 (1.213744) | 2.360529 / 1.541195 (0.819335) | 2.384403 / 1.468490 (0.915913) | 1.337512 / 4.584777 (-3.247265) | 5.734090 / 3.745712 (1.988378) | 5.344909 / 5.269862 (0.075047) | 2.906218 / 4.565676 (-1.659458) | 0.160148 / 0.424275 (-0.264127) | 0.015159 / 0.007607 (0.007551) | 0.871356 / 0.226044 (0.645312) | 8.550965 / 2.268929 (6.282037) | 3.613522 / 55.444624 (-51.831103) | 2.868508 / 6.876477 (-4.007969) | 2.912263 / 2.142072 (0.770190) | 1.652548 / 4.805227 (-3.152680) | 0.274117 / 6.500664 (-6.226547) | 0.085911 / 0.075469 (0.010442) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.624798 / 1.841788 (-0.216989) | 18.413303 / 8.074308 (10.338995) | 21.742854 / 10.191392 (11.551462) | 0.255937 / 0.680424 (-0.424487) | 0.029492 / 0.534201 (-0.504709) | 0.541932 / 0.579283 (-0.037351) | 0.638594 / 0.434364 (0.204230) | 0.607427 / 0.540337 (0.067090) | 0.763046 / 1.386936 (-0.623890) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.020543 / 0.011353 (0.009190) | 0.006079 / 0.011008 (-0.004929) | 0.100558 / 0.038508 (0.062050) | 0.039474 / 0.023109 (0.016365) | 0.468889 / 0.275898 (0.192991) | 0.477731 / 0.323480 (0.154251) | 0.006999 / 0.007986 (-0.000987) | 0.005845 / 0.004328 (0.001516) | 0.110022 / 0.004250 (0.105772) | 0.056885 / 0.037052 (0.019833) | 0.447296 / 0.258489 (0.188807) | 0.489007 / 0.293841 (0.195166) | 0.055086 / 0.128546 (-0.073460) | 0.020623 / 0.075646 (-0.055024) | 0.129599 / 0.419271 (-0.289672) | 0.064316 / 0.043533 (0.020784) | 0.446681 / 0.255139 (0.191542) | 0.488897 / 0.283200 (0.205698) | 0.119121 / 0.141683 (-0.022562) | 1.836248 / 1.452155 (0.384093) | 2.002456 / 1.492716 (0.509740) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249344 / 0.018006 (0.231338) | 0.544320 / 0.000490 (0.543830) | 0.000459 / 0.000200 (0.000259) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| 
metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038771 / 0.037411 (0.001359) | 0.129527 / 0.014526 (0.115002) | 0.144681 / 0.176557 (-0.031876) | 0.208237 / 0.737135 (-0.528898) | 0.149502 / 0.296338 (-0.146836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668457 / 0.215209 (0.453248) | 6.729550 / 2.077655 (4.651895) | 2.741076 / 1.504120 (1.236956) | 2.394737 / 1.541195 (0.853542) | 2.415242 / 1.468490 (0.946752) | 1.322334 / 4.584777 (-3.262442) | 5.787454 / 3.745712 (2.041742) | 3.309847 / 5.269862 (-1.960015) | 2.199181 / 4.565676 (-2.366495) | 0.170740 / 0.424275 (-0.253535) | 0.015095 / 0.007607 (0.007487) | 0.864157 / 0.226044 (0.638112) | 8.701858 / 2.268929 (6.432929) | 3.617966 / 55.444624 (-51.826658) | 2.847144 / 6.876477 (-4.029332) | 3.011391 / 2.142072 (0.869319) | 1.595466 / 4.805227 (-3.209762) | 0.284010 / 6.500664 (-6.216654) | 0.091054 / 0.075469 (0.015585) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.702404 / 1.841788 (-0.139384) | 19.427130 / 8.074308 (11.352822) | 21.900446 / 10.191392 (11.709053) | 0.244088 / 0.680424 (-0.436336) | 0.027428 / 0.534201 (-0.506773) | 0.552226 / 0.579283 (-0.027057) | 0.653102 / 0.434364 (0.218738) | 0.635379 / 0.540337 (0.095042) | 0.771842 / 1.386936 (-0.615094) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#efde2a0b9ad937defc83e0ac3f14bbb90fb5f345 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | 
write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006547 / 0.011353 (-0.004806) | 0.004569 / 0.011008 (-0.006439) | 0.097782 / 0.038508 (0.059274) | 0.028157 / 0.023109 (0.005048) | 0.319017 / 0.275898 (0.043119) | 0.340758 / 0.323480 (0.017278) | 0.005078 / 0.007986 (-0.002907) | 0.003343 / 0.004328 (-0.000985) | 0.074194 / 0.004250 (0.069944) | 0.037918 / 0.037052 (0.000866) | 0.310298 / 0.258489 (0.051809) | 0.349441 / 0.293841 (0.055600) | 0.030375 / 0.128546 (-0.098171) | 0.011527 / 0.075646 (-0.064119) | 0.320499 / 0.419271 (-0.098773) | 0.042639 / 0.043533 (-0.000894) | 0.312182 / 0.255139 (0.057043) | 0.329058 / 0.283200 (0.045858) | 0.085517 / 0.141683 (-0.056165) | 1.532603 / 1.452155 (0.080448) | 1.583996 / 1.492716 (0.091279) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208286 / 0.018006 (0.190280) | 0.418696 / 0.000490 (0.418206) | 0.007051 / 0.000200 (0.006851) | 0.000409 / 0.000054 (0.000354) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024055 / 0.037411 (-0.013356) | 0.098420 / 0.014526 (0.083894) | 0.104785 / 0.176557 (-0.071771) | 0.163618 / 0.737135 (-0.573517) | 0.110006 / 0.296338 (-0.186332) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418756 / 0.215209 (0.203547) | 4.179557 / 2.077655 (2.101902) | 1.881708 / 1.504120 (0.377588) | 1.683393 / 1.541195 (0.142198) | 1.731909 / 1.468490 (0.263419) | 0.696674 / 4.584777 (-3.888103) | 3.384167 / 3.745712 (-0.361545) | 3.173479 / 5.269862 (-2.096382) | 1.620019 / 4.565676 (-2.945658) | 0.082850 / 0.424275 (-0.341426) | 0.012396 / 0.007607 (0.004789) | 0.519743 / 0.226044 (0.293699) | 5.208480 / 2.268929 (2.939552) | 2.312917 / 55.444624 (-53.131708) | 1.963486 / 6.876477 (-4.912991) | 2.084553 / 2.142072 (-0.057519) | 0.805486 / 4.805227 (-3.999742) | 0.153429 / 6.500664 (-6.347235) | 0.069451 / 0.075469 (-0.006018) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197185 / 1.841788 (-0.644603) | 14.341005 / 8.074308 (6.266696) | 14.476162 / 10.191392 (4.284770) | 0.157372 / 0.680424 (-0.523052) | 0.016444 / 0.534201 (-0.517757) | 0.383721 / 0.579283 (-0.195562) | 0.380800 / 0.434364 (-0.053564) | 0.441137 / 0.540337 (-0.099200) | 0.524778 / 1.386936 
(-0.862158) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006728 / 0.011353 (-0.004625) | 0.004536 / 0.011008 (-0.006472) | 0.076266 / 0.038508 (0.037757) | 0.028133 / 0.023109 (0.005024) | 0.351072 / 0.275898 (0.075174) | 0.375823 / 0.323480 (0.052344) | 0.005166 / 0.007986 (-0.002819) | 0.004717 / 0.004328 (0.000388) | 0.076130 / 0.004250 (0.071880) | 0.041354 / 0.037052 (0.004301) | 0.345904 / 0.258489 (0.087415) | 0.384119 / 0.293841 (0.090278) | 0.030759 / 0.128546 (-0.097787) | 0.011659 / 0.075646 (-0.063988) | 0.085269 / 0.419271 (-0.334002) | 0.042161 / 0.043533 (-0.001372) | 0.340806 / 0.255139 (0.085667) | 0.366832 / 0.283200 (0.083632) | 0.092187 / 0.141683 (-0.049495) | 1.520035 / 1.452155 (0.067880) | 1.603856 / 1.492716 (0.111140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237763 / 0.018006 (0.219757) | 0.413406 / 0.000490 (0.412916) | 0.000415 / 0.000200 (0.000215) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026095 / 0.037411 (-0.011317) | 0.105775 / 0.014526 (0.091249) | 0.108452 / 0.176557 (-0.068105) | 0.160014 / 0.737135 (-0.577122) | 0.112385 / 0.296338 (-0.183953) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437327 / 0.215209 (0.222118) | 4.374949 / 2.077655 (2.297294) | 2.090292 / 1.504120 (0.586172) | 1.885946 / 1.541195 (0.344752) | 1.946768 / 1.468490 (0.478278) | 0.704124 / 
4.584777 (-3.880653) | 3.394994 / 3.745712 (-0.350718) | 1.905189 / 5.269862 (-3.364673) | 1.182300 / 4.565676 (-3.383376) | 0.082920 / 0.424275 (-0.341355) | 0.012781 / 0.007607 (0.005174) | 0.535467 / 0.226044 (0.309423) | 5.362799 / 2.268929 (3.093870) | 2.504825 / 55.444624 (-52.939799) | 2.180458 / 6.876477 (-4.696019) | 2.317750 / 2.142072 (0.175677) | 0.811182 / 4.805227 (-3.994045) | 0.151654 / 6.500664 (-6.349010) | 0.067925 / 0.075469 (-0.007544) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290746 / 1.841788 (-0.551042) | 14.799309 / 8.074308 (6.725001) | 14.439722 / 10.191392 (4.248330) | 0.144358 / 0.680424 (-0.536066) | 0.016688 / 0.534201 (-0.517513) | 0.392907 / 0.579283 (-0.186376) | 0.383109 / 0.434364 (-0.051255) | 0.450069 / 0.540337 (-0.090269) | 0.532534 / 1.386936 (-0.854402) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#87c061032972509a2a1b4103763e62fb74912128 \"CML watermark\")\n", "I turned it into a draft to fix the failing tests, but CI is now green, so there is no good reason for it :)" ]
null
[]
Consistent ArrayXD Python formatting + better NumPy/Pandas formatting
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5751/timeline
Return a list of lists instead of a list of NumPy arrays when converting the variable-shaped `ArrayXD` to Python. Additionally, improve the NumPy conversion by returning a numeric NumPy array when the offsets are equal or a NumPy object array when they aren't, and allow converting the variable-shaped `ArrayXD` to Pandas. (Reported in https://github.com/huggingface/datasets/issues/5719#issuecomment-1507579671)
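A small illustration of the distinction the description draws, under the assumption that "offsets are equal" means the variable-shaped rows happen to share one length; in plain NumPy terms:

```python
import numpy as np

# Rows of a variable-shaped column that happen to share a shape can be
# stacked into a single numeric array...
uniform_rows = [np.array([1, 2]), np.array([3, 4])]
stacked = np.stack(uniform_rows)
print(stacked.dtype, stacked.shape)  # e.g. int64 (2, 2)

# ...whereas genuinely ragged rows can only be held in an object array.
ragged = np.empty(2, dtype=object)
ragged[0] = np.array([1, 2])
ragged[1] = np.array([3, 4, 5])
print(ragged.dtype, ragged.shape)  # object (2,)
```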
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5751.diff", "html_url": "https://github.com/huggingface/datasets/pull/5751", "merged_at": "2023-04-20T14:40:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/5751.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5751" }
1,668,333,316
https://api.github.com/repos/huggingface/datasets/issues/5751/comments
PR_kwDODunzps5OVMuT
null
5,751
https://api.github.com/repos/huggingface/datasets/issues/5751/events
true
closed
2023-04-14T13:50:59Z
null
https://api.github.com/repos/huggingface/datasets/issues/5750
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5750/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5750/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/895720?v=4", "events_url": "https://api.github.com/users/ivanprado/events{/privacy}", "followers_url": "https://api.github.com/users/ivanprado/followers", "following_url": "https://api.github.com/users/ivanprado/following{/other_user}", "gists_url": "https://api.github.com/users/ivanprado/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ivanprado", "id": 895720, "login": "ivanprado", "node_id": "MDQ6VXNlcjg5NTcyMA==", "organizations_url": "https://api.github.com/users/ivanprado/orgs", "received_events_url": "https://api.github.com/users/ivanprado/received_events", "repos_url": "https://api.github.com/users/ivanprado/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ivanprado/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ivanprado/subscriptions", "type": "User", "url": "https://api.github.com/users/ivanprado" }
https://github.com/huggingface/datasets/issues/5750
[]
false
2023-04-17T12:20:43Z
2023-04-17T12:20:43Z
null
[ "`from_generator` expects a generator function, not a generator object, so this should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(rows)\r\n\r\nfor r in ds:\r\n print(r)\r\n```", "@mariosasko your code was incomplete, so I tried to fix it:\r\n\r\n```py\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen():\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(gen)\r\n\r\nfor r in ds:\r\n print(r)\r\n```\r\n\r\nThe error is also present in this case:\r\n\r\n```\r\n_pickle.PicklingError: Pickling client objects is explicitly not supported.\r\nClients have non-trivial state that is local and unpickleable.\r\n```\r\n\r\nI think it doesn't matter if the generator is an object or a function. The problem is that the generator is referencing an object that is not pickable (the client in this case). ", "It does matter: this function expects a generator function, as stated in the docs.\r\n\r\nThis should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\ndef gen():\r\n client = bigquery.Client()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(gen)\r\n\r\nfor r in ds:\r\n print(r)\r\n```\r\n\r\nWe could allow passing non-picklable objects and use a random hash for the generated arrow file. In that case, the caching mechanism would not work, meaning repeated calls with the same set of arguments would generate new datasets instead of reusing the cached version, but this behavior is still better than raising an error.", "Thank you @mariosasko . Your last code is working indeed. Curiously, the important detail here was to wrap the client instantiation within the generator itself. If the line `client = bigquery.Client()` is moved outside, then the error is back.\r\n\r\nI see now also your point in regard to the generator being a generator function. We can close the issue if you want." ]
completed
[]
Fails to create a dataset from a generator when using Google BigQuery
NONE
https://api.github.com/repos/huggingface/datasets/issues/5750/timeline
### Describe the bug Creating a dataset from a generator using `Dataset.from_generator()` fails if the generator comes from the [Google Big Query Python client](https://cloud.google.com/python/docs/reference/bigquery/latest). The problem is that the Big Query client is not picklable. And the function `create_config_id` tries to get a hash of the generator by pickling it. So the following error is generated: ``` _pickle.PicklingError: Pickling client objects is explicitly not supported. Clients have non-trivial state that is local and unpickleable. ``` ### Steps to reproduce the bug 1. Install the big query client and datasets `pip install google-cloud-bigquery datasets` 2. Run the following code: ```py from datasets import Dataset from google.cloud import bigquery client = bigquery.Client() # Perform a query. QUERY = ( 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` ' 'WHERE state = "TX" ' 'LIMIT 100') query_job = client.query(QUERY) # API request rows = query_job.result() # Waits for query to finish ds = Dataset.from_generator(rows) for r in ds: print(r) ``` ### Expected behavior Two options: 1. Ignore the pickle errors when computing the hash 2. Provide an escape hatch so that we can avoid calculating the hash for the generator. For example, allowing the user to provide a hash. ### Environment info python 3.9 google-cloud-bigquery 3.9.0 datasets 2.11.0
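To make the failure mode concrete: the hash computation effectively does a `pickle.dumps` of whatever reaches it, and anything that (transitively) references the BigQuery client hits the client's pickling guard. A self-contained sketch with hypothetical stand-in classes — `FakeClient` mimics the guard that `bigquery.Client` raises, and `RowIterator` mimics a result iterator holding a client reference:

```python
import pickle

class FakeClient:
    # Stand-in for google.cloud.bigquery.Client, which blocks pickling.
    def __reduce__(self):
        raise pickle.PicklingError(
            "Pickling client objects is explicitly not supported."
        )

class RowIterator:
    # Stand-in for a query result iterator that keeps a client reference.
    def __init__(self, client):
        self.client = client

try:
    pickle.dumps(RowIterator(FakeClient()))  # hashing does something like this
except pickle.PicklingError as err:
    print(err)  # Pickling client objects is explicitly not supported.
```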
https://api.github.com/repos/huggingface/datasets
null
1,668,289,067
https://api.github.com/repos/huggingface/datasets/issues/5750/comments
I_kwDODunzps5jcBIr
null
5,750
https://api.github.com/repos/huggingface/datasets/issues/5750/events
false
closed
2023-04-14T10:48:06Z
null
https://api.github.com/repos/huggingface/datasets/issues/5749
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5749/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5749/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/54584290?v=4", "events_url": "https://api.github.com/users/gulnaz-zh/events{/privacy}", "followers_url": "https://api.github.com/users/gulnaz-zh/followers", "following_url": "https://api.github.com/users/gulnaz-zh/following{/other_user}", "gists_url": "https://api.github.com/users/gulnaz-zh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gulnaz-zh", "id": 54584290, "login": "gulnaz-zh", "node_id": "MDQ6VXNlcjU0NTg0Mjkw", "organizations_url": "https://api.github.com/users/gulnaz-zh/orgs", "received_events_url": "https://api.github.com/users/gulnaz-zh/received_events", "repos_url": "https://api.github.com/users/gulnaz-zh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gulnaz-zh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gulnaz-zh/subscriptions", "type": "User", "url": "https://api.github.com/users/gulnaz-zh" }
https://github.com/huggingface/datasets/issues/5749
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-06-30T11:31:17Z
2023-04-18T12:57:08Z
null
[ "I got the same error, and the official website for visual genome is down. Did you solve this problem? ", "I am in the same situation now :( ", "Thanks for reporting, @gulnaz-zh.\r\n\r\nI am investigating it.", "The host server is down: https://visualgenome.org/\r\n\r\nWe are contacting the dataset authors.", "Apart form data host server being down, there is an additional issue with the `datasets` library introduced by this PR:\r\n- #5238\r\n\r\nI am working to fix it.", "PR that fixes the AttributeError: https://huggingface.co/datasets/visual_genome/discussions/2", "For the issue with their data host server being down, I have opened a discussion in the \"Community\" tab of the Hub dataset: https://huggingface.co/datasets/visual_genome/discussions/3\r\nLet's continue the discussion there.", "The authors just replied to us with their new URL: https://homes.cs.washington.edu/~ranjay/visualgenome/\r\n\r\nWe have fixed the datasets loading script, which is operative again." ]
completed
[]
AttributeError: 'Version' object has no attribute 'match'
NONE
https://api.github.com/repos/huggingface/datasets/issues/5749/timeline
### Describe the bug When I run from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') AttributeError: 'Version' object has no attribute 'match' ### Steps to reproduce the bug from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') ### Expected behavior This is error trace: Downloading and preparing dataset visual_genome/region_descriptions_v1.2.0 to C:/Users/Acer/.cache/huggingface/datasets/visual_genome/region_descriptions_v1.2.0/1.2.0/136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3... --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[6], line 1 ----> 1 data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') File ~\.conda\envs\aai\Lib\site-packages\datasets\load.py:1791, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 1788 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1790 # Download and prepare data -> 1791 builder_instance.download_and_prepare( 1792 download_config=download_config, 1793 download_mode=download_mode, 1794 verification_mode=verification_mode, 1795 try_from_hf_gcs=try_from_hf_gcs, 1796 num_proc=num_proc, 1797 storage_options=storage_options, 1798 ) 1800 # Build dataset for splits 1801 keep_in_memory = ( 1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1803 ) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:891, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 889 if num_proc is not None: 890 prepare_split_kwargs["num_proc"] = num_proc --> 891 self._download_and_prepare( 892 dl_manager=dl_manager, 893 verification_mode=verification_mode, 894 **prepare_split_kwargs, 895 **download_and_prepare_kwargs, 896 ) 897 # Sync info 898 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:1651, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1651 super()._download_and_prepare( 1652 dl_manager, 1653 verification_mode, 1654 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS 1655 or verification_mode == VerificationMode.ALL_CHECKS, 1656 **prepare_splits_kwargs, 1657 ) File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:964, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 962 split_dict = SplitDict(dataset_name=self.name) 963 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 964 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 966 # Checksums verification 967 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums: File 
~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:377, in VisualGenome._split_generators(self, dl_manager) 375 def _split_generators(self, dl_manager): 376 # Download image meta datas. --> 377 image_metadatas_dir = dl_manager.download_and_extract(self.config.image_metadata_url) 378 image_metadatas_file = os.path.join( 379 image_metadatas_dir, _get_decompressed_filename_from_url(self.config.image_metadata_url) 380 ) 382 # Download annotations File ~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:328, in VisualGenomeConfig.image_metadata_url(self) 326 @property 327 def image_metadata_url(self): --> 328 if not self.version.match(_LATEST_VERSIONS["image_metadata"]): 329 logger.warning( 330 f"Latest image metadata version is {_LATEST_VERSIONS['image_metadata']}. Trying to generate a dataset of version: {self.version}. Please double check that image data are unchanged between the two versions." 331 ) 332 return f"{_BASE_ANNOTATION_URL}/image_data.json.zip" ### Environment info datasets 2.11.0 python 3.11.3
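The traceback above bottoms out in the loading script's `self.version.match(...)` call, so the script relies on a method that the installed `datasets` `Version` object no longer provides. As a hedged sketch (not the fix that was actually shipped), the same latest-version check can be expressed with `packaging`, which only needs the version strings; the `_LATEST_VERSIONS` value is assumed for illustration:

```python
from packaging import version

_LATEST_VERSIONS = {"image_metadata": "1.2.0"}  # value assumed for illustration

def is_latest(config_version: str, key: str) -> bool:
    # Compare parsed version strings instead of calling Version.match,
    # which newer `datasets` releases no longer expose.
    return version.parse(str(config_version)) == version.parse(_LATEST_VERSIONS[key])

print(is_latest("1.2.0", "image_metadata"))  # True
print(is_latest("1.0.0", "image_metadata"))  # False
```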
https://api.github.com/repos/huggingface/datasets
null
1,668,016,321
https://api.github.com/repos/huggingface/datasets/issues/5749/comments
I_kwDODunzps5ja-jB
null
5,749
https://api.github.com/repos/huggingface/datasets/issues/5749/events
false
open
2023-04-14T05:07:31Z
null
https://api.github.com/repos/huggingface/datasets/issues/5748
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5748/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5748/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1772912?v=4", "events_url": "https://api.github.com/users/ericxsun/events{/privacy}", "followers_url": "https://api.github.com/users/ericxsun/followers", "following_url": "https://api.github.com/users/ericxsun/following{/other_user}", "gists_url": "https://api.github.com/users/ericxsun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ericxsun", "id": 1772912, "login": "ericxsun", "node_id": "MDQ6VXNlcjE3NzI5MTI=", "organizations_url": "https://api.github.com/users/ericxsun/orgs", "received_events_url": "https://api.github.com/users/ericxsun/received_events", "repos_url": "https://api.github.com/users/ericxsun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ericxsun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericxsun/subscriptions", "type": "User", "url": "https://api.github.com/users/ericxsun" }
https://github.com/huggingface/datasets/pull/5748
[]
false
2023-04-14T05:07:31Z
null
null
[]
null
[]
[BUG FIX] Issue 5739
NONE
https://api.github.com/repos/huggingface/datasets/issues/5748/timeline
A fix for https://github.com/huggingface/datasets/issues/5739
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5748.diff", "html_url": "https://github.com/huggingface/datasets/pull/5748", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5748" }
1,667,517,024
https://api.github.com/repos/huggingface/datasets/issues/5748/comments
PR_kwDODunzps5OSgNH
null
5,748
https://api.github.com/repos/huggingface/datasets/issues/5748/events
true
closed
2023-04-13T23:20:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/5747
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5747/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5747/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/maddiedawson", "id": 106995444, "login": "maddiedawson", "node_id": "U_kgDOBmCe9A", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "repos_url": "https://api.github.com/users/maddiedawson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "type": "User", "url": "https://api.github.com/users/maddiedawson" }
https://github.com/huggingface/datasets/pull/5747
[]
false
2024-01-08T18:31:50Z
2024-01-08T18:31:50Z
null
[]
null
[]
[WIP] Add Dataset.to_spark
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5747/timeline
null
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5747.diff", "html_url": "https://github.com/huggingface/datasets/pull/5747", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5747.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5747" }
1,667,270,412
https://api.github.com/repos/huggingface/datasets/issues/5747/comments
PR_kwDODunzps5ORtBF
null
5,747
https://api.github.com/repos/huggingface/datasets/issues/5747/events
true
closed
2023-04-13T20:45:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/5746
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5746/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5746/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7485661?v=4", "events_url": "https://api.github.com/users/bbbxyz/events{/privacy}", "followers_url": "https://api.github.com/users/bbbxyz/followers", "following_url": "https://api.github.com/users/bbbxyz/following{/other_user}", "gists_url": "https://api.github.com/users/bbbxyz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bbbxyz", "id": 7485661, "login": "bbbxyz", "node_id": "MDQ6VXNlcjc0ODU2NjE=", "organizations_url": "https://api.github.com/users/bbbxyz/orgs", "received_events_url": "https://api.github.com/users/bbbxyz/received_events", "repos_url": "https://api.github.com/users/bbbxyz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bbbxyz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bbbxyz/subscriptions", "type": "User", "url": "https://api.github.com/users/bbbxyz" }
https://github.com/huggingface/datasets/pull/5746
[]
false
2023-04-14T13:15:38Z
2023-04-14T13:08:42Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006461 / 0.011353 (-0.004892) | 0.004671 / 0.011008 (-0.006337) | 0.097329 / 0.038508 (0.058821) | 0.028380 / 0.023109 (0.005270) | 0.369892 / 0.275898 (0.093994) | 0.398244 / 0.323480 (0.074764) | 0.004795 / 0.007986 (-0.003190) | 0.004866 / 0.004328 (0.000538) | 0.075060 / 0.004250 (0.070809) | 0.035678 / 0.037052 (-0.001374) | 0.372197 / 0.258489 (0.113708) | 0.407509 / 0.293841 (0.113668) | 0.031557 / 0.128546 (-0.096989) | 0.011608 / 0.075646 (-0.064038) | 0.325467 / 0.419271 (-0.093805) | 0.042590 / 0.043533 (-0.000943) | 0.373738 / 0.255139 (0.118599) | 0.395793 / 0.283200 (0.112593) | 0.082335 / 0.141683 (-0.059348) | 1.471582 / 1.452155 (0.019427) | 1.535834 / 1.492716 (0.043117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192432 / 0.018006 (0.174426) | 0.404423 / 0.000490 (0.403933) | 0.003252 / 0.000200 (0.003052) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025312 / 0.037411 (-0.012099) | 0.099964 / 0.014526 (0.085438) | 0.108779 / 0.176557 (-0.067777) | 0.170438 / 0.737135 (-0.566697) | 0.110116 / 0.296338 (-0.186223) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420402 / 0.215209 (0.205193) | 4.179142 / 2.077655 (2.101487) | 
1.858114 / 1.504120 (0.353994) | 1.674452 / 1.541195 (0.133257) | 1.697839 / 1.468490 (0.229349) | 0.694707 / 4.584777 (-3.890070) | 3.394321 / 3.745712 (-0.351391) | 1.918437 / 5.269862 (-3.351425) | 1.277954 / 4.565676 (-3.287723) | 0.082357 / 0.424275 (-0.341918) | 0.012206 / 0.007607 (0.004598) | 0.522093 / 0.226044 (0.296049) | 5.239604 / 2.268929 (2.970675) | 2.347764 / 55.444624 (-53.096860) | 1.996864 / 6.876477 (-4.879613) | 2.050820 / 2.142072 (-0.091253) | 0.806110 / 4.805227 (-3.999118) | 0.151061 / 6.500664 (-6.349603) | 0.066438 / 0.075469 (-0.009031) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211233 / 1.841788 (-0.630554) | 14.054422 / 8.074308 (5.980114) | 14.110141 / 10.191392 (3.918749) | 0.129962 / 0.680424 (-0.550462) | 0.017271 / 0.534201 (-0.516930) | 0.386410 / 0.579283 (-0.192873) | 0.392648 / 0.434364 (-0.041716) | 0.444940 / 0.540337 (-0.095398) | 0.533535 / 1.386936 (-0.853401) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006865 / 0.011353 (-0.004488) | 0.004662 / 0.011008 (-0.006346) | 0.077837 / 0.038508 (0.039329) | 0.028258 / 0.023109 (0.005149) | 0.346136 / 0.275898 (0.070238) | 0.380414 / 0.323480 (0.056934) | 0.005039 / 0.007986 (-0.002947) | 0.004967 / 0.004328 (0.000638) | 0.077774 / 0.004250 (0.073523) | 0.037504 / 0.037052 (0.000452) | 0.341550 / 0.258489 (0.083061) | 0.382494 / 0.293841 (0.088653) | 0.031881 / 0.128546 (-0.096665) | 0.011746 / 0.075646 (-0.063901) | 0.087087 / 0.419271 (-0.332185) | 0.043108 / 0.043533 (-0.000425) | 0.344103 / 0.255139 (0.088964) | 0.366613 / 0.283200 (0.083413) | 0.090399 / 0.141683 (-0.051284) | 1.492675 / 1.452155 (0.040520) | 1.588666 / 1.492716 (0.095950) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191859 / 0.018006 (0.173853) | 0.412514 / 0.000490 (0.412025) | 0.001953 / 0.000200 (0.001753) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025159 / 0.037411 (-0.012252) | 0.100125 / 0.014526 (0.085599) | 0.106000 / 0.176557 (-0.070556) | 0.160710 / 0.737135 (-0.576425) | 0.110449 / 0.296338 (-0.185889) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436636 / 0.215209 (0.221427) | 4.364597 / 2.077655 (2.286942) | 2.077492 / 1.504120 (0.573372) | 1.868248 / 1.541195 (0.327053) | 1.911218 / 1.468490 (0.442728) | 0.700306 / 4.584777 (-3.884471) | 3.385428 / 3.745712 (-0.360284) | 2.965384 / 5.269862 (-2.304478) | 1.522093 / 4.565676 (-3.043583) | 0.082805 / 0.424275 (-0.341470) | 0.012432 / 0.007607 (0.004825) | 0.538478 / 0.226044 (0.312433) | 5.383207 / 2.268929 (3.114278) | 2.525177 / 55.444624 (-52.919447) | 2.179632 / 6.876477 (-4.696845) | 2.280768 / 2.142072 (0.138695) | 0.805869 / 4.805227 (-3.999358) | 0.152716 / 6.500664 (-6.347948) | 0.067848 / 0.075469 (-0.007621) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318899 / 1.841788 (-0.522889) | 14.416310 / 8.074308 (6.342002) | 14.172804 / 10.191392 (3.981412) | 0.141729 / 0.680424 (-0.538695) | 0.016785 / 0.534201 (-0.517416) | 0.378626 / 0.579283 (-0.200657) | 0.387153 / 0.434364 (-0.047211) | 0.439950 / 0.540337 (-0.100388) | 0.523958 / 1.386936 (-0.862978) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7c3a9b057c476c40d157bd7a5d57f49066239df0 \"CML watermark\")\n" ]
null
[]
Fix link in docs
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5746/timeline
Fixes a broken link in the use_with_pytorch docs
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5746.diff", "html_url": "https://github.com/huggingface/datasets/pull/5746", "merged_at": "2023-04-14T13:08:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/5746.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5746" }
1,667,102,459
https://api.github.com/repos/huggingface/datasets/issues/5746/comments
PR_kwDODunzps5ORIUU
null
5,746
https://api.github.com/repos/huggingface/datasets/issues/5746/events
true
open
2023-04-13T20:29:55Z
null
https://api.github.com/repos/huggingface/datasets/issues/5745
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5745/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5745/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4", "events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}", "followers_url": "https://api.github.com/users/keyboardAnt/followers", "following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}", "gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/keyboardAnt", "id": 15572698, "login": "keyboardAnt", "node_id": "MDQ6VXNlcjE1NTcyNjk4", "organizations_url": "https://api.github.com/users/keyboardAnt/orgs", "received_events_url": "https://api.github.com/users/keyboardAnt/received_events", "repos_url": "https://api.github.com/users/keyboardAnt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions", "type": "User", "url": "https://api.github.com/users/keyboardAnt" }
https://github.com/huggingface/datasets/pull/5745
[]
false
2023-04-21T15:22:43Z
null
null
[ "Have met the same problem with datasets==2.8.0, pandas==2.0.0. It could be solved by installing the latest version of datasets or using datasets==2.8.0, pandas==1.5.3.", "Pandas 2.0.0 has removed support to passing `mangle_dupe_cols`.\r\n\r\nHowever, our `datasets` library does not use this parameter: it only passes it to pandas if the user passes it to `load_dataset`.\r\n\r\nYou should better:\r\n- Either \"take steps to stop the use of 'mangle_dupe_cols'\" (as it was suggested in the deprecation warning in pandas-1.5.3)\r\n- Or pin pandas (< 2.0.0) in your local requirements file\r\n\r\nPlease note that from `datasets` library, we don't want to force users to use a specific pandas version. We would like to support users as well:\r\n- that use pandas < 1.5.3\r\n- that use pandas >= 2.0.0 and that do not pass the 'mangle_dupe_cols' parameter", "`datasets` 2.11 doesn't pass `mangle_dupe_cols` unless the user specifies it indeed, so I think we're fine" ]
null
[]
[BUG FIX] Issue 5744
NONE
https://api.github.com/repos/huggingface/datasets/issues/5745/timeline
A temporary fix for https://github.com/huggingface/datasets/issues/5744.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5745.diff", "html_url": "https://github.com/huggingface/datasets/pull/5745", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5745.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5745" }
1,667,086,143
https://api.github.com/repos/huggingface/datasets/issues/5745/comments
PR_kwDODunzps5ORE2n
null
5,745
https://api.github.com/repos/huggingface/datasets/issues/5745/events
true
closed
2023-04-13T20:21:28Z
null
https://api.github.com/repos/huggingface/datasets/issues/5744
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5744/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5744/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4", "events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}", "followers_url": "https://api.github.com/users/keyboardAnt/followers", "following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}", "gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/keyboardAnt", "id": 15572698, "login": "keyboardAnt", "node_id": "MDQ6VXNlcjE1NTcyNjk4", "organizations_url": "https://api.github.com/users/keyboardAnt/orgs", "received_events_url": "https://api.github.com/users/keyboardAnt/received_events", "repos_url": "https://api.github.com/users/keyboardAnt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions", "type": "User", "url": "https://api.github.com/users/keyboardAnt" }
https://github.com/huggingface/datasets/issues/5744
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-07-06T17:01:59Z
2023-07-06T17:01:59Z
null
[ "Thanks for reporting, @keyboardAnt.\r\n\r\nWe haven't noticed any crash in our CI tests. Could you please indicate specifically the `load_dataset` command that crashes in your side, so that we can reproduce it?", "This has been fixed in `datasets` 2.11" ]
completed
[]
[BUG] With Pandas 2.0.0, `load_dataset` raises `TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'`
NONE
https://api.github.com/repos/huggingface/datasets/issues/5744/timeline
The `load_dataset` function works with Pandas `1.5.3` (emitting only a FutureWarning) but crashes with Pandas `2.0.0`. For your convenience, I opened a draft Pull Request to fix it quickly: https://github.com/huggingface/datasets/pull/5745 --- * The FutureWarning mentioned above: ``` FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' ```
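A hedged reproduction sketch of the failure mode described above, assuming a local CSV file (`example.csv` is a placeholder): the CSV builder forwards unknown keyword arguments to `pandas.read_csv`, so the call below emits the FutureWarning on pandas 1.5.3 and raises the reported `TypeError` on pandas 2.0.0.

```python
from datasets import load_dataset

# FutureWarning on pandas 1.5.3; TypeError on pandas 2.0.0, where
# `read_csv` no longer accepts the `mangle_dupe_cols` keyword.
ds = load_dataset(
    "csv",
    data_files="example.csv",  # placeholder path
    mangle_dupe_cols=True,
)
```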
https://api.github.com/repos/huggingface/datasets
null
1,667,076,620
https://api.github.com/repos/huggingface/datasets/issues/5744/comments
I_kwDODunzps5jXZIM
null
5,744
https://api.github.com/repos/huggingface/datasets/issues/5744/events
false
closed
2023-04-13T17:28:33Z
null
https://api.github.com/repos/huggingface/datasets/issues/5743
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5743/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5743/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/71216295?v=4", "events_url": "https://api.github.com/users/syedabdullahhassan/events{/privacy}", "followers_url": "https://api.github.com/users/syedabdullahhassan/followers", "following_url": "https://api.github.com/users/syedabdullahhassan/following{/other_user}", "gists_url": "https://api.github.com/users/syedabdullahhassan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/syedabdullahhassan", "id": 71216295, "login": "syedabdullahhassan", "node_id": "MDQ6VXNlcjcxMjE2Mjk1", "organizations_url": "https://api.github.com/users/syedabdullahhassan/orgs", "received_events_url": "https://api.github.com/users/syedabdullahhassan/received_events", "repos_url": "https://api.github.com/users/syedabdullahhassan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/syedabdullahhassan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/syedabdullahhassan/subscriptions", "type": "User", "url": "https://api.github.com/users/syedabdullahhassan" }
https://github.com/huggingface/datasets/issues/5743
[]
false
2023-04-17T12:23:18Z
2023-04-17T12:23:18Z
null
[ "We no longer depend on `dataclasses` (for almost a year), so I don't think our package is the problematic one. \r\n\r\nI think it makes more sense to raise this issue in the `dataclasses` repo: https://github.com/ericvsmith/dataclasses." ]
completed
[]
dataclass.py in virtual environment is overriding the stdlib module "dataclasses"
NONE
https://api.github.com/repos/huggingface/datasets/issues/5743/timeline
### Describe the bug "e:\Krish_naik\FSDSRegression\venv\Lib\dataclasses.py" is overriding the stdlib module "dataclasses" ### Steps to reproduce the bug module issue ### Expected behavior overriding the stdlib module "dataclasses" ### Environment info VS code
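A generic way to confirm this kind of shadowing (standard Python, not specific to this repo) is to check which file the interpreter actually imports for `dataclasses`:

```python
import dataclasses

# The stdlib module should resolve to something like
# .../pythonX.Y/dataclasses.py. A path inside the virtual environment,
# e.g. venv/Lib/dataclasses.py, means a stray backport package or local
# file is shadowing the stdlib module and should be removed
# (for the backport: `pip uninstall dataclasses`).
print(dataclasses.__file__)
```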
https://api.github.com/repos/huggingface/datasets
null
1,666,843,832
https://api.github.com/repos/huggingface/datasets/issues/5743/comments
I_kwDODunzps5jWgS4
null
5,743
https://api.github.com/repos/huggingface/datasets/issues/5743/events
false
closed
2023-04-13T11:10:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/5742
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5742/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5742/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amyeroberts", "id": 22614925, "login": "amyeroberts", "node_id": "MDQ6VXNlcjIyNjE0OTI1", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "repos_url": "https://api.github.com/users/amyeroberts/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "type": "User", "url": "https://api.github.com/users/amyeroberts" }
https://github.com/huggingface/datasets/pull/5742
[]
false
2023-04-21T13:18:14Z
2023-04-21T13:11:09Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006693 / 0.011353 (-0.004660) | 0.004586 / 0.011008 (-0.006422) | 0.097238 / 0.038508 (0.058730) | 0.027912 / 0.023109 (0.004802) | 0.347339 / 0.275898 (0.071441) | 0.393847 / 0.323480 (0.070368) | 0.005105 / 0.007986 (-0.002880) | 0.004750 / 0.004328 (0.000422) | 0.074671 / 0.004250 (0.070421) | 0.037912 / 0.037052 (0.000860) | 0.368973 / 0.258489 (0.110483) | 0.403983 / 0.293841 (0.110142) | 0.030817 / 0.128546 (-0.097730) | 0.011813 / 0.075646 (-0.063833) | 0.324470 / 0.419271 (-0.094802) | 0.044232 / 0.043533 (0.000699) | 0.347623 / 0.255139 (0.092484) | 0.382458 / 0.283200 (0.099259) | 0.086603 / 0.141683 (-0.055080) | 1.485778 / 1.452155 (0.033623) | 1.549776 / 1.492716 (0.057059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200154 / 0.018006 (0.182147) | 0.440645 / 0.000490 (0.440155) | 0.003664 / 0.000200 (0.003464) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023635 / 0.037411 (-0.013776) | 0.094969 / 0.014526 (0.080443) | 0.103630 / 0.176557 (-0.072927) | 0.168655 / 0.737135 (-0.568480) | 0.105850 / 0.296338 (-0.190488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425224 / 0.215209 (0.210015) | 4.236618 / 2.077655 (2.158963) | 1.917091 
/ 1.504120 (0.412971) | 1.746984 / 1.541195 (0.205789) | 1.817766 / 1.468490 (0.349276) | 0.700989 / 4.584777 (-3.883788) | 3.412577 / 3.745712 (-0.333135) | 3.049311 / 5.269862 (-2.220551) | 1.607692 / 4.565676 (-2.957984) | 0.083410 / 0.424275 (-0.340865) | 0.012601 / 0.007607 (0.004994) | 0.528244 / 0.226044 (0.302200) | 5.284134 / 2.268929 (3.015206) | 2.391885 / 55.444624 (-53.052740) | 2.020018 / 6.876477 (-4.856459) | 2.105908 / 2.142072 (-0.036164) | 0.801262 / 4.805227 (-4.003965) | 0.151467 / 6.500664 (-6.349197) | 0.066529 / 0.075469 (-0.008940) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203894 / 1.841788 (-0.637894) | 13.827561 / 8.074308 (5.753253) | 14.136730 / 10.191392 (3.945338) | 0.143829 / 0.680424 (-0.536595) | 0.016410 / 0.534201 (-0.517791) | 0.378194 / 0.579283 (-0.201089) | 0.391235 / 0.434364 (-0.043129) | 0.439261 / 0.540337 (-0.101076) | 0.527181 / 1.386936 (-0.859755) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006639 / 0.011353 (-0.004714) | 0.004469 / 0.011008 (-0.006540) | 0.076495 / 0.038508 (0.037987) | 0.027880 / 0.023109 (0.004771) | 0.342807 / 0.275898 (0.066909) | 0.374258 / 0.323480 (0.050778) | 0.005543 / 0.007986 (-0.002443) | 0.003362 / 0.004328 (-0.000966) | 0.075064 / 0.004250 (0.070813) | 0.039209 / 0.037052 (0.002156) | 0.342490 / 0.258489 (0.084001) | 0.382135 / 0.293841 (0.088294) | 0.030356 / 0.128546 (-0.098191) | 0.011762 / 0.075646 (-0.063884) | 0.086031 / 0.419271 (-0.333241) | 0.041991 / 0.043533 (-0.001542) | 0.340323 / 0.255139 (0.085184) | 0.364160 / 0.283200 (0.080961) | 0.088483 / 0.141683 (-0.053200) | 1.502836 / 1.452155 (0.050681) | 1.570438 / 1.492716 (0.077722) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218486 / 0.018006 (0.200480) | 0.405251 / 0.000490 (0.404761) | 0.000398 / 0.000200 (0.000198) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025738 / 0.037411 (-0.011673) | 0.100390 / 0.014526 (0.085864) | 0.109913 / 0.176557 (-0.066644) | 0.161310 / 0.737135 (-0.575826) | 0.113269 / 0.296338 (-0.183069) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438083 / 0.215209 (0.222874) | 4.377742 / 2.077655 (2.300087) | 2.069949 / 1.504120 (0.565829) | 1.857807 / 1.541195 (0.316613) | 1.881315 / 1.468490 (0.412825) | 0.695373 / 4.584777 (-3.889404) | 3.440287 / 3.745712 (-0.305425) | 1.842888 / 5.269862 (-3.426973) | 1.146655 / 4.565676 (-3.419022) | 0.083386 / 0.424275 (-0.340889) | 0.012290 / 0.007607 (0.004683) | 0.545672 / 0.226044 (0.319628) | 5.469568 / 2.268929 (3.200639) | 2.511886 / 55.444624 (-52.932739) | 2.184210 / 6.876477 (-4.692267) | 2.329822 / 2.142072 (0.187749) | 0.804114 / 4.805227 (-4.001114) | 0.151651 / 6.500664 (-6.349013) | 0.067269 / 0.075469 (-0.008200) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272564 / 1.841788 (-0.569223) | 14.180708 / 8.074308 (6.106400) | 14.181657 / 10.191392 (3.990265) | 0.131443 / 0.680424 (-0.548981) | 0.016513 / 0.534201 (-0.517688) | 0.383786 / 0.579283 (-0.195497) | 0.397678 / 0.434364 (-0.036686) | 0.447003 / 0.540337 (-0.093334) | 0.539453 / 1.386936 (-0.847483) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#649d5a3315f9e7666713b6affe318ee00c7163a0 \"CML watermark\")\n" ]
null
[]
Warning specifying future change in to_tf_dataset behaviour
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5742/timeline
Adds a warning about the future change in `to_tf_dataset` behaviour that will take effect once #5602 is merged
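For context, a minimal sketch of how such a notice is typically emitted with the standard `warnings` module; the message text and placement here are assumptions, not the PR's actual code:

```python
import warnings

def to_tf_dataset(*args, **kwargs):
    # Hypothetical: announce the upcoming behaviour change without
    # altering current behaviour yet.
    warnings.warn(
        "The default behaviour of to_tf_dataset will change in a future "
        "release; see https://github.com/huggingface/datasets/pull/5602.",
        FutureWarning,
        stacklevel=2,
    )
    ...
```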
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5742.diff", "html_url": "https://github.com/huggingface/datasets/pull/5742", "merged_at": "2023-04-21T13:11:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/5742.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5742" }
1,666,209,738
https://api.github.com/repos/huggingface/datasets/issues/5742/comments
PR_kwDODunzps5OOH-W
null
5,742
https://api.github.com/repos/huggingface/datasets/issues/5742/events
true
closed
2023-04-13T07:17:02Z
null
https://api.github.com/repos/huggingface/datasets/issues/5741
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5741/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5741/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5741
[]
false
2023-04-13T09:48:10Z
2023-04-13T09:40:50Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007448 / 0.011353 (-0.003905) | 0.005182 / 0.011008 (-0.005826) | 0.098718 / 0.038508 (0.060210) | 0.034594 / 0.023109 (0.011485) | 0.317301 / 0.275898 (0.041403) | 0.357800 / 0.323480 (0.034320) | 0.005860 / 0.007986 (-0.002126) | 0.004267 / 0.004328 (-0.000061) | 0.074876 / 0.004250 (0.070626) | 0.048002 / 0.037052 (0.010950) | 0.333360 / 0.258489 (0.074871) | 0.362080 / 0.293841 (0.068239) | 0.035957 / 0.128546 (-0.092589) | 0.012245 / 0.075646 (-0.063401) | 0.332970 / 0.419271 (-0.086301) | 0.050825 / 0.043533 (0.007293) | 0.313936 / 0.255139 (0.058797) | 0.340684 / 0.283200 (0.057485) | 0.106630 / 0.141683 (-0.035053) | 1.427898 / 1.452155 (-0.024257) | 1.547518 / 1.492716 (0.054801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296952 / 0.018006 (0.278945) | 0.515708 / 0.000490 (0.515218) | 0.004225 / 0.000200 (0.004025) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029365 / 0.037411 (-0.008046) | 0.111142 / 0.014526 (0.096616) | 0.124414 / 0.176557 (-0.052142) | 0.185227 / 0.737135 (-0.551908) | 0.129545 / 0.296338 (-0.166793) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403303 / 0.215209 (0.188094) | 4.044138 / 2.077655 (1.966483) | 
1.803622 / 1.504120 (0.299502) | 1.615436 / 1.541195 (0.074242) | 1.703576 / 1.468490 (0.235086) | 0.706398 / 4.584777 (-3.878379) | 3.912995 / 3.745712 (0.167283) | 4.004575 / 5.269862 (-1.265287) | 2.101592 / 4.565676 (-2.464085) | 0.087280 / 0.424275 (-0.336995) | 0.012564 / 0.007607 (0.004957) | 0.508484 / 0.226044 (0.282440) | 5.089351 / 2.268929 (2.820422) | 2.269022 / 55.444624 (-53.175602) | 1.933375 / 6.876477 (-4.943102) | 2.136783 / 2.142072 (-0.005289) | 0.862624 / 4.805227 (-3.942603) | 0.172107 / 6.500664 (-6.328557) | 0.066694 / 0.075469 (-0.008775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172513 / 1.841788 (-0.669275) | 15.877519 / 8.074308 (7.803211) | 14.687476 / 10.191392 (4.496084) | 0.189392 / 0.680424 (-0.491032) | 0.017334 / 0.534201 (-0.516866) | 0.420201 / 0.579283 (-0.159082) | 0.418502 / 0.434364 (-0.015862) | 0.489130 / 0.540337 (-0.051207) | 0.580678 / 1.386936 (-0.806258) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007942 / 0.011353 (-0.003411) | 0.005312 / 0.011008 (-0.005696) | 0.074684 / 0.038508 (0.036176) | 0.035952 / 0.023109 (0.012843) | 0.349672 / 0.275898 (0.073774) | 0.377157 / 0.323480 (0.053678) | 0.006399 / 0.007986 (-0.001586) | 0.005769 / 0.004328 (0.001441) | 0.074283 / 0.004250 (0.070032) | 0.053217 / 0.037052 (0.016165) | 0.342545 / 0.258489 (0.084056) | 0.383663 / 0.293841 (0.089822) | 0.037234 / 0.128546 (-0.091312) | 0.012349 / 0.075646 (-0.063298) | 0.086522 / 0.419271 (-0.332749) | 0.049888 / 0.043533 (0.006355) | 0.337686 / 0.255139 (0.082547) | 0.361564 / 0.283200 (0.078365) | 0.104902 / 0.141683 (-0.036781) | 1.478259 / 1.452155 (0.026104) | 1.576376 / 1.492716 (0.083660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.339760 / 0.018006 (0.321753) | 0.530946 / 0.000490 (0.530456) | 0.000474 / 0.000200 (0.000274) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029685 / 0.037411 (-0.007726) | 0.109409 / 0.014526 (0.094883) | 0.125579 / 0.176557 (-0.050978) | 0.175378 / 0.737135 (-0.561757) | 0.130672 / 0.296338 (-0.165667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428456 / 0.215209 (0.213247) | 4.238731 / 2.077655 (2.161077) | 2.046703 / 1.504120 (0.542583) | 1.850701 / 1.541195 (0.309506) | 1.909290 / 1.468490 (0.440800) | 0.714314 / 4.584777 (-3.870463) | 3.816056 / 3.745712 (0.070344) | 2.118567 / 5.269862 (-3.151295) | 1.348017 / 4.565676 (-3.217659) | 0.087140 / 0.424275 (-0.337135) | 0.012546 / 0.007607 (0.004938) | 0.538041 / 0.226044 (0.311997) | 5.381822 / 2.268929 (3.112893) | 2.525685 / 55.444624 (-52.918939) | 2.178659 / 6.876477 (-4.697817) | 2.381054 / 2.142072 (0.238981) | 0.844404 / 4.805227 (-3.960823) | 0.171802 / 6.500664 (-6.328862) | 0.065630 / 0.075469 (-0.009839) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262187 / 1.841788 (-0.579600) | 16.197668 / 8.074308 (8.123360) | 15.148636 / 10.191392 (4.957244) | 0.152601 / 0.680424 (-0.527823) | 0.020238 / 0.534201 (-0.513963) | 0.420141 / 0.579283 (-0.159142) | 0.416295 / 0.434364 (-0.018068) | 0.487051 / 0.540337 (-0.053286) | 0.581942 / 1.386936 (-0.804994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9615e5af75b190c4e7b66792f9ba444f352765a0 \"CML watermark\")\n" ]
null
[]
Fix CI warnings
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5741/timeline
Fix warnings in our CI tests.
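One generic way to keep warnings from creeping back into a CI suite (a pytest-style sketch under that assumption, not necessarily what this PR does) is to escalate the relevant warning category to an error inside a test:

```python
import warnings

def test_no_future_warnings():
    # Escalate FutureWarning to an error so any new deprecation fails CI
    # instead of scrolling past in the logs.
    with warnings.catch_warnings():
        warnings.simplefilter("error", FutureWarning)
        # ... exercise the library code under test here ...
        pass
```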
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5741.diff", "html_url": "https://github.com/huggingface/datasets/pull/5741", "merged_at": "2023-04-13T09:40:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/5741.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5741" }
1,665,860,919
https://api.github.com/repos/huggingface/datasets/issues/5741/comments
PR_kwDODunzps5OM9nZ
null
5,741
https://api.github.com/repos/huggingface/datasets/issues/5741/events
true
closed
2023-04-12T08:52:35Z
null
https://api.github.com/repos/huggingface/datasets/issues/5740
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5740/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5740/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5740
[]
false
2023-04-13T11:01:24Z
2023-04-13T10:54:13Z
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007003 / 0.011353 (-0.004350) | 0.004854 / 0.011008 (-0.006154) | 0.096982 / 0.038508 (0.058474) | 0.033218 / 0.023109 (0.010109) | 0.314088 / 0.275898 (0.038190) | 0.351315 / 0.323480 (0.027835) | 0.005679 / 0.007986 (-0.002307) | 0.005404 / 0.004328 (0.001075) | 0.071773 / 0.004250 (0.067522) | 0.044593 / 0.037052 (0.007540) | 0.323643 / 0.258489 (0.065154) | 0.357172 / 0.293841 (0.063331) | 0.036782 / 0.128546 (-0.091764) | 0.012146 / 0.075646 (-0.063501) | 0.334874 / 0.419271 (-0.084397) | 0.051475 / 0.043533 (0.007942) | 0.305949 / 0.255139 (0.050810) | 0.339326 / 0.283200 (0.056126) | 0.101509 / 0.141683 (-0.040174) | 1.458254 / 1.452155 (0.006099) | 1.535252 / 1.492716 (0.042535) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264837 / 0.018006 (0.246831) | 0.441444 / 0.000490 (0.440955) | 0.003331 / 0.000200 (0.003131) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026529 / 0.037411 (-0.010882) | 0.105924 / 0.014526 (0.091398) | 0.117191 / 0.176557 (-0.059365) | 0.176606 / 0.737135 (-0.560529) | 0.123452 / 0.296338 (-0.172887) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412351 / 0.215209 (0.197142) | 4.135468 / 2.077655 (2.057813) | 1.912820 / 1.504120 (0.408700) | 1.738993 / 1.541195 (0.197798) | 1.754228 / 1.468490 
(0.285738) | 0.692239 / 4.584777 (-3.892538) | 3.765672 / 3.745712 (0.019959) | 2.081141 / 5.269862 (-3.188720) | 1.425153 / 4.565676 (-3.140523) | 0.085055 / 0.424275 (-0.339220) | 0.011918 / 0.007607 (0.004311) | 0.517573 / 0.226044 (0.291529) | 5.179809 / 2.268929 (2.910881) | 2.471620 / 55.444624 (-52.973005) | 2.140634 / 6.876477 (-4.735843) | 2.200150 / 2.142072 (0.058077) | 0.831662 / 4.805227 (-3.973566) | 0.168828 / 6.500664 (-6.331836) | 0.062755 / 0.075469 (-0.012714) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196890 / 1.841788 (-0.644898) | 14.826423 / 8.074308 (6.752114) | 14.020782 / 10.191392 (3.829390) | 0.161275 / 0.680424 (-0.519149) | 0.017467 / 0.534201 (-0.516734) | 0.422278 / 0.579283 (-0.157005) | 0.424053 / 0.434364 (-0.010311) | 0.490768 / 0.540337 (-0.049570) | 0.584490 / 1.386936 (-0.802446) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007102 / 0.011353 (-0.004250) | 0.005145 / 0.011008 (-0.005863) | 0.073823 / 0.038508 (0.035315) | 0.032947 / 0.023109 (0.009838) | 0.336978 / 0.275898 (0.061080) | 0.368961 / 0.323480 (0.045481) | 0.006052 / 0.007986 (-0.001934) | 0.003970 / 0.004328 (-0.000358) | 0.072925 / 0.004250 (0.068674) | 0.044502 / 0.037052 (0.007450) | 0.340849 / 0.258489 (0.082360) | 0.381487 / 0.293841 (0.087646) | 0.037207 / 0.128546 (-0.091339) | 0.012095 / 0.075646 (-0.063551) | 0.085206 / 0.419271 (-0.334065) | 0.056236 / 0.043533 (0.012703) | 0.334048 / 0.255139 (0.078909) | 0.360442 / 0.283200 (0.077242) | 0.104402 / 0.141683 (-0.037281) | 1.446907 / 1.452155 (-0.005248) | 1.542430 / 1.492716 (0.049713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238720 / 0.018006 (0.220714) | 0.445857 / 0.000490 (0.445367) | 0.009280 / 0.000200 (0.009080) | 0.000150 / 0.000054 (0.000095) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028414 / 0.037411 (-0.008998) | 0.110506 / 0.014526 (0.095981) | 0.124593 / 0.176557 (-0.051964) | 0.170951 / 0.737135 (-0.566184) | 0.128033 / 0.296338 (-0.168305) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426206 / 0.215209 (0.210997) | 4.267289 / 2.077655 (2.189634) | 2.026880 / 1.504120 (0.522760) | 1.844052 / 1.541195 (0.302858) | 1.897697 / 1.468490 (0.429207) | 0.713545 / 4.584777 (-3.871232) | 3.815052 / 3.745712 (0.069339) | 3.217091 / 5.269862 (-2.052770) | 1.790546 / 4.565676 (-2.775130) | 0.087501 / 0.424275 (-0.336774) | 0.012136 / 0.007607 (0.004529) | 0.534495 / 0.226044 (0.308451) | 5.325913 / 2.268929 (3.056984) | 2.484309 / 55.444624 (-52.960315) | 2.149721 / 6.876477 (-4.726756) | 2.158764 / 2.142072 (0.016692) | 0.855273 / 4.805227 (-3.949954) | 0.170374 / 6.500664 (-6.330290) | 0.064053 / 0.075469 (-0.011416) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253171 / 1.841788 (-0.588617) | 15.254562 / 8.074308 (7.180254) | 14.242119 / 10.191392 (4.050727) | 0.159298 / 0.680424 (-0.521126) | 0.017504 / 0.534201 (-0.516696) | 0.419710 / 0.579283 (-0.159574) | 0.417879 / 0.434364 (-0.016485) | 0.486328 / 0.540337 (-0.054009) | 0.578933 / 1.386936 (-0.808003) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bc38663c8e2c2b0b246791c3ed8bddbff163dd64 \"CML watermark\")\n", "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008476 / 0.011353 (-0.002877) | 0.005745 / 0.011008 (-0.005263) | 0.115307 / 0.038508 (0.076799) | 0.039356 / 0.023109 (0.016247) | 0.367155 / 0.275898 (0.091257) | 0.422147 / 0.323480 (0.098667) | 0.006817 / 0.007986 (-0.001168) | 0.004652 / 0.004328 (0.000323) | 0.084045 / 0.004250 (0.079795) | 0.055483 / 0.037052 (0.018431) | 0.364249 / 0.258489 (0.105760) | 0.415975 / 0.293841 (0.122134) | 0.041322 / 0.128546 (-0.087224) | 0.014178 / 0.075646 (-0.061469) | 0.392658 / 0.419271 (-0.026614) | 0.060156 / 0.043533 (0.016623) | 0.373938 / 0.255139 (0.118799) | 0.397494 / 0.283200 (0.114294) | 0.113811 / 0.141683 (-0.027872) | 1.688581 / 1.452155 (0.236427) | 1.790374 / 1.492716 (0.297658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222203 / 0.018006 (0.204196) | 0.471109 / 0.000490 (0.470619) | 0.007071 / 0.000200 (0.006871) | 0.000156 / 0.000054 (0.000102) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032112 / 0.037411 (-0.005299) | 0.118726 / 0.014526 (0.104200) | 0.134918 / 0.176557 (-0.041639) | 0.207766 / 0.737135 (-0.529369) | 0.139756 / 0.296338 (-0.156582) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479858 / 0.215209 (0.264649) | 4.798428 / 2.077655 (2.720773) | 2.221573 / 1.504120 (0.717453) | 1.964956 / 1.541195 (0.423761) | 2.021763 / 1.468490 (0.553273) | 0.820401 / 4.584777 (-3.764376) | 4.533887 / 3.745712 (0.788175) | 4.121332 / 5.269862 (-1.148529) | 2.195807 / 4.565676 (-2.369869) | 0.103133 / 0.424275 (-0.321142) | 0.014620 / 0.007607 (0.007013) | 0.605012 / 0.226044 (0.378967) | 5.966623 / 2.268929 (3.697694) | 2.844118 / 55.444624 (-52.600506) | 2.463569 / 6.876477 (-4.412907) | 2.597177 / 2.142072 (0.455105) | 0.983201 / 4.805227 (-3.822026) | 0.199500 / 6.500664 (-6.301164) | 0.078387 / 0.075469 (0.002918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.401083 / 1.841788 (-0.440705) | 17.258725 / 8.074308 (9.184417) | 16.825992 / 10.191392 (6.634600) | 0.216762 / 0.680424 (-0.463662) | 0.021135 / 0.534201 (-0.513066) | 0.513688 / 0.579283 (-0.065595) | 0.488892 / 0.434364 (0.054529) | 0.566745 / 0.540337 (0.026408) | 0.688958 / 1.386936 
(-0.697978) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007948 / 0.011353 (-0.003405) | 0.005981 / 0.011008 (-0.005027) | 0.084474 / 0.038508 (0.045966) | 0.037952 / 0.023109 (0.014843) | 0.383359 / 0.275898 (0.107461) | 0.409324 / 0.323480 (0.085844) | 0.006641 / 0.007986 (-0.001344) | 0.004785 / 0.004328 (0.000456) | 0.083214 / 0.004250 (0.078964) | 0.053177 / 0.037052 (0.016125) | 0.393147 / 0.258489 (0.134658) | 0.438496 / 0.293841 (0.144655) | 0.042090 / 0.128546 (-0.086456) | 0.013373 / 0.075646 (-0.062273) | 0.097585 / 0.419271 (-0.321686) | 0.056359 / 0.043533 (0.012826) | 0.378113 / 0.255139 (0.122974) | 0.403874 / 0.283200 (0.120674) | 0.123503 / 0.141683 (-0.018180) | 1.639557 / 1.452155 (0.187403) | 1.759787 / 1.492716 (0.267071) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242534 / 0.018006 (0.224528) | 0.459040 / 0.000490 (0.458550) | 0.000454 / 0.000200 (0.000254) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031747 / 0.037411 (-0.005664) | 0.125823 / 0.014526 (0.111297) | 0.138985 / 0.176557 (-0.037571) | 0.194371 / 0.737135 (-0.542764) | 0.148905 / 0.296338 (-0.147433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508201 / 0.215209 (0.292992) | 5.007519 / 2.077655 (2.929865) | 2.412956 / 1.504120 (0.908836) | 2.143378 / 1.541195 (0.602183) | 2.192966 / 1.468490 (0.724476) | 0.828497 / 
4.584777 (-3.756280) | 4.496457 / 3.745712 (0.750745) | 2.397546 / 5.269862 (-2.872315) | 1.522889 / 4.565676 (-3.042787) | 0.099904 / 0.424275 (-0.324371) | 0.014561 / 0.007607 (0.006954) | 0.627417 / 0.226044 (0.401373) | 6.296441 / 2.268929 (4.027512) | 2.962858 / 55.444624 (-52.481767) | 2.543083 / 6.876477 (-4.333394) | 2.711884 / 2.142072 (0.569811) | 0.997969 / 4.805227 (-3.807259) | 0.200283 / 6.500664 (-6.300382) | 0.075934 / 0.075469 (0.000465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541707 / 1.841788 (-0.300081) | 17.791559 / 8.074308 (9.717251) | 16.782877 / 10.191392 (6.591485) | 0.171954 / 0.680424 (-0.508470) | 0.020506 / 0.534201 (-0.513695) | 0.504189 / 0.579283 (-0.075094) | 0.501655 / 0.434364 (0.067291) | 0.583120 / 0.540337 (0.042782) | 0.694931 / 1.386936 (-0.692005) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53355f308f4ffb9b4071f5d420b5c6767799ef1c \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007613 / 0.011353 (-0.003740) | 0.005057 / 0.011008 (-0.005951) | 0.099147 / 0.038508 (0.060639) | 0.035358 / 0.023109 (0.012249) | 0.303442 / 0.275898 (0.027544) | 0.336898 / 0.323480 (0.013418) | 0.006216 / 0.007986 (-0.001770) | 0.004085 / 0.004328 (-0.000244) | 0.074567 / 0.004250 (0.070317) | 0.050917 / 0.037052 (0.013865) | 0.301786 / 0.258489 (0.043297) | 0.341362 / 0.293841 (0.047521) | 0.037019 / 0.128546 (-0.091528) | 0.011977 / 0.075646 (-0.063669) | 0.334688 / 0.419271 (-0.084583) | 0.051326 / 0.043533 (0.007793) | 0.299878 / 0.255139 (0.044739) | 0.325571 / 0.283200 (0.042371) | 0.110744 / 0.141683 (-0.030939) | 1.480898 / 1.452155 (0.028743) | 1.566917 / 1.492716 (0.074201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253249 / 0.018006 (0.235242) | 0.558576 / 0.000490 (0.558086) | 0.003838 / 0.000200 (0.003638) | 0.000085 / 0.000054 
(0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028731 / 0.037411 (-0.008681) | 0.110643 / 0.014526 (0.096117) | 0.119560 / 0.176557 (-0.056996) | 0.178010 / 0.737135 (-0.559126) | 0.130286 / 0.296338 (-0.166053) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400190 / 0.215209 (0.184981) | 3.999326 / 2.077655 (1.921672) | 1.797332 / 1.504120 (0.293212) | 1.610808 / 1.541195 (0.069613) | 1.679949 / 1.468490 (0.211459) | 0.696539 / 4.584777 (-3.888238) | 3.784766 / 3.745712 (0.039054) | 2.205008 / 5.269862 (-3.064854) | 1.501697 / 4.565676 (-3.063979) | 0.085553 / 0.424275 (-0.338723) | 0.012223 / 0.007607 (0.004616) | 0.494858 / 0.226044 (0.268813) | 4.968535 / 2.268929 (2.699606) | 2.258759 / 55.444624 (-53.185865) | 1.926236 / 6.876477 (-4.950241) | 2.072155 / 2.142072 (-0.069917) | 0.838354 / 4.805227 (-3.966873) | 0.168810 / 6.500664 (-6.331854) | 0.064347 / 0.075469 (-0.011122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.166696 / 1.841788 (-0.675091) | 14.721287 / 8.074308 (6.646979) | 14.319272 / 10.191392 (4.127880) | 0.144534 / 0.680424 (-0.535890) | 0.017502 / 0.534201 (-0.516699) | 0.422682 / 0.579283 (-0.156601) | 0.424426 / 0.434364 (-0.009938) | 0.493561 / 0.540337 (-0.046777) | 0.586765 / 1.386936 (-0.800171) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007764 / 0.011353 (-0.003589) | 0.005516 / 0.011008 (-0.005492) | 0.074745 / 0.038508 (0.036237) | 0.034364 / 0.023109 (0.011255) | 0.344318 / 0.275898 (0.068420) | 0.374779 / 0.323480 (0.051299) | 0.005904 / 0.007986 (-0.002082) | 0.004323 / 0.004328 (-0.000005) | 0.073191 / 0.004250 (0.068941) | 0.051549 / 0.037052 (0.014496) | 0.341792 / 0.258489 (0.083303) | 0.387576 / 0.293841 (0.093735) | 0.037483 / 0.128546 (-0.091063) | 0.012410 / 0.075646 (-0.063237) | 0.086480 / 0.419271 (-0.332791) | 0.050035 / 0.043533 (0.006502) | 0.335475 / 0.255139 (0.080336) | 0.361436 / 0.283200 (0.078236) | 0.106890 / 0.141683 (-0.034792) | 1.464032 / 1.452155 (0.011877) | 1.563490 / 1.492716 (0.070774) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268765 / 0.018006 (0.250758) | 0.563811 / 0.000490 (0.563321) | 0.004904 / 0.000200 (0.004704) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029885 / 0.037411 (-0.007526) | 0.113885 / 0.014526 (0.099359) | 0.124283 / 0.176557 (-0.052274) | 0.173619 / 0.737135 (-0.563517) | 0.131781 / 0.296338 (-0.164557) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420296 / 0.215209 (0.205087) | 4.167656 / 2.077655 (2.090001) | 1.982356 / 1.504120 (0.478237) | 1.792181 / 1.541195 (0.250986) | 1.871459 / 1.468490 (0.402969) | 0.707066 / 4.584777 (-3.877711) | 3.835922 / 3.745712 (0.090210) | 3.506796 / 5.269862 (-1.763066) | 1.857172 / 4.565676 (-2.708505) | 0.086219 / 0.424275 (-0.338056) | 0.012404 / 0.007607 (0.004796) | 0.512393 / 0.226044 (0.286348) | 5.111623 / 2.268929 (2.842695) | 2.493523 / 55.444624 (-52.951101) | 2.188220 / 6.876477 (-4.688257) | 2.319096 / 2.142072 (0.177024) | 0.844084 / 4.805227 (-3.961144) | 0.171130 / 6.500664 (-6.329534) | 0.065913 / 0.075469 (-0.009556) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284768 / 1.841788 (-0.557020) | 15.334610 / 8.074308 (7.260301) | 14.724436 / 10.191392 (4.533044) | 0.188425 / 0.680424 (-0.491999) | 0.017984 / 0.534201 (-0.516217) | 0.428150 / 0.579283 (-0.151133) | 0.429013 / 0.434364 (-0.005351) | 0.500818 / 0.540337 (-0.039519) | 0.592879 / 1.386936 (-0.794057) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ee68da958c2fab3a26d9f0efb1e207ecbcf7ce15 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006870 / 0.011353 (-0.004483) | 0.004702 / 0.011008 (-0.006306) | 0.099258 / 0.038508 (0.060750) | 0.029008 / 0.023109 (0.005899) | 0.330599 / 0.275898 (0.054701) | 0.361163 / 0.323480 (0.037683) | 0.005020 / 0.007986 (-0.002965) | 0.003474 / 0.004328 (-0.000855) | 0.075902 / 0.004250 (0.071651) | 0.037462 / 0.037052 (0.000410) | 0.336213 / 0.258489 (0.077724) | 0.370645 / 0.293841 (0.076804) | 0.032435 / 0.128546 (-0.096111) | 0.011686 / 0.075646 (-0.063960) | 0.326040 / 0.419271 (-0.093232) | 0.043750 / 0.043533 (0.000217) | 0.332629 / 0.255139 (0.077490) | 0.353302 / 0.283200 (0.070102) | 0.090421 / 0.141683 (-0.051262) | 1.470097 / 1.452155 (0.017942) | 1.544908 / 1.492716 (0.052191) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213418 / 0.018006 (0.195411) | 0.434808 / 0.000490 (0.434319) | 0.005949 / 0.000200 (0.005749) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023085 / 0.037411 (-0.014327) | 0.098222 / 0.014526 (0.083696) | 0.104543 / 0.176557 (-0.072013) | 0.165423 / 0.737135 (-0.571713) | 0.108732 / 0.296338 (-0.187606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433933 / 0.215209 
(0.218724) | 4.334358 / 2.077655 (2.256704) | 2.013984 / 1.504120 (0.509864) | 1.862981 / 1.541195 (0.321787) | 1.873936 / 1.468490 (0.405446) | 0.699857 / 4.584777 (-3.884920) | 3.417815 / 3.745712 (-0.327897) | 1.946403 / 5.269862 (-3.323459) | 1.308683 / 4.565676 (-3.256994) | 0.083297 / 0.424275 (-0.340978) | 0.012610 / 0.007607 (0.005003) | 0.540877 / 0.226044 (0.314832) | 5.408293 / 2.268929 (3.139365) | 2.529574 / 55.444624 (-52.915050) | 2.201047 / 6.876477 (-4.675429) | 2.392966 / 2.142072 (0.250894) | 0.812719 / 4.805227 (-3.992509) | 0.154013 / 6.500664 (-6.346651) | 0.067614 / 0.075469 (-0.007855) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228150 / 1.841788 (-0.613638) | 14.037090 / 8.074308 (5.962782) | 14.259416 / 10.191392 (4.068024) | 0.155554 / 0.680424 (-0.524870) | 0.016521 / 0.534201 (-0.517680) | 0.379615 / 0.579283 (-0.199668) | 0.421352 / 0.434364 (-0.013012) | 0.446512 / 0.540337 (-0.093825) | 0.531802 / 1.386936 (-0.855134) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006629 / 0.011353 (-0.004724) | 0.004432 / 0.011008 (-0.006577) | 0.076662 / 0.038508 (0.038154) | 0.027674 / 0.023109 (0.004565) | 0.341667 / 0.275898 (0.065769) | 0.376493 / 0.323480 (0.053014) | 0.005076 / 0.007986 (-0.002910) | 0.004655 / 0.004328 (0.000326) | 0.075698 / 0.004250 (0.071448) | 0.036905 / 0.037052 (-0.000147) | 0.342394 / 0.258489 (0.083905) | 0.383330 / 0.293841 (0.089489) | 0.031729 / 0.128546 (-0.096817) | 0.011582 / 0.075646 (-0.064064) | 0.085721 / 0.419271 (-0.333551) | 0.042012 / 0.043533 (-0.001521) | 0.342063 / 0.255139 (0.086924) | 0.367335 / 0.283200 (0.084136) | 0.089641 / 0.141683 (-0.052042) | 1.520353 / 1.452155 (0.068198) | 1.643653 / 1.492716 (0.150937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178995 / 0.018006 (0.160989) | 0.436544 / 0.000490 (0.436055) | 0.002311 / 0.000200 (0.002111) | 0.000081 / 0.000054 
(0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025386 / 0.037411 (-0.012026) | 0.099717 / 0.014526 (0.085192) | 0.110809 / 0.176557 (-0.065747) | 0.162931 / 0.737135 (-0.574204) | 0.110430 / 0.296338 (-0.185909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438592 / 0.215209 (0.223382) | 4.372560 / 2.077655 (2.294905) | 2.069686 / 1.504120 (0.565567) | 1.860576 / 1.541195 (0.319382) | 1.898161 / 1.468490 (0.429671) | 0.698353 / 4.584777 (-3.886424) | 3.462440 / 3.745712 (-0.283272) | 1.868602 / 5.269862 (-3.401260) | 1.160498 / 4.565676 (-3.405179) | 0.082869 / 0.424275 (-0.341406) | 0.012690 / 0.007607 (0.005083) | 0.533278 / 0.226044 (0.307233) | 5.386214 / 2.268929 (3.117285) | 2.519243 / 55.444624 (-52.925382) | 2.171109 / 6.876477 (-4.705368) | 2.272617 / 2.142072 (0.130544) | 0.805843 / 4.805227 (-3.999384) | 0.152275 / 6.500664 (-6.348389) | 0.068038 / 0.075469 (-0.007431) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291967 / 1.841788 (-0.549821) | 14.386474 / 8.074308 (6.312166) | 14.180693 / 10.191392 (3.989301) | 0.131714 / 0.680424 (-0.548710) | 0.016596 / 0.534201 (-0.517605) | 0.384293 / 0.579283 (-0.194990) | 0.404051 / 0.434364 (-0.030313) | 0.452167 / 0.540337 (-0.088170) | 0.542718 / 1.386936 (-0.844218) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f9c770bb1a43fa7fe390286d7535266d3964d067 \"CML watermark\")\n" ]
null
[]
Fix CI mock filesystem fixtures
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5740/timeline
This PR fixes the fixtures of our CI mock filesystems.

Before, we had to pass `clobber=True` to `fsspec.register_implementation` to overwrite the previously added "mock" filesystem, which was still present. That meant the mock filesystem fixture was not working properly, because the previously added "mock" filesystem should have been deleted by the fixture.

This PR fixes the mock filesystem fixtures so that the "mock" filesystem is properly deleted from the inner `fsspec` registry. Tests were added to check the correct behavior of the mock filesystem fixtures.

Related to:
- #5733
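For concreteness, here is a minimal sketch of what a self-cleaning mock filesystem fixture can look like. It is an illustration rather than this PR's actual diff: `MockFileSystem` and `mock_fs` are hypothetical test names, and the teardown assumes the mutable protocol registry is reachable as `fsspec.registry._registry` on newer fsspec releases, falling back to the plain `registry` dict on older ones.

```python
import fsspec
import fsspec.registry as fsspec_registry
import pytest
from fsspec.implementations.memory import MemoryFileSystem


class MockFileSystem(MemoryFileSystem):
    """Hypothetical in-memory filesystem used only by the tests."""

    protocol = "mock"


@pytest.fixture
def mock_fs():
    # clobber=True tolerates a stale entry left behind by a crashed test,
    # but the teardown below is what actually keeps the registry clean.
    fsspec.register_implementation(MockFileSystem.protocol, MockFileSystem, clobber=True)
    try:
        yield MockFileSystem()
    finally:
        # Remove the "mock" protocol so later tests start from a clean slate.
        # Assumption: newer fsspec keeps the mutable mapping in `_registry`
        # behind a read-only public view; older releases expose the dict
        # directly as `registry`.
        mutable = getattr(fsspec_registry, "_registry", fsspec_registry.registry)
        mutable.pop(MockFileSystem.protocol, None)
```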
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5740.diff", "html_url": "https://github.com/huggingface/datasets/pull/5740", "merged_at": "2023-04-13T10:54:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/5740.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5740" }
1,664,132,130
https://api.github.com/repos/huggingface/datasets/issues/5740/comments
PR_kwDODunzps5OHI08
null
5,740
https://api.github.com/repos/huggingface/datasets/issues/5740/events
true
open
2023-04-12T04:51:35Z
null
https://api.github.com/repos/huggingface/datasets/issues/5739
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5739/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5739/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1772912?v=4", "events_url": "https://api.github.com/users/ericxsun/events{/privacy}", "followers_url": "https://api.github.com/users/ericxsun/followers", "following_url": "https://api.github.com/users/ericxsun/following{/other_user}", "gists_url": "https://api.github.com/users/ericxsun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ericxsun", "id": 1772912, "login": "ericxsun", "node_id": "MDQ6VXNlcjE3NzI5MTI=", "organizations_url": "https://api.github.com/users/ericxsun/orgs", "received_events_url": "https://api.github.com/users/ericxsun/received_events", "repos_url": "https://api.github.com/users/ericxsun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ericxsun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ericxsun/subscriptions", "type": "User", "url": "https://api.github.com/users/ericxsun" }
https://github.com/huggingface/datasets/issues/5739
[]
false
2023-04-21T14:20:59Z
null
null
[ "Same problem.", "hi! \r\nI think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. \r\n@ericxsun Do you want to open a PR to fix the regex? As you already found the solution :) ", "> hi! I think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. @ericxsun Do you want to open a PR to fix the regex? As you already found the solution :)\r\n\r\nSure, please see https://github.com/huggingface/datasets/pull/5748 @polinaeterna ", "I think `string_to_dict` is ok, and that the issue is that it gets `'/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'` as input instead of `'data/test-00000-of-00001-9c49eeff30aacaa8.parquet'`. The path should be relative to the directory being loaded by `load_dataset`" ]
null
[]
weird result during dataset split when data path starts with `/data`
NONE
https://api.github.com/repos/huggingface/datasets/issues/5739/timeline
### Describe the bug

The regex defined here https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158 causes a weird result during dataset split when the data path starts with `/data`.

### Steps to reproduce the bug

1. Clone the dataset into a local path:

```
cd /data/train/raw/
git lfs clone https://huggingface.co/datasets/deepmind/code_contests.git

ls /data/train/raw/code_contests
# README.md data dataset_infos.json

ls /data/train/raw/code_contests/data
# test-00000-of-00001-9c49eeff30aacaa8.parquet
# train-[0-9]+-of-[0-9]+-xx.parquet
# valid-00000-of-00001-5e672c5751f060d3.parquet
```

2. Load the data from the local path:

```
from datasets import load_dataset
dataset = load_dataset('/data/train/raw/code_contests')

FileNotFoundError: Unable to resolve any data file that matches '['data/train/raw/code_contests/data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*']' at /data/train/raw/code_contests with any supported extension
```

Note the weird path `data/train/raw/code_contests/data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*`. While diving into `LocalDatasetModuleFactoryWithoutScript` defined in [load.py](https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/load.py#L627) and `_get_data_files_patterns` (https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/data_files.py#L228), I found the weird behavior is caused by `string_to_dict`.

3. Check `string_to_dict`:

```
p = '/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'
split_pattern = 'data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*'
string_to_dict(p, split_pattern)
# {'split': 'train/raw/code_contests/data/test'}

p = '/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'
string_to_dict(p, split_pattern)
# {'split': 'test'}
```

See `string_to_dict` at https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158.

4. Test the regex:

<img width="680" alt="image" src="https://user-images.githubusercontent.com/1772912/231351129-75179f01-fb9f-4f12-8fa9-0dfcc3d5f3bd.png">
<img width="679" alt="image" src="https://user-images.githubusercontent.com/1772912/231351025-009f3d83-2cf3-4e15-9ed4-6b9663dcb2ee.png">

### Expected behavior

`string_to_dict` should extract `{'split': 'test'}` in both cases of step 3, regardless of whether the path starts with `/data`.

### Environment info

- linux (debian)
- python 3.7
- datasets 2.8.0
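For readers who want to poke at the matching logic directly, the snippet below is a simplified, self-contained re-implementation of the helper discussed above (the real one lives in `datasets/utils/py_utils.py`; this version is illustrative only). It reproduces the reported behavior: an unanchored `re.search` plus a greedy capture lets the match start at the leading `/data/` of an absolute path.

```python
import re


def string_to_dict(string, pattern):
    # Simplified re-implementation: each "{field}" becomes a named capture
    # group, and the resulting regex is *searched* (not anchored) in the
    # input string, so matching may start anywhere, e.g. at "/data/".
    regex = re.sub(r"{(\w+)}", r"(?P<\1>.+)", pattern)
    match = re.search(regex, string)
    return match.groupdict() if match else None


split_pattern = "data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"

# Path under /data2: the first "data/" the regex finds is the real data dir.
print(string_to_dict(
    "/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet",
    split_pattern,
))  # {'split': 'test'}

# Path under /data: the search starts matching at the leading "/data/".
print(string_to_dict(
    "/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet",
    split_pattern,
))  # {'split': 'train/raw/code_contests/data/test'}
```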
https://api.github.com/repos/huggingface/datasets
null
1,663,762,901
https://api.github.com/repos/huggingface/datasets/issues/5739/comments
I_kwDODunzps5jKwHV
null
5,739
https://api.github.com/repos/huggingface/datasets/issues/5739/events
false
closed
2023-04-12T01:07:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/5738
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5738/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5738/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/41713505?v=4", "events_url": "https://api.github.com/users/Tylersuard/events{/privacy}", "followers_url": "https://api.github.com/users/Tylersuard/followers", "following_url": "https://api.github.com/users/Tylersuard/following{/other_user}", "gists_url": "https://api.github.com/users/Tylersuard/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tylersuard", "id": 41713505, "login": "Tylersuard", "node_id": "MDQ6VXNlcjQxNzEzNTA1", "organizations_url": "https://api.github.com/users/Tylersuard/orgs", "received_events_url": "https://api.github.com/users/Tylersuard/received_events", "repos_url": "https://api.github.com/users/Tylersuard/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tylersuard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tylersuard/subscriptions", "type": "User", "url": "https://api.github.com/users/Tylersuard" }
https://github.com/huggingface/datasets/issues/5738
[]
false
2023-04-19T12:08:27Z
2023-04-19T12:08:27Z
null
[ "You need to provide a text file as `data_files`, not as a configuration:\r\n\r\n```python\r\nmy_dataset = load_dataset(\"text\", data_files=\"TextFile.txt\")\r\n```\r\n\r\nOtherwise, since `data_files` is `None`, it picks up Colab's sample datasets from the `content` dir." ]
completed
[]
load_dataset("text","dataset.txt") loads the wrong dataset!
NONE
https://api.github.com/repos/huggingface/datasets/issues/5738/timeline
### Describe the bug

I am trying to load my own custom text dataset using the `load_dataset` function. My dataset is a bunch of ordered text, along the lines of Shakespeare plays. However, after I load the dataset and inspect it, the dataset is a table with a bunch of latitude and longitude values! What in the world??

### Steps to reproduce the bug

```python
my_dataset = load_dataset("text", "TextFile.txt")
my_dataset
```

### Expected behavior

I expected the dataset to contain the actual data from the text document that I used.

### Environment info

Google Colab
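As the maintainer's comment above explains, the second positional argument of `load_dataset` is a configuration name, not a file path, so with `data_files=None` the builder falls back to scanning the working directory (Colab's sample data in `/content` here). A minimal corrected call, assuming `TextFile.txt` sits in the working directory, looks like this:

```python
from datasets import load_dataset

# Pass the file through data_files; "text" is the builder name, and a
# second positional argument would be interpreted as a config name.
my_dataset = load_dataset("text", data_files="TextFile.txt")
print(my_dataset["train"][0])  # {'text': <first line of TextFile.txt>}
```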
https://api.github.com/repos/huggingface/datasets
null
1,663,477,690
https://api.github.com/repos/huggingface/datasets/issues/5738/comments
I_kwDODunzps5jJqe6
null
5,738
https://api.github.com/repos/huggingface/datasets/issues/5738/events
false
closed
2023-04-11T17:14:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/5737
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5737/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5737/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10896776?v=4", "events_url": "https://api.github.com/users/mrcaelumn/events{/privacy}", "followers_url": "https://api.github.com/users/mrcaelumn/followers", "following_url": "https://api.github.com/users/mrcaelumn/following{/other_user}", "gists_url": "https://api.github.com/users/mrcaelumn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mrcaelumn", "id": 10896776, "login": "mrcaelumn", "node_id": "MDQ6VXNlcjEwODk2Nzc2", "organizations_url": "https://api.github.com/users/mrcaelumn/orgs", "received_events_url": "https://api.github.com/users/mrcaelumn/received_events", "repos_url": "https://api.github.com/users/mrcaelumn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mrcaelumn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrcaelumn/subscriptions", "type": "User", "url": "https://api.github.com/users/mrcaelumn" }
https://github.com/huggingface/datasets/issues/5737
[]
false
2023-04-13T16:49:57Z
2023-04-13T16:49:57Z
null
[ "Hi, you can use the `cast_column` function to change the feature type from a `Value(int64)` to `ClassLabel`:\r\n\r\n```py\r\ndataset = dataset.cast_column(\"label\", ClassLabel(names=[\"label_1\", \"label_2\", \"label_3\"]))\r\nprint(dataset.features)\r\n{'text': Value(dtype='string', id=None),\r\n 'label': ClassLabel(names=['label_1', 'label_2', 'label_3'], id=None)}\r\n```", "thank you @stevhliu, its worked. " ]
completed
[]
ClassLabel Error
NONE
https://api.github.com/repos/huggingface/datasets/issues/5737/timeline
### Describe the bug

I am still getting the error "`__call__()` takes 1 positional argument but 2 were given", even after ensuring that the value being passed to the label object is a single value and that the `ClassLabel` object has been created with the correct number of label classes.

### Steps to reproduce the bug

```python
from datasets import ClassLabel, Dataset

# 1. Create the ClassLabel object with 3 label values and their corresponding names
label_test = ClassLabel(num_classes=3, names=["label_1", "label_2", "label_3"])

# 2. Define a dictionary with text and label fields
data = {
    'text': ['text_1', 'text_2', 'text_3'],
    'label': [1, 2, 3],
}

# 3. Create a Hugging Face dataset from the dictionary
dataset = Dataset.from_dict(data)
print(dataset.features)

# 4. Map the label values to their corresponding label names using the label object
dataset = dataset.map(lambda example: {'text': example['text'], 'label': label_test(example['label'])})

# 5. Print the resulting dataset
print(dataset)
```

### Expected behavior

I expect the label feature type to be `ClassLabel` instead of `int`.

### Environment info

- python 3.9
- google colab
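Building on the `cast_column` answer quoted above, here is a sketch of a working version of the script. Two changes matter besides using `cast_column`: a `ClassLabel` feature is not meant to be called on values inside `map` (which is what triggers the error above), and integer labels must be 0-indexed, so the original `[1, 2, 3]` would put 3 out of range for `num_classes=3`.

```python
from datasets import ClassLabel, Dataset

data = {
    "text": ["text_1", "text_2", "text_3"],
    "label": [0, 1, 2],  # 0-indexed: valid values for 3 classes are 0..2
}
dataset = Dataset.from_dict(data)

# Cast the int64 column to a ClassLabel feature instead of calling the
# ClassLabel object on each value inside map().
dataset = dataset.cast_column("label", ClassLabel(names=["label_1", "label_2", "label_3"]))

print(dataset.features["label"])  # ClassLabel(names=['label_1', 'label_2', 'label_3'], ...)
feature = dataset.features["label"]
print(feature.int2str(dataset[0]["label"]))  # 'label_1'
```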
https://api.github.com/repos/huggingface/datasets
null
1,662,919,811
https://api.github.com/repos/huggingface/datasets/issues/5737/comments
I_kwDODunzps5jHiSD
null
5,737
https://api.github.com/repos/huggingface/datasets/issues/5737/events
false
open
2023-04-11T11:29:15Z
null
https://api.github.com/repos/huggingface/datasets/issues/5736
null
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5736/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5736/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rcasero", "id": 1219084, "login": "rcasero", "node_id": "MDQ6VXNlcjEyMTkwODQ=", "organizations_url": "https://api.github.com/users/rcasero/orgs", "received_events_url": "https://api.github.com/users/rcasero/received_events", "repos_url": "https://api.github.com/users/rcasero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "type": "User", "url": "https://api.github.com/users/rcasero" }
https://github.com/huggingface/datasets/issues/5736
[]
false
2023-11-30T07:16:58Z
null
null
[ "Hi ! I couldn't reproduce your issue :/\r\n\r\nIt seems that `shutil.rmtree` failed. It is supposed to work even if the directory is not empty, but you still end up with `OSError: [Errno 39] Directory not empty:`. Can you make sure another process is not using this directory at the same time ?", "I have the same error with `datasets==2.14.5` and `pyarrow==13.0.0`. Python 3.10.13", "I have same error. Any workaround?" ]
null
[]
FORCE_REDOWNLOAD raises "Directory not empty" exception on second run
NONE
https://api.github.com/repos/huggingface/datasets/issues/5736/timeline
### Describe the bug

Running `load_dataset(..., download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` twice raises a `Directory not empty` exception on the second run.

### Steps to reproduce the bug

I cannot test this on datasets v2.11.0 due to #5711, but this happens in v2.10.1.

1. Set up a script `my_dataset.py` to generate and load an offline dataset.
2. Load it with

```python
ds = datasets.load_dataset(path=/path/to/my_dataset.py,
                           name='toy',
                           data_dir=/path/to/my_dataset.py,
                           cache_dir=cache_dir,
                           download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
)
```

It loads fine

```
Dataset my_dataset downloaded and prepared to /path/to/cache/toy-..e05e/1.0.0/...5b4c. Subsequent calls will reuse this data.
```

3. Try to load it again with the same snippet and the splits are generated, but at the end of the loading process it raises the error

```
2023-04-11 12:10:19,965: DEBUG: open file: /path/to/cache/toy-..e05e/1.0.0/...5b4c.incomplete/dataset_info.json
Traceback (most recent call last):
  File "<string>", line 2, in <module>
  File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
    builder_instance.download_and_prepare(
  File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 852, in download_and_prepare
    with incomplete_dir(self._output_dir) as tmp_output_dir:
  File "/path/to/conda/environment/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 826, in incomplete_dir
    shutil.rmtree(dirname)
  File "/path/to/conda/environment/lib/python3.10/shutil.py", line 730, in rmtree
    onerror(os.rmdir, path, sys.exc_info())
  File "/path/to/conda/environment/lib/python3.10/shutil.py", line 728, in rmtree
    os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/path/to/cache/toy-..e05e/1.0.0/...5b4c'
```

### Expected behavior

Regenerate the dataset from scratch and reload it.

### Environment info

- `datasets` version: 2.10.1
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
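Until the underlying `shutil.rmtree` race is understood, one hedged workaround is to clear the cache directory yourself before forcing the re-download, so `datasets` never has to delete a directory another process may still hold open. This sketch is untested against the exact setup above, and the paths and script name are placeholders taken from the report:

```python
import shutil
import datasets

cache_dir = "/path/to/cache"  # placeholder path from the report above

# Best-effort cleanup; ignore_errors skips the kind of [Errno 39]
# "Directory not empty" failure that rmtree raises inside load_dataset.
shutil.rmtree(cache_dir, ignore_errors=True)

ds = datasets.load_dataset(
    path="/path/to/my_dataset.py",  # placeholder loading script
    name="toy",
    cache_dir=cache_dir,
    download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
)
```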
https://api.github.com/repos/huggingface/datasets
null
1,662,286,061
https://api.github.com/repos/huggingface/datasets/issues/5736/comments
I_kwDODunzps5jFHjt
null
5,736
https://api.github.com/repos/huggingface/datasets/issues/5736/events
false
closed
2023-04-11T10:02:25Z
null
https://api.github.com/repos/huggingface/datasets/issues/5735
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5735/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5735/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hubert-Bonisseur", "id": 48770768, "login": "Hubert-Bonisseur", "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "type": "User", "url": "https://api.github.com/users/Hubert-Bonisseur" }
https://github.com/huggingface/datasets/pull/5735
[]
false
2023-04-27T16:39:04Z
2023-04-27T16:32:09Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi ! What if one of the sub-iterables only has one shard ? In that case I don't think we'd end up with a correctly interleaved dataset, since only rank 0 would yield examples from this sub-iterable", "Hi ! \r\nI just tested this out with the code below and it seems to be ok. Both datasets are alternating and we get all the examples with no duplicates.\r\n\r\nOn thing to keep in mind is that the max amount of workers is equal to the lowest amount of shard amongst the datasets to be merged (1 in this example).\r\n\r\n ```python\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, interleave_datasets\r\n\r\n\r\ndef process_dataset_train(batch):\r\n return {\"input\": f'train: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef process_dataset_test(batch):\r\n return {\"input\": f'test: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef identity_collator(x):\r\n return x\r\n\r\n\r\nif __name__ == \"__main__\":\r\n ds = load_dataset(\"lhoestq/demo1\")\r\n ds[\"train\"] = ds[\"train\"].map(process_dataset_train, remove_columns=ds[\"train\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].map(process_dataset_test, remove_columns=ds[\"test\"].column_names)\r\n\r\n ds1 = ds[\"train\"].to_iterable_dataset(num_shards=5)\r\n ds2 = ds[\"test\"].to_iterable_dataset(num_shards=1)\r\n\r\n ds_merged = interleave_datasets([ds1, ds2], stopping_strategy=\"all_exhausted\")\r\n\r\n dataloader = DataLoader(ds_merged, collate_fn=identity_collator, num_workers=1, batch_size=1)\r\n\r\n for i, element in enumerate(dataloader):\r\n print(i, element)\r\n\r\n```\r\n\r\n```\r\n0 [{'input': 'train: Great app! The new v'}]\r\n1 [{'input': 'test: Works with RTL and N'}]\r\n2 [{'input': \"train: Great It's not fully\"}]\r\n3 [{'input': 'test: Works with RTL SDR W'}]\r\n4 [{'input': 'train: Works on a Nexus 6p '}]\r\n5 [{'input': 'test: Awsome App! Easy to '}]\r\n6 [{'input': 'train: The bandwidth seemed'}]\r\n7 [{'input': \"test: I'll forgo the refun\"}]\r\n8 [{'input': 'train: Works well with my H'}]\r\n9 [{'input': 'test: looks like a great p'}]\r\n```", "<s> Could you try with `num_workers>1` ? </s>\r\n\r\nedit: Oh I see\r\n\r\n> On thing to keep in mind is that the max amount of workers is equal to the lowest amount of shard amongst the datasets to be merged (1 in this example).", "Great ! It's ok to have the max amount of workers is equal to the lowest amount of shard :)\r\n\r\nSo in the case of `num_workers>min(n_shards_per_dataset)` maybe some workers should turn off, and a warning can probably be shown. 
This is already the case if you use a single dataset with a single shard and `num_workers>1`.\r\n\r\n\r\nRight now it seems to raise an error:\r\n\r\n```python\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 979, in __iter__\r\n yield from self._iter_pytorch(ex_iterable)\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 912, in _iter_pytorch\r\n for key, example in ex_iterable.shard_data_sources(worker_info.id, worker_info.num_workers):\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 259, in shard_data_sources\r\n [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables],\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 259, in <listcomp>\r\n [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables],\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 125, in shard_data_sources\r\n requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices])\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/utils/sharding.py\", line 76, in _merge_gen_kwargs\r\n for key in gen_kwargs_list[0]\r\nIndexError: list index out of range\r\n```", "Good point. I have fixed the n_shards property of merged iterable datasets so that this warning is raised properly", "Hey @lhoestq, what do you think of the last modifications ? ", "Hello! No problem :)\r\n\r\n- About HorizontallyConcatenatedMultiSourcesExamplesIterable, I've haven't been able to create a bug with sharding. So either I missed something or it's working somehow:\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, interleave_datasets, concatenate_datasets\r\n\r\n\r\ndef process_dataset_train(batch):\r\n return {\"input\": f'train: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef process_dataset_test(batch):\r\n return {\"input\": f'test: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef identity_collator(x):\r\n return x\r\n\r\n\r\nif __name__ == \"__main__\":\r\n ds = load_dataset(\"lhoestq/demo1\")\r\n ds[\"train\"] = ds[\"train\"].map(process_dataset_train, remove_columns=ds[\"train\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].map(process_dataset_test, remove_columns=ds[\"test\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].rename_columns({\"input\": \"input2\"})\r\n\r\n ds1 = ds[\"train\"].to_iterable_dataset(num_shards=5)\r\n ds2 = ds[\"test\"].to_iterable_dataset(num_shards=3)\r\n\r\n ds_merged = concatenate_datasets([ds1, ds2], axis=1)\r\n\r\n #n_shards is always 1 for HorizontallyConcatenatedMultiSourcesExamplesIterable\r\n dataloader = DataLoader(ds_merged, collate_fn=identity_collator, num_workers=1, batch_size=1)\r\n\r\n for i, element in enumerate(dataloader):\r\n print(i, element)\r\n```\r\n\r\n```\r\n0 [{'input': 'train: Great app! The new v', 'input2': 'test: Works with RTL and N'}]\r\n1 [{'input': \"train: Great It's not fully\", 'input2': 'test: Works with RTL SDR W'}]\r\n2 [{'input': 'train: Works on a Nexus 6p ', 'input2': 'test: Awsome App! Easy to '}]\r\n3 [{'input': 'train: The bandwidth seemed', 'input2': \"test: I'll forgo the refun\"}]\r\n4 [{'input': 'train: Works well with my H', 'input2': 'test: looks like a great p'}]\r\n```\r\n\r\n- I've added a test but I'm not completely happy with it. 
My issue is that multiprocessing makes interleaving not completely deterministic as samples are yielded whenever ready by each process, if I'm correct.\r\nAs a result I opted to check for the amount of samples yielded and make that they are all unique, which should be equivalent.\r\nBut now my issue is that the \"first_exhausted\" method breaks the loop when one of the datasets of one of the shards is empty which means that all shards stop yielding and we could be missing up to n_workers samples. I don't know if this is the behaviour expected, but I had to modify the test to accomodate this.\r\n\r\nWhat are your thoughts about this ?", "Ah indeed it works because it's set to be only 1 shard - my bad :)", "> But now my issue is that the \"first_exhausted\" method breaks the loop when one of the datasets of one of the shards is empty which means that all shards stop yielding and we could be missing up to n_workers samples. I don't know if this is the behaviour expected, but I had to modify the test to accomodate this.\r\n\r\nThis looks reasonable, maybe this can be documented in the `interleave_datasets` docstring ?\r\n```\r\nNote for iterable datasets:\r\n\r\nIn a distributed setup or in PyTorch DataLoader workers, the stopping strategy is applied per process.\r\nTherefore the \"first_exhausted\" strategy on an sharded iterable dataset can generate less samples in total (up to 1 missing sample per subdataset per worker).\r\n```", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006441 / 0.011353 (-0.004912) | 0.004551 / 0.011008 (-0.006457) | 0.099144 / 0.038508 (0.060636) | 0.028163 / 0.023109 (0.005054) | 0.386342 / 0.275898 (0.110444) | 0.398347 / 0.323480 (0.074867) | 0.004836 / 0.007986 (-0.003150) | 0.004724 / 0.004328 (0.000395) | 0.076277 / 0.004250 (0.072027) | 0.036305 / 0.037052 (-0.000747) | 0.377179 / 0.258489 (0.118690) | 0.410694 / 0.293841 (0.116853) | 0.030196 / 0.128546 (-0.098351) | 0.011436 / 0.075646 (-0.064211) | 0.325911 / 0.419271 (-0.093360) | 0.043709 / 0.043533 (0.000177) | 0.375801 / 0.255139 (0.120662) | 0.396511 / 0.283200 (0.113311) | 0.088346 / 0.141683 (-0.053337) | 1.483427 / 1.452155 (0.031272) | 1.553708 / 1.492716 (0.060992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | 
get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190974 / 0.018006 (0.172968) | 0.451309 / 0.000490 (0.450819) | 0.004045 / 0.000200 (0.003845) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023814 / 0.037411 (-0.013597) | 0.096922 / 0.014526 (0.082396) | 0.101506 / 0.176557 (-0.075050) | 0.164694 / 0.737135 (-0.572441) | 0.106899 / 0.296338 (-0.189439) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432164 / 0.215209 (0.216954) | 4.308076 / 2.077655 (2.230421) | 2.092434 / 1.504120 (0.588314) | 1.937405 / 1.541195 (0.396210) | 1.988030 / 1.468490 (0.519540) | 0.695476 / 4.584777 (-3.889301) | 3.436413 / 3.745712 (-0.309299) | 2.892954 / 5.269862 (-2.376908) | 1.519906 / 4.565676 (-3.045771) | 0.082579 / 0.424275 (-0.341696) | 0.012233 / 0.007607 (0.004626) | 0.531329 / 0.226044 (0.305284) | 5.365272 / 2.268929 (3.096344) | 2.391452 / 55.444624 (-53.053172) | 2.051116 / 6.876477 (-4.825361) | 2.140663 / 2.142072 (-0.001410) | 0.807262 / 4.805227 (-3.997966) | 0.151290 / 6.500664 (-6.349374) | 0.066137 / 0.075469 (-0.009333) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193106 / 1.841788 (-0.648682) | 13.577240 / 8.074308 (5.502932) | 14.280126 / 10.191392 (4.088734) | 0.142538 / 0.680424 (-0.537886) | 0.016641 / 0.534201 (-0.517560) | 0.386318 / 0.579283 (-0.192965) | 0.385991 / 0.434364 (-0.048373) | 0.440712 / 0.540337 (-0.099625) | 0.524189 / 1.386936 (-0.862747) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after 
write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006628 / 0.011353 (-0.004725) | 0.004664 / 0.011008 (-0.006344) | 0.077254 / 0.038508 (0.038746) | 0.028369 / 0.023109 (0.005259) | 0.343076 / 0.275898 (0.067178) | 0.376491 / 0.323480 (0.053011) | 0.005298 / 0.007986 (-0.002687) | 0.004853 / 0.004328 (0.000524) | 0.075927 / 0.004250 (0.071677) | 0.039951 / 0.037052 (0.002899) | 0.346225 / 0.258489 (0.087736) | 0.382367 / 0.293841 (0.088526) | 0.031133 / 0.128546 (-0.097413) | 0.011666 / 0.075646 (-0.063981) | 0.086383 / 0.419271 (-0.332889) | 0.042885 / 0.043533 (-0.000647) | 0.343885 / 0.255139 (0.088746) | 0.366840 / 0.283200 (0.083640) | 0.095942 / 0.141683 (-0.045741) | 1.528972 / 1.452155 (0.076817) | 1.586392 / 1.492716 (0.093676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223952 / 0.018006 (0.205946) | 0.410767 / 0.000490 (0.410277) | 0.001014 / 0.000200 (0.000814) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024210 / 0.037411 (-0.013201) | 0.100308 / 0.014526 (0.085782) | 0.106899 / 0.176557 (-0.069658) | 0.156514 / 0.737135 (-0.580621) | 0.109548 / 0.296338 (-0.186790) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434763 / 0.215209 (0.219554) | 4.348485 / 2.077655 (2.270831) | 2.064255 / 1.504120 (0.560135) | 1.864394 / 1.541195 (0.323199) | 1.899732 / 1.468490 (0.431242) | 0.694147 / 4.584777 (-3.890630) | 3.357898 / 3.745712 (-0.387815) | 2.909155 / 5.269862 (-2.360707) | 1.424790 / 4.565676 (-3.140886) | 0.082597 / 0.424275 (-0.341678) | 0.012442 / 0.007607 (0.004835) | 0.538758 / 0.226044 (0.312713) | 5.390288 / 2.268929 (3.121359) | 2.532016 / 55.444624 (-52.912609) | 2.185724 / 6.876477 (-4.690753) | 2.274176 / 2.142072 (0.132104) | 0.804785 / 4.805227 (-4.000442) | 0.152649 / 6.500664 (-6.348015) | 0.067707 / 0.075469 (-0.007762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285219 / 1.841788 (-0.556568) | 13.958098 / 8.074308 (5.883790) | 14.043653 / 10.191392 (3.852261) | 0.144526 / 0.680424 (-0.535898) | 0.016813 / 0.534201 (-0.517388) | 
0.390286 / 0.579283 (-0.188997) | 0.389184 / 0.434364 (-0.045180) | 0.470810 / 0.540337 (-0.069527) | 0.562391 / 1.386936 (-0.824545) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4bb172c9772858c188f85ffc9a51f8cb1da292a0 \"CML watermark\")\n" ]
null
[]
Implement sharding on merged iterable datasets
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5735/timeline
This PR allows sharding of merged iterable datasets. Merged iterable datasets, created for instance with the `interleave_datasets` command, are composed of multiple sub-iterables, one for each dataset that has been merged. With this PR, sharding a merged iterable dataset results in multiple merged datasets, each composed of sharded sub-iterables, ensuring that there is no duplication of data. As a result, it is now possible to set any number of workers in the dataloader, as long as it is lower than or equal to the lowest number of shards among the datasets. Previously it had to be set to 0. I previously talked about this issue on the forum [here](https://discuss.huggingface.co/t/interleaving-iterable-dataset-with-num-workers-0/35801)
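As a rough sketch of what this enables (not the PR's own code; the dataset names and shard counts below are placeholders), interleaving two streaming datasets can now be consumed with several `DataLoader` workers:

```python
# Minimal sketch: interleave two streaming (iterable) datasets and read the
# result with multiple DataLoader workers. Dataset names are hypothetical.
from datasets import interleave_datasets, load_dataset
from torch.utils.data import DataLoader

ds_a = load_dataset("dataset_a", split="train", streaming=True)  # e.g. 8 shards
ds_b = load_dataset("dataset_b", split="train", streaming=True)  # e.g. 4 shards

merged = interleave_datasets([ds_a, ds_b], stopping_strategy="first_exhausted")

# With this PR, each worker builds its own merged view from disjoint shards of
# every sub-dataset, so no sample is duplicated. num_workers should stay lower
# than or equal to min(ds_a.n_shards, ds_b.n_shards) (here, 4).
loader = DataLoader(merged, num_workers=4)
for example in loader:
    ...  # training loop
```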
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5735.diff", "html_url": "https://github.com/huggingface/datasets/pull/5735", "merged_at": "2023-04-27T16:32:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/5735.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5735" }
1,662,150,903
https://api.github.com/repos/huggingface/datasets/issues/5735/comments
PR_kwDODunzps5OAY3A
null
5,735
https://api.github.com/repos/huggingface/datasets/issues/5735/events
true
closed
2023-04-11T09:04:17Z
null
https://api.github.com/repos/huggingface/datasets/issues/5734
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5734/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5734/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/issues/5734
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-11T11:04:52Z
2023-04-11T11:04:52Z
null
[]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
Remove temporary pin of fsspec
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5734/timeline
Once the root cause is found and fixed, remove the temporary pin introduced by: - #5731
https://api.github.com/repos/huggingface/datasets
null
1,662,058,028
https://api.github.com/repos/huggingface/datasets/issues/5734/comments
I_kwDODunzps5jEP4s
null
5,734
https://api.github.com/repos/huggingface/datasets/issues/5734/events
false
closed
2023-04-11T08:52:12Z
null
https://api.github.com/repos/huggingface/datasets/issues/5733
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5733/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5733/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5733
[]
false
2023-04-11T11:11:45Z
2023-04-11T11:04:51Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006240 / 0.011353 (-0.005113) | 0.004392 / 0.011008 (-0.006616) | 0.097276 / 0.038508 (0.058768) | 0.027262 / 0.023109 (0.004153) | 0.303203 / 0.275898 (0.027305) | 0.331878 / 0.323480 (0.008398) | 0.004706 / 0.007986 (-0.003279) | 0.004428 / 0.004328 (0.000100) | 0.074666 / 0.004250 (0.070416) | 0.036154 / 0.037052 (-0.000899) | 0.302997 / 0.258489 (0.044508) | 0.340350 / 0.293841 (0.046509) | 0.031011 / 0.128546 (-0.097535) | 0.011616 / 0.075646 (-0.064031) | 0.323671 / 0.419271 (-0.095601) | 0.042062 / 0.043533 (-0.001471) | 0.311381 / 0.255139 (0.056242) | 0.324697 / 0.283200 (0.041498) | 0.084248 / 0.141683 (-0.057435) | 1.471651 / 1.452155 (0.019496) | 1.533414 / 1.492716 (0.040697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193555 / 0.018006 (0.175549) | 0.393452 / 0.000490 (0.392962) | 0.002348 / 0.000200 (0.002148) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022523 / 0.037411 (-0.014889) | 0.096552 / 0.014526 (0.082026) | 0.101746 / 0.176557 (-0.074810) | 0.163145 / 0.737135 (-0.573990) | 0.106417 / 0.296338 (-0.189921) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448589 / 0.215209 (0.233380) | 4.467803 / 2.077655 (2.390148) | 
2.178745 / 1.504120 (0.674625) | 1.983339 / 1.541195 (0.442145) | 2.056554 / 1.468490 (0.588064) | 0.697571 / 4.584777 (-3.887206) | 3.363967 / 3.745712 (-0.381745) | 1.872526 / 5.269862 (-3.397336) | 1.258245 / 4.565676 (-3.307432) | 0.082954 / 0.424275 (-0.341321) | 0.012306 / 0.007607 (0.004699) | 0.545096 / 0.226044 (0.319052) | 5.468706 / 2.268929 (3.199777) | 2.645333 / 55.444624 (-52.799292) | 2.287659 / 6.876477 (-4.588818) | 2.346768 / 2.142072 (0.204696) | 0.803730 / 4.805227 (-4.001497) | 0.151037 / 6.500664 (-6.349627) | 0.066404 / 0.075469 (-0.009065) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.192982 / 1.841788 (-0.648806) | 13.631225 / 8.074308 (5.556917) | 13.830053 / 10.191392 (3.638661) | 0.141901 / 0.680424 (-0.538523) | 0.016500 / 0.534201 (-0.517701) | 0.373268 / 0.579283 (-0.206015) | 0.380123 / 0.434364 (-0.054241) | 0.430786 / 0.540337 (-0.109551) | 0.512669 / 1.386936 (-0.874267) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006161 / 0.011353 (-0.005192) | 0.004399 / 0.011008 (-0.006609) | 0.076210 / 0.038508 (0.037702) | 0.026791 / 0.023109 (0.003681) | 0.341523 / 0.275898 (0.065625) | 0.370400 / 0.323480 (0.046920) | 0.004495 / 0.007986 (-0.003491) | 0.003204 / 0.004328 (-0.001125) | 0.075444 / 0.004250 (0.071194) | 0.035914 / 0.037052 (-0.001138) | 0.343806 / 0.258489 (0.085317) | 0.384320 / 0.293841 (0.090479) | 0.031438 / 0.128546 (-0.097109) | 0.011253 / 0.075646 (-0.064393) | 0.085364 / 0.419271 (-0.333908) | 0.041407 / 0.043533 (-0.002126) | 0.338831 / 0.255139 (0.083692) | 0.364357 / 0.283200 (0.081158) | 0.087417 / 0.141683 (-0.054266) | 1.520624 / 1.452155 (0.068470) | 1.572432 / 1.492716 (0.079716) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232403 / 0.018006 (0.214396) | 0.388187 / 0.000490 (0.387698) | 0.001158 / 0.000200 (0.000958) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024596 / 0.037411 (-0.012816) | 0.101203 / 0.014526 (0.086677) | 0.105243 / 0.176557 (-0.071314) | 0.158215 / 0.737135 (-0.578920) | 0.110277 / 0.296338 (-0.186061) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435661 / 0.215209 (0.220452) | 4.350151 / 2.077655 (2.272496) | 2.072372 / 1.504120 (0.568252) | 1.870675 / 1.541195 (0.329480) | 1.910883 / 1.468490 (0.442393) | 0.697384 / 4.584777 (-3.887393) | 3.399377 / 3.745712 (-0.346335) | 2.685008 / 5.269862 (-2.584854) | 1.476843 / 4.565676 (-3.088834) | 0.083177 / 0.424275 (-0.341098) | 0.012413 / 0.007607 (0.004806) | 0.542543 / 0.226044 (0.316498) | 5.431422 / 2.268929 (3.162494) | 2.506419 / 55.444624 (-52.938206) | 2.166342 / 6.876477 (-4.710135) | 2.164421 / 2.142072 (0.022348) | 0.800609 / 4.805227 (-4.004618) | 0.150527 / 6.500664 (-6.350137) | 0.065780 / 0.075469 (-0.009689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293409 / 1.841788 (-0.548379) | 13.814898 / 8.074308 (5.740590) | 13.940416 / 10.191392 (3.749024) | 0.149377 / 0.680424 (-0.531047) | 0.016462 / 0.534201 (-0.517739) | 0.393748 / 0.579283 (-0.185535) | 0.384327 / 0.434364 (-0.050037) | 0.489900 / 0.540337 (-0.050437) | 0.574608 / 1.386936 (-0.812328) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2607935c4e45c70c44fcb698db0363ca7ba83d4 \"CML watermark\")\n" ]
null
[]
Unpin fsspec
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5733/timeline
In `fsspec==2023.4.0`, the default value for `clobber` when registering an implementation was changed from `True` to `False`. See: - https://github.com/fsspec/filesystem_spec/pull/1237 This PR restores the previous behavior by passing `clobber=True` when registering mock implementations. This PR also removes the temporary pin introduced by: - #5731 Fix #5734.
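For context, a minimal sketch of the registration pattern this fixes (`MockFileSystem` is a stand-in for the test suite's mock filesystem, not a real class in the codebase):

```python
# Under fsspec>=2023.4.0, registering a protocol name that already exists
# raises "ValueError: Name (mock) already in the registry and clobber is
# False" unless clobber=True is passed explicitly.
import fsspec
from fsspec import AbstractFileSystem

class MockFileSystem(AbstractFileSystem):  # placeholder test filesystem
    protocol = "mock"

# clobber=True restores the pre-2023.4.0 behaviour of silently overwriting
# any existing registration for the "mock" protocol.
fsspec.register_implementation("mock", MockFileSystem, clobber=True)
```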
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5733.diff", "html_url": "https://github.com/huggingface/datasets/pull/5733", "merged_at": "2023-04-11T11:04:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/5733.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5733" }
1,662,039,191
https://api.github.com/repos/huggingface/datasets/issues/5733/comments
PR_kwDODunzps5OAA04
null
5,733
https://api.github.com/repos/huggingface/datasets/issues/5733/events
true
closed
2023-04-11T08:38:53Z
null
https://api.github.com/repos/huggingface/datasets/issues/5732
{ "avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4", "events_url": "https://api.github.com/users/lucaslingle/events{/privacy}", "followers_url": "https://api.github.com/users/lucaslingle/followers", "following_url": "https://api.github.com/users/lucaslingle/following{/other_user}", "gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucaslingle", "id": 10287371, "login": "lucaslingle", "node_id": "MDQ6VXNlcjEwMjg3Mzcx", "organizations_url": "https://api.github.com/users/lucaslingle/orgs", "received_events_url": "https://api.github.com/users/lucaslingle/received_events", "repos_url": "https://api.github.com/users/lucaslingle/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions", "type": "User", "url": "https://api.github.com/users/lucaslingle" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5732/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5732/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4", "events_url": "https://api.github.com/users/lucaslingle/events{/privacy}", "followers_url": "https://api.github.com/users/lucaslingle/followers", "following_url": "https://api.github.com/users/lucaslingle/following{/other_user}", "gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucaslingle", "id": 10287371, "login": "lucaslingle", "node_id": "MDQ6VXNlcjEwMjg3Mzcx", "organizations_url": "https://api.github.com/users/lucaslingle/orgs", "received_events_url": "https://api.github.com/users/lucaslingle/received_events", "repos_url": "https://api.github.com/users/lucaslingle/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions", "type": "User", "url": "https://api.github.com/users/lucaslingle" }
https://github.com/huggingface/datasets/issues/5732
[ { "avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4", "events_url": "https://api.github.com/users/lucaslingle/events{/privacy}", "followers_url": "https://api.github.com/users/lucaslingle/followers", "following_url": "https://api.github.com/users/lucaslingle/following{/other_user}", "gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucaslingle", "id": 10287371, "login": "lucaslingle", "node_id": "MDQ6VXNlcjEwMjg3Mzcx", "organizations_url": "https://api.github.com/users/lucaslingle/orgs", "received_events_url": "https://api.github.com/users/lucaslingle/received_events", "repos_url": "https://api.github.com/users/lucaslingle/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions", "type": "User", "url": "https://api.github.com/users/lucaslingle" } ]
false
2023-04-11T09:28:17Z
2023-04-11T09:28:16Z
null
[ "#self-assign", "The Enwik8 pipeline is not present in this codebase, and is hosted elsewhere. I have opened a PR [there](https://huggingface.co/datasets/enwik8/discussions/4) instead. " ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Enwik8 should support the standard split
NONE
https://api.github.com/repos/huggingface/datasets/issues/5732/timeline
### Feature request The HuggingFace Datasets library currently supports two BuilderConfigs for Enwik8. One config yields individual lines as examples, while the other config yields the entire dataset as a single example. Both support only a monolithic split: it is all grouped as "train". The HuggingFace Datasets library should include a BuilderConfig for Enwik8 with train, validation, and test sets derived from the first 90 million bytes, next 5 million bytes, and last 5 million bytes, respectively. This Enwik8 split is standard practice in LM papers, as elaborated and motivated below. ### Motivation Enwik8 is commonly split into 90M, 5M, 5M consecutive bytes. This is done in the Transformer-XL [codebase](https://github.com/kimiyoung/transformer-xl/blob/44781ed21dbaec88b280f74d9ae2877f52b492a5/getdata.sh#L34), and is additionally mentioned in the Sparse Transformers [paper](https://arxiv.org/abs/1904.10509) and the Compressive Transformers [paper](https://arxiv.org/abs/1911.05507). This split is pretty much universal among language modeling papers. One may obtain the splits by manual wrangling, using the data yielded by the ```enwik8-raw``` BuilderConfig. However, this undermines the seamless functionality of the library: one must slice the single raw example, extract it into three tensors, and wrap each in a separate dataset. This becomes even more of a nuisance if using the current Enwik8 HuggingFace dataset as a TfdsDataSource with [SeqIO](https://github.com/google/seqio), where a pipeline of preprocessors is typically included in a SeqIO Task definition, to be applied immediately after loading the data with TFDS. ### Your contribution Supporting this functionality in HuggingFace Datasets will only require an additional BuilderConfig for Enwik8 and a few additional lines of code. I will submit a PR.
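Until such a config exists, the manual wrangling described above might look roughly like this (a sketch only, assuming the `enwik8-raw` config yields the whole corpus as one example under a `"text"` key):

```python
# Sketch: recover the standard 90M/5M/5M byte split from the single raw
# example yielded by the existing enwik8-raw config.
from datasets import Dataset, load_dataset

raw = load_dataset("enwik8", "enwik8-raw", split="train")[0]["text"]
data = raw.encode("utf-8")  # the canonical split is defined over bytes

chunks = {
    "train": data[:90_000_000],
    "validation": data[90_000_000:95_000_000],
    "test": data[95_000_000:],
}
# errors="ignore" guards against a multi-byte character cut at a boundary.
splits = {
    name: Dataset.from_dict({"text": [chunk.decode("utf-8", errors="ignore")]})
    for name, chunk in chunks.items()
}
```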
https://api.github.com/repos/huggingface/datasets
null
1,662,020,571
https://api.github.com/repos/huggingface/datasets/issues/5732/comments
I_kwDODunzps5jEGvb
null
5,732
https://api.github.com/repos/huggingface/datasets/issues/5732/events
false
closed
2023-04-11T08:33:15Z
null
https://api.github.com/repos/huggingface/datasets/issues/5731
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5731/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5731/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5731
[]
false
2023-04-11T08:57:45Z
2023-04-11T08:47:55Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009735 / 0.011353 (-0.001618) | 0.010410 / 0.011008 (-0.000598) | 0.134986 / 0.038508 (0.096478) | 0.038392 / 0.023109 (0.015283) | 0.414451 / 0.275898 (0.138553) | 0.447775 / 0.323480 (0.124295) | 0.007223 / 0.007986 (-0.000763) | 0.006373 / 0.004328 (0.002045) | 0.102631 / 0.004250 (0.098381) | 0.048516 / 0.037052 (0.011464) | 0.410179 / 0.258489 (0.151690) | 0.467773 / 0.293841 (0.173932) | 0.053163 / 0.128546 (-0.075384) | 0.019801 / 0.075646 (-0.055845) | 0.452708 / 0.419271 (0.033436) | 0.068691 / 0.043533 (0.025159) | 0.405482 / 0.255139 (0.150343) | 0.457669 / 0.283200 (0.174470) | 0.113464 / 0.141683 (-0.028219) | 1.918143 / 1.452155 (0.465988) | 2.033123 / 1.492716 (0.540407) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274564 / 0.018006 (0.256557) | 0.608855 / 0.000490 (0.608366) | 0.006266 / 0.000200 (0.006066) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033704 / 0.037411 (-0.003708) | 0.130982 / 0.014526 (0.116456) | 0.143862 / 0.176557 (-0.032694) | 0.212622 / 0.737135 (-0.524513) | 0.148899 / 0.296338 (-0.147439) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.670968 / 0.215209 (0.455759) | 6.602911 / 2.077655 (4.525256) | 2.644290 
/ 1.504120 (1.140171) | 2.268593 / 1.541195 (0.727399) | 2.325393 / 1.468490 (0.856903) | 1.388156 / 4.584777 (-3.196621) | 5.958569 / 3.745712 (2.212857) | 3.310756 / 5.269862 (-1.959106) | 2.390953 / 4.565676 (-2.174724) | 0.147416 / 0.424275 (-0.276859) | 0.015201 / 0.007607 (0.007594) | 0.794109 / 0.226044 (0.568064) | 7.984855 / 2.268929 (5.715926) | 3.382275 / 55.444624 (-52.062349) | 2.676102 / 6.876477 (-4.200375) | 2.846743 / 2.142072 (0.704671) | 1.467523 / 4.805227 (-3.337704) | 0.283184 / 6.500664 (-6.217480) | 0.088655 / 0.075469 (0.013186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632765 / 1.841788 (-0.209022) | 19.102473 / 8.074308 (11.028165) | 25.632535 / 10.191392 (15.441143) | 0.255628 / 0.680424 (-0.424795) | 0.034655 / 0.534201 (-0.499546) | 0.564593 / 0.579283 (-0.014690) | 0.668339 / 0.434364 (0.233975) | 0.648414 / 0.540337 (0.108076) | 0.766735 / 1.386936 (-0.620201) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009658 / 0.011353 (-0.001695) | 0.006690 / 0.011008 (-0.004318) | 0.099151 / 0.038508 (0.060643) | 0.037092 / 0.023109 (0.013983) | 0.470354 / 0.275898 (0.194456) | 0.525863 / 0.323480 (0.202383) | 0.007593 / 0.007986 (-0.000393) | 0.006637 / 0.004328 (0.002308) | 0.098782 / 0.004250 (0.094532) | 0.058524 / 0.037052 (0.021471) | 0.502569 / 0.258489 (0.244080) | 0.526410 / 0.293841 (0.232569) | 0.059486 / 0.128546 (-0.069060) | 0.019742 / 0.075646 (-0.055904) | 0.119715 / 0.419271 (-0.299556) | 0.065269 / 0.043533 (0.021736) | 0.483327 / 0.255139 (0.228188) | 0.506148 / 0.283200 (0.222948) | 0.123178 / 0.141683 (-0.018505) | 1.916624 / 1.452155 (0.464470) | 2.051410 / 1.492716 (0.558694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286481 / 0.018006 (0.268475) | 0.597300 / 0.000490 (0.596810) | 0.008906 / 0.000200 (0.008706) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031406 / 0.037411 (-0.006005) | 0.146748 / 0.014526 (0.132222) | 0.152898 / 0.176557 (-0.023658) | 0.212535 / 0.737135 (-0.524600) | 0.155577 / 0.296338 (-0.140761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.660989 / 0.215209 (0.445780) | 6.688530 / 2.077655 (4.610875) | 3.039278 / 1.504120 (1.535159) | 2.660357 / 1.541195 (1.119162) | 2.696912 / 1.468490 (1.228422) | 1.259760 / 4.584777 (-3.325017) | 5.922452 / 3.745712 (2.176740) | 5.304200 / 5.269862 (0.034338) | 2.823928 / 4.565676 (-1.741748) | 0.148118 / 0.424275 (-0.276157) | 0.015575 / 0.007607 (0.007968) | 0.794404 / 0.226044 (0.568360) | 8.233651 / 2.268929 (5.964722) | 3.777482 / 55.444624 (-51.667142) | 3.064924 / 6.876477 (-3.811552) | 3.117803 / 2.142072 (0.975731) | 1.479559 / 4.805227 (-3.325668) | 0.254070 / 6.500664 (-6.246594) | 0.086806 / 0.075469 (0.011337) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.735515 / 1.841788 (-0.106273) | 18.934157 / 8.074308 (10.859848) | 22.645248 / 10.191392 (12.453856) | 0.227073 / 0.680424 (-0.453351) | 0.030650 / 0.534201 (-0.503551) | 0.594619 / 0.579283 (0.015336) | 0.653304 / 0.434364 (0.218940) | 0.707484 / 0.540337 (0.167147) | 0.823327 / 1.386936 (-0.563610) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#273392966e434286f4f5ba2ad596730bff11056d \"CML watermark\")\n" ]
null
[]
Temporarily pin fsspec
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5731/timeline
Fix #5730.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5731.diff", "html_url": "https://github.com/huggingface/datasets/pull/5731", "merged_at": "2023-04-11T08:47:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/5731.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5731" }
1,662,012,913
https://api.github.com/repos/huggingface/datasets/issues/5731/comments
PR_kwDODunzps5N_7Un
null
5,731
https://api.github.com/repos/huggingface/datasets/issues/5731/events
true
closed
2023-04-11T08:29:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/5730
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5730/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5730/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/issues/5730
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-11T08:47:56Z
2023-04-11T08:47:56Z
null
[]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
CI is broken: ValueError: Name (mock) already in the registry and clobber is False
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5730/timeline
CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/4665326892/jobs/8258580948 ``` =========================== short test summary info ============================ ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare_reload - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_dataset_dict.py::test_dummy_datasetdict_serialize_fs - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_file_utils.py::test_get_from_cache_fsspec - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_filesystem.py::test_is_remote_filesystem - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[tmp_path-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level/second_level/date=2019-10-01-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path/file.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://top_level-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://dir_that_doesnt_exist-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://-False] - ValueError: Name (mock) already in the registry 
and clobber is False ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[tmp_path/file.txt-100] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://-0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://top_level/second_level/date=2019-10-01/a.parquet-100] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[tmp_path/*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xwalk[tmp_path-expected_outputs0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::test_xwalk[mock://top_level/second_level-expected_outputs1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR 
tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]/*-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False ===== 2105 passed, 18 skipped, 38 warnings, 46 errors in 236.22s (0:03:56) ===== ```
https://api.github.com/repos/huggingface/datasets
null
1,662,007,926
https://api.github.com/repos/huggingface/datasets/issues/5730/comments
I_kwDODunzps5jEDp2
null
5,730
https://api.github.com/repos/huggingface/datasets/issues/5730/events
false
closed
2023-04-11T07:34:20Z
null
https://api.github.com/repos/huggingface/datasets/issues/5729
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5729/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5729/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5729
[]
false
2023-04-26T15:12:25Z
2023-04-26T15:05:12Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "The error in the CI was unrelated to this PR. I have merged main branch once that has been fixed.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006954 / 0.011353 (-0.004399) | 0.004947 / 0.011008 (-0.006061) | 0.086564 / 0.038508 (0.048056) | 0.031167 / 0.023109 (0.008058) | 0.262285 / 0.275898 (-0.013613) | 0.295753 / 0.323480 (-0.027727) | 0.005389 / 0.007986 (-0.002596) | 0.004130 / 0.004328 (-0.000198) | 0.065127 / 0.004250 (0.060877) | 0.042511 / 0.037052 (0.005458) | 0.263497 / 0.258489 (0.005008) | 0.307456 / 0.293841 (0.013615) | 0.031338 / 0.128546 (-0.097209) | 0.011023 / 0.075646 (-0.064623) | 0.295625 / 0.419271 (-0.123647) | 0.045813 / 0.043533 (0.002280) | 0.259369 / 0.255139 (0.004230) | 0.279325 / 0.283200 (-0.003875) | 0.099748 / 0.141683 (-0.041934) | 1.252572 / 1.452155 (-0.199583) | 1.347069 / 1.492716 (-0.145647) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249726 / 0.018006 (0.231720) | 0.556882 / 0.000490 (0.556392) | 0.008237 / 0.000200 (0.008037) | 0.000294 / 0.000054 (0.000239) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026879 / 0.037411 (-0.010533) | 0.105141 / 0.014526 (0.090615) | 0.115473 / 0.176557 (-0.061084) | 0.172989 / 0.737135 (-0.564147) | 0.120433 / 0.296338 (-0.175906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400022 / 0.215209 (0.184812) | 3.965402 / 2.077655 (1.887747) | 1.805257 / 1.504120 (0.301138) | 1.610136 / 1.541195 (0.068941) | 1.661162 / 1.468490 (0.192672) | 0.695311 / 4.584777 (-3.889466) | 3.753757 / 3.745712 (0.008045) | 2.060609 / 5.269862 (-3.209253) | 1.333251 / 4.565676 (-3.232426) | 0.085790 / 0.424275 (-0.338485) | 0.012256 / 0.007607 (0.004649) | 0.502133 / 0.226044 (0.276088) | 5.040979 / 2.268929 (2.772051) | 2.310919 / 55.444624 (-53.133705) | 2.010534 / 6.876477 (-4.865943) | 2.132961 / 2.142072 (-0.009111) | 0.837636 / 4.805227 (-3.967592) | 0.169838 / 6.500664 (-6.330826) | 0.065003 / 0.075469 (-0.010466) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218674 / 1.841788 (-0.623114) | 14.696076 / 8.074308 (6.621768) | 14.559492 / 10.191392 (4.368100) | 0.167761 / 0.680424 (-0.512663) | 0.017747 / 0.534201 (-0.516454) | 0.421624 / 0.579283 (-0.157659) | 0.414086 / 0.434364 (-0.020278) | 0.501398 / 0.540337 (-0.038940) | 0.596099 / 1.386936 (-0.790837) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007230 / 0.011353 (-0.004123) | 0.005345 / 0.011008 (-0.005664) | 0.073739 / 0.038508 (0.035231) | 0.033440 / 0.023109 (0.010330) | 0.339790 / 0.275898 (0.063892) | 0.367857 / 0.323480 (0.044377) | 0.005927 / 0.007986 (-0.002058) | 0.004279 / 0.004328 (-0.000049) | 0.074247 / 0.004250 (0.069996) | 0.048971 / 0.037052 (0.011918) | 0.340235 / 0.258489 (0.081746) | 0.380521 / 0.293841 (0.086680) | 0.035322 / 0.128546 (-0.093225) | 0.012416 / 0.075646 (-0.063230) | 0.086060 / 0.419271 (-0.333212) | 0.049331 / 0.043533 (0.005799) | 0.342871 / 0.255139 (0.087732) | 0.355673 / 0.283200 (0.072473) | 0.111976 / 0.141683 (-0.029707) | 1.462530 / 1.452155 (0.010375) | 1.550336 / 1.492716 (0.057620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old 
(diff) | 0.266560 / 0.018006 (0.248554) | 0.550886 / 0.000490 (0.550396) | 0.001069 / 0.000200 (0.000869) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028701 / 0.037411 (-0.008711) | 0.110535 / 0.014526 (0.096010) | 0.122846 / 0.176557 (-0.053711) | 0.176395 / 0.737135 (-0.560740) | 0.128653 / 0.296338 (-0.167685) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431693 / 0.215209 (0.216484) | 4.283691 / 2.077655 (2.206036) | 2.013967 / 1.504120 (0.509847) | 1.823914 / 1.541195 (0.282719) | 1.872055 / 1.468490 (0.403565) | 0.703318 / 4.584777 (-3.881459) | 3.783412 / 3.745712 (0.037699) | 2.950147 / 5.269862 (-2.319715) | 1.826159 / 4.565676 (-2.739518) | 0.086897 / 0.424275 (-0.337379) | 0.012512 / 0.007607 (0.004905) | 0.526730 / 0.226044 (0.300685) | 5.263871 / 2.268929 (2.994943) | 2.552163 / 55.444624 (-52.892462) | 2.276216 / 6.876477 (-4.600261) | 2.419934 / 2.142072 (0.277862) | 0.848235 / 4.805227 (-3.956993) | 0.170405 / 6.500664 (-6.330259) | 0.064979 / 0.075469 (-0.010491) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276780 / 1.841788 (-0.565008) | 15.100829 / 8.074308 (7.026521) | 15.117531 / 10.191392 (4.926139) | 0.147129 / 0.680424 (-0.533295) | 0.017806 / 0.534201 (-0.516395) | 0.422975 / 0.579283 (-0.156308) | 0.430286 / 0.434364 (-0.004078) | 0.501405 / 0.540337 (-0.038932) | 0.596810 / 1.386936 (-0.790126) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f6ee2e6603fe81638256d37a6aa7ad0400e31a83 \"CML watermark\")\n" ]
null
[]
Fix nondeterministic sharded data split order
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5729/timeline
This PR makes the order of the split names deterministic. Previously, it was nondeterministic because we were iterating over `set` elements. Fix #5728.
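For illustration, a minimal sketch of the idea behind the change (hypothetical names; the actual patch is in the PR diff):

```python
# Hypothetical sketch of the fix: iterating over a set yields an arbitrary,
# hash-dependent order, so sort the collected split names instead.
shard_split_names = {"train", "random"}  # e.g. names parsed from shard file names

nondeterministic = list(shard_split_names)  # order can vary between runs
deterministic = sorted(shard_split_names)   # always ['random', 'train']
assert deterministic == ["random", "train"]
```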
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5729.diff", "html_url": "https://github.com/huggingface/datasets/pull/5729", "merged_at": "2023-04-26T15:05:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/5729.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5729" }
1,661,929,923
https://api.github.com/repos/huggingface/datasets/issues/5729/comments
PR_kwDODunzps5N_pvI
null
5,729
https://api.github.com/repos/huggingface/datasets/issues/5729/events
true
closed
2023-04-11T07:31:25Z
null
https://api.github.com/repos/huggingface/datasets/issues/5728
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5728/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5728/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/issues/5728
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-26T15:05:13Z
2023-04-26T15:05:13Z
null
[]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
The order of data split names is nondeterministic
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5728/timeline
After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718 ``` FAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random'] At index 0 diff: 'random' != 'train' Full diff: - ['train', 'random'] + ['random', 'train'] ``` I have checked locally and found out that the data split order is nondeterministic. This is caused by the use of `set` for sharded splits.
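A quick way to observe the nondeterminism across processes (sketch, not from the issue; assumes `python` is on the PATH):

```python
import subprocess

# String hashes are randomized per Python process (PYTHONHASHSEED), so the
# iteration order of a small set of split names can differ from run to run.
code = "print(list({'train', 'random'}))"
for _ in range(3):
    subprocess.run(["python", "-c", code], check=True)
# Possible output: ['train', 'random'] on one run, ['random', 'train'] on another.
```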
https://api.github.com/repos/huggingface/datasets
null
1,661,925,932
https://api.github.com/repos/huggingface/datasets/issues/5728/comments
I_kwDODunzps5jDvos
null
5,728
https://api.github.com/repos/huggingface/datasets/issues/5728/events
false
closed
2023-04-10T23:21:12Z
null
https://api.github.com/repos/huggingface/datasets/issues/5727
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5727/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5727/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/122648572?v=4", "events_url": "https://api.github.com/users/joelkowalewski/events{/privacy}", "followers_url": "https://api.github.com/users/joelkowalewski/followers", "following_url": "https://api.github.com/users/joelkowalewski/following{/other_user}", "gists_url": "https://api.github.com/users/joelkowalewski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/joelkowalewski", "id": 122648572, "login": "joelkowalewski", "node_id": "U_kgDOB093_A", "organizations_url": "https://api.github.com/users/joelkowalewski/orgs", "received_events_url": "https://api.github.com/users/joelkowalewski/received_events", "repos_url": "https://api.github.com/users/joelkowalewski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/joelkowalewski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joelkowalewski/subscriptions", "type": "User", "url": "https://api.github.com/users/joelkowalewski" }
https://github.com/huggingface/datasets/issues/5727
[]
false
2023-07-21T14:08:20Z
2023-07-21T14:08:19Z
null
[ "Hi! Can you please paste the entire error stack trace, not only the last few lines?", "`----> 1 dataset = datasets.load_dataset(\"glue\", \"ax\")\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1767, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1762 verification_mode = VerificationMode(\r\n 1763 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS\r\n 1764 )\r\n 1766 # Create a dataset builder\r\n-> 1767 builder_instance = load_dataset_builder(\r\n 1768 path=path,\r\n 1769 name=name,\r\n 1770 data_dir=data_dir,\r\n 1771 data_files=data_files,\r\n 1772 cache_dir=cache_dir,\r\n 1773 features=features,\r\n 1774 download_config=download_config,\r\n 1775 download_mode=download_mode,\r\n 1776 revision=revision,\r\n 1777 use_auth_token=use_auth_token,\r\n 1778 storage_options=storage_options,\r\n 1779 **config_kwargs,\r\n 1780 )\r\n 1782 # Return iterable dataset in case of streaming\r\n 1783 if streaming:\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1498, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, storage_options, **config_kwargs)\r\n 1496 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 1497 download_config.use_auth_token = use_auth_token\r\n-> 1498 dataset_module = dataset_module_factory(\r\n 1499 path,\r\n 1500 revision=revision,\r\n 1501 download_config=download_config,\r\n 1502 download_mode=download_mode,\r\n 1503 data_dir=data_dir,\r\n 1504 data_files=data_files,\r\n 1505 )\r\n 1507 # Get dataset builder class from the processing script\r\n 1508 builder_cls = import_main_class(dataset_module.module_path)\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1211, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1209 raise e1 from None\r\n 1210 if isinstance(e1, FileNotFoundError):\r\n-> 1211 raise FileNotFoundError(\r\n 1212 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. \"\r\n 1213 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1214 ) from None\r\n 1215 raise e1 from None\r\n 1216 else:`", "Okay, this is the issue:\r\n```\r\nFileNotFoundError: [WinError 3] The system cannot find the path specified: \r\n'C:\\\\Users\\\\...\\\\.cache\\\\huggingface'\r\n``` \r\n\r\nI don't remember seeing this error before.\r\n\r\nI guess it could happen in a multi-process environment if one of the processes deletes the `datasets` cache as the other one is loading a dataset (with `load_dataset`), so make sure that's not the case. Also, you can disable the Windows max path length limit (if enabled), but this is most likely not the problem.", "Closing due to inactivity." ]
completed
[]
load_dataset fails with FileNotFound error on Windows
NONE
https://api.github.com/repos/huggingface/datasets/issues/5727/timeline
### Describe the bug Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps: (1) create conda environment (2) activate environment (3) install with: `conda install -c huggingface -c conda-forge datasets` Then ``` from datasets import load_dataset # this or any other example from the website fails with the FileNotFoundError glue = load_dataset("glue", "ax") ``` **Below I have pasted the error omitting the full path**: ``` raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at C:\Users\...\glue\glue.py or any data file in the same directory. Couldn't find 'glue' on the Hugging Face Hub either: FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\...\\.cache\\huggingface' ``` ### Steps to reproduce the bug On Windows 10 1) create a minimal conda environment (with just Python) (2) activate environment (3) install datasets with: `conda install -c huggingface -c conda-forge datasets` (4) import load_dataset and follow example usage from any dataset card. ### Expected behavior The expected behavior is to load the file into the Python session running on my machine without error. ### Environment info ``` # Name Version Build Channel aiohttp 3.8.4 py311ha68e1ae_0 conda-forge aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge arrow-cpp 11.0.0 h57928b3_13_cpu conda-forge async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge attrs 22.2.0 pyh71513ae_0 conda-forge aws-c-auth 0.6.26 h1262f0c_1 conda-forge aws-c-cal 0.5.21 h7cda486_2 conda-forge aws-c-common 0.8.14 hcfcfb64_0 conda-forge aws-c-compression 0.2.16 h8a79959_5 conda-forge aws-c-event-stream 0.2.20 h5f78564_4 conda-forge aws-c-http 0.7.6 h2545be9_0 conda-forge aws-c-io 0.13.19 h0d2781e_3 conda-forge aws-c-mqtt 0.8.6 hd211e0c_12 conda-forge aws-c-s3 0.2.7 h8113e7b_1 conda-forge aws-c-sdkutils 0.1.8 h8a79959_0 conda-forge aws-checksums 0.1.14 h8a79959_5 conda-forge aws-crt-cpp 0.19.8 he6d3b81_12 conda-forge aws-sdk-cpp 1.10.57 h64004b3_8 conda-forge brotlipy 0.7.0 py311ha68e1ae_1005 conda-forge bzip2 1.0.8 h8ffe710_4 conda-forge c-ares 1.19.0 h2bbff1b_0 ca-certificates 2023.01.10 haa95532_0 certifi 2022.12.7 pyhd8ed1ab_0 conda-forge cffi 1.15.1 py311h7d9ee11_3 conda-forge charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge colorama 0.4.6 pyhd8ed1ab_0 conda-forge cryptography 40.0.1 py311h28e9c30_0 conda-forge dataclasses 0.8 pyhc8e2a94_3 conda-forge datasets 2.11.0 py_0 huggingface dill 0.3.6 pyhd8ed1ab_1 conda-forge filelock 3.11.0 pyhd8ed1ab_0 conda-forge frozenlist 1.3.3 py311ha68e1ae_0 conda-forge fsspec 2023.4.0 pyh1a96a4e_0 conda-forge gflags 2.2.2 ha925a31_1004 conda-forge glog 0.6.0 h4797de2_0 conda-forge huggingface_hub 0.13.4 py_0 huggingface idna 3.4 pyhd8ed1ab_0 conda-forge importlib-metadata 6.3.0 pyha770c72_0 conda-forge importlib_metadata 6.3.0 hd8ed1ab_0 conda-forge intel-openmp 2023.0.0 h57928b3_25922 conda-forge krb5 1.20.1 heb0366b_0 conda-forge libabseil 20230125.0 cxx17_h63175ca_1 conda-forge libarrow 11.0.0 h04c43f8_13_cpu conda-forge libblas 3.9.0 16_win64_mkl conda-forge libbrotlicommon 1.0.9 hcfcfb64_8 conda-forge libbrotlidec 1.0.9 hcfcfb64_8 conda-forge libbrotlienc 1.0.9 hcfcfb64_8 conda-forge libcblas 3.9.0 16_win64_mkl conda-forge libcrc32c 1.1.2 h0e60522_0 conda-forge libcurl 7.88.1 h68f0423_1 conda-forge libexpat 2.5.0 h63175ca_1 conda-forge libffi 3.4.2 h8ffe710_5 conda-forge libgoogle-cloud 2.8.0 hf2ff781_1 conda-forge libgrpc 1.52.1 h32da247_1
conda-forge libhwloc 2.9.0 h51c2c0f_0 conda-forge libiconv 1.17 h8ffe710_0 conda-forge liblapack 3.9.0 16_win64_mkl conda-forge libprotobuf 3.21.12 h12be248_0 conda-forge libsqlite 3.40.0 hcfcfb64_0 conda-forge libssh2 1.10.0 h9a1e1f7_3 conda-forge libthrift 0.18.1 h9ce19ad_0 conda-forge libutf8proc 2.8.0 h82a8f57_0 conda-forge libxml2 2.10.3 hc3477c8_6 conda-forge libzlib 1.2.13 hcfcfb64_4 conda-forge lz4-c 1.9.4 hcfcfb64_0 conda-forge mkl 2022.1.0 h6a75c08_874 conda-forge multidict 6.0.4 py311ha68e1ae_0 conda-forge multiprocess 0.70.14 py311ha68e1ae_3 conda-forge numpy 1.24.2 py311h0b4df5a_0 conda-forge openssl 3.1.0 hcfcfb64_0 conda-forge orc 1.8.3 hada7b9e_0 conda-forge packaging 23.0 pyhd8ed1ab_0 conda-forge pandas 2.0.0 py311hf63dbb6_0 conda-forge parquet-cpp 1.5.1 2 conda-forge pip 23.0.1 pyhd8ed1ab_0 conda-forge pthreads-win32 2.9.1 hfa6e2cd_3 conda-forge pyarrow 11.0.0 py311h6a6099b_13_cpu conda-forge pycparser 2.21 pyhd8ed1ab_0 conda-forge pyopenssl 23.1.1 pyhd8ed1ab_0 conda-forge pysocks 1.7.1 pyh0701188_6 conda-forge python 3.11.3 h2628c8c_0_cpython conda-forge python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge python-xxhash 3.2.0 py311ha68e1ae_0 conda-forge python_abi 3.11 3_cp311 conda-forge pytz 2023.3 pyhd8ed1ab_0 conda-forge pyyaml 6.0 py311ha68e1ae_5 conda-forge re2 2023.02.02 h63175ca_0 conda-forge requests 2.28.2 pyhd8ed1ab_1 conda-forge setuptools 67.6.1 pyhd8ed1ab_0 conda-forge six 1.16.0 pyh6c4a22f_0 conda-forge snappy 1.1.10 hfb803bf_0 conda-forge tbb 2021.8.0 h91493d7_0 conda-forge tk 8.6.12 h8ffe710_0 conda-forge tqdm 4.65.0 pyhd8ed1ab_1 conda-forge typing-extensions 4.5.0 hd8ed1ab_0 conda-forge typing_extensions 4.5.0 pyha770c72_0 conda-forge tzdata 2023c h71feb2d_0 conda-forge ucrt 10.0.22621.0 h57928b3_0 conda-forge urllib3 1.26.15 pyhd8ed1ab_0 conda-forge vc 14.3 hb6edc58_10 conda-forge vs2015_runtime 14.34.31931 h4c5c07a_10 conda-forge wheel 0.40.0 pyhd8ed1ab_0 conda-forge win_inet_pton 1.1.0 pyhd8ed1ab_6 conda-forge xxhash 0.8.1 hcfcfb64_0 conda-forge xz 5.2.10 h8cc25b3_1 yaml 0.2.5 h8ffe710_2 conda-forge yarl 1.8.2 py311ha68e1ae_0 conda-forge zipp 3.15.0 pyhd8ed1ab_0 conda-forge zlib 1.2.13 hcfcfb64_4 conda-forge zstd 1.5.4 hd43e919_0 ```
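Not a fix from the thread, but a workaround that sometimes helps when the default cache path is unreadable is to redirect the cache to a short, writable directory before importing `datasets` (sketch; the target path below is hypothetical):

```python
import os

# HF_DATASETS_CACHE overrides the default ~/.cache/huggingface/datasets
# location; set it before importing datasets so the value is picked up.
os.environ["HF_DATASETS_CACHE"] = r"C:\hf_cache"  # hypothetical directory

from datasets import load_dataset

glue = load_dataset("glue", "ax")
```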
https://api.github.com/repos/huggingface/datasets
null
1,661,536,363
https://api.github.com/repos/huggingface/datasets/issues/5727/comments
I_kwDODunzps5jCQhr
null
5,727
https://api.github.com/repos/huggingface/datasets/issues/5727/events
false
closed
2023-04-10T15:22:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/5726
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5726/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5726/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/3610788?v=4", "events_url": "https://api.github.com/users/myluki2000/events{/privacy}", "followers_url": "https://api.github.com/users/myluki2000/followers", "following_url": "https://api.github.com/users/myluki2000/following{/other_user}", "gists_url": "https://api.github.com/users/myluki2000/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/myluki2000", "id": 3610788, "login": "myluki2000", "node_id": "MDQ6VXNlcjM2MTA3ODg=", "organizations_url": "https://api.github.com/users/myluki2000/orgs", "received_events_url": "https://api.github.com/users/myluki2000/received_events", "repos_url": "https://api.github.com/users/myluki2000/repos", "site_admin": false, "starred_url": "https://api.github.com/users/myluki2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/myluki2000/subscriptions", "type": "User", "url": "https://api.github.com/users/myluki2000" }
https://github.com/huggingface/datasets/issues/5726
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-21T06:35:28Z
2023-04-21T06:35:28Z
null
[ "Thanks for reporting, @myluki2000.\r\n\r\nI am working on a fix." ]
completed
[]
Fallback JSON Dataset loading does not load all values when features specified manually
NONE
https://api.github.com/repos/huggingface/datasets/issues/5726/timeline
### Describe the bug The fallback JSON dataset loader located here: https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153 does not load the values of features correctly when features are specified manually and not all features have a value in the first entry of the dataset. I'm pretty sure this is not the expected behavior? To fix this you'd have to change this line: https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L140 to pass a schema to pyarrow that has the same structure as the features argument passed to the load_dataset() method. ### Steps to reproduce the bug Consider a dataset JSON like this: ``` [ { "instruction": "Do stuff", "output": "Answer stuff" }, { "instruction": "Do stuff2", "input": "Additional Input2", "output": "Answer stuff2" } ] ``` Using this code to load the dataset: ``` from datasets import load_dataset, Features, Value features = { "instruction": Value("string"), "input": Value("string"), "output": Value("string") } features = Features(features) ds = load_dataset("json", data_files="./ds.json", features=features) for row in ds["train"]: print(row) ``` we get a dataset that looks like this: | **Instruction** | **Input** | **Output** | |-----------------|--------------------|-----------------| | "Do stuff" | None | "Answer Stuff" | | "Do stuff2" | None | "Answer Stuff2" | ### Expected behavior The input column should contain values other than None for dataset entries that have the "input" attribute set: | **Instruction** | **Input** | **Output** | |-----------------|--------------------|-----------------| | "Do stuff" | None | "Answer Stuff" | | "Do stuff2" | "Additional Input2" | "Answer Stuff2" | ### Environment info Python 3.10.10 Datasets 2.11.0 Windows 10
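A minimal sketch of the fix this report proposes, assuming the data is newline-delimited JSON: pass pyarrow an explicit schema built from the requested features, so fields absent from the first record are still read.

```python
import pyarrow as pa
import pyarrow.json as paj

# Schema mirroring the Features passed to load_dataset(); with an explicit
# schema, the "input" column is populated wherever the attribute is present
# instead of being inferred (and dropped) from the first record alone.
schema = pa.schema(
    [("instruction", pa.string()), ("input", pa.string()), ("output", pa.string())]
)
table = paj.read_json(
    "ds.jsonl",  # assumes one JSON object per line
    parse_options=paj.ParseOptions(explicit_schema=schema),
)
print(table.column("input"))
```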
https://api.github.com/repos/huggingface/datasets
null
1,660,944,807
https://api.github.com/repos/huggingface/datasets/issues/5726/comments
I_kwDODunzps5jAAGn
null
5,726
https://api.github.com/repos/huggingface/datasets/issues/5726/events
false
closed
2023-04-10T08:41:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/5725
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5725/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5725/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/845175?v=4", "events_url": "https://api.github.com/users/ndvbd/events{/privacy}", "followers_url": "https://api.github.com/users/ndvbd/followers", "following_url": "https://api.github.com/users/ndvbd/following{/other_user}", "gists_url": "https://api.github.com/users/ndvbd/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ndvbd", "id": 845175, "login": "ndvbd", "node_id": "MDQ6VXNlcjg0NTE3NQ==", "organizations_url": "https://api.github.com/users/ndvbd/orgs", "received_events_url": "https://api.github.com/users/ndvbd/received_events", "repos_url": "https://api.github.com/users/ndvbd/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ndvbd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ndvbd/subscriptions", "type": "User", "url": "https://api.github.com/users/ndvbd" }
https://github.com/huggingface/datasets/issues/5725
[]
false
2023-04-21T06:16:24Z
2023-04-21T06:16:24Z
null
[ "Hi! You can use the `nrows` parameter for this:\r\n```python\r\ndata = load_dataset(\"json\", data_files=data_path, nrows=10)\r\n```", "@mariosasko I get:\r\n\r\n`TypeError: __init__() got an unexpected keyword argument 'nrows'`", "I misread the format in which the dataset is stored - the `nrows` parameter works for CSV, but not JSON.\r\n\r\nThis means the only option is first to create a DataFrame and then convert it to a Dataset object:\r\n```python\r\nimport pandas as pd\r\nfrom datasets import Dataset\r\n\r\ndf = pd.read_json(data_path, lines=True, nrows=10)\r\nds = Dataset.from_pandas(df)\r\n```" ]
completed
[]
How to limit the number of examples in dataset, for testing?
NONE
https://api.github.com/repos/huggingface/datasets/issues/5725/timeline
### Describe the bug I am using this command: `data = load_dataset("json", data_files=data_path)` However, I want to add a parameter to limit the number of loaded examples to 10 for development purposes, but can't find such a parameter. ### Steps to reproduce the bug In the description. ### Expected behavior To be able to limit the number of examples ### Environment info Nothing special
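For reference, two commonly used ways to cap the number of examples with the existing API (sketch; the file path is hypothetical):

```python
from datasets import load_dataset

data_path = "data.json"  # hypothetical path

# Option 1: slice the split at load time; the file is still read fully,
# but the returned Dataset contains only the first 10 examples.
small = load_dataset("json", data_files=data_path, split="train[:10]")

# Option 2: load everything, then keep the first 10 rows.
data = load_dataset("json", data_files=data_path)
small = data["train"].select(range(10))
```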
https://api.github.com/repos/huggingface/datasets
null
1,660,455,202
https://api.github.com/repos/huggingface/datasets/issues/5725/comments
I_kwDODunzps5i-Iki
null
5,725
https://api.github.com/repos/huggingface/datasets/issues/5725/events
false
closed
2023-04-09T16:58:44Z
null
https://api.github.com/repos/huggingface/datasets/issues/5724
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5724/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5724/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/41177966?v=4", "events_url": "https://api.github.com/users/szxiangjn/events{/privacy}", "followers_url": "https://api.github.com/users/szxiangjn/followers", "following_url": "https://api.github.com/users/szxiangjn/following{/other_user}", "gists_url": "https://api.github.com/users/szxiangjn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/szxiangjn", "id": 41177966, "login": "szxiangjn", "node_id": "MDQ6VXNlcjQxMTc3OTY2", "organizations_url": "https://api.github.com/users/szxiangjn/orgs", "received_events_url": "https://api.github.com/users/szxiangjn/received_events", "repos_url": "https://api.github.com/users/szxiangjn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/szxiangjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/szxiangjn/subscriptions", "type": "User", "url": "https://api.github.com/users/szxiangjn" }
https://github.com/huggingface/datasets/issues/5724
[]
false
2023-04-20T20:37:30Z
2023-04-20T20:37:30Z
null
[ "Moving `\"en\"` to the end of the path instead of passing it as a config name should fix the error:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('/path/to/your/data/dir/en', streaming=True, split='train')\r\ndataset = dataset.shuffle(buffer_size=10_000, seed=42)\r\nnext(iter(dataset))\r\n```\r\n\r\nPS: https://github.com/huggingface/datasets/pull/5331, once merged, will allow us to define C4's configs in its README, making downloading it much more user-friendly." ]
completed
[]
Error after shuffling streaming IterableDatasets with downloaded dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/5724/timeline
### Describe the bug I downloaded the C4 dataset, and used streaming IterableDatasets to read it. Everything went normally until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. The shuffled dataset throws the following error when it is used by `next(iter(dataset))`: ``` File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 937, in __iter__ for key, example in ex_iterable: File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 627, in __iter__ for x in self.ex_iterable: File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 138, in __iter__ yield from self.generate_examples_fn(**kwargs_with_shuffled_shards) File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 763, in wrapper for key, table in generate_tables_fn(**kwargs): File "/data/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 101, in _generate_tables batch = f.read(self.config.chunksize) File "/data/miniconda3/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 372, in read_with_retries out = read(*args, **kwargs) File "/data/miniconda3/lib/python3.9/gzip.py", line 300, in read return self._buffer.read(size) File "/data/miniconda3/lib/python3.9/_compression.py", line 68, in readinto data = self.read(len(byte_view)) File "/data/miniconda3/lib/python3.9/gzip.py", line 487, in read if not self._read_gzip_header(): File "/data/miniconda3/lib/python3.9/gzip.py", line 435, in _read_gzip_header raise BadGzipFile('Not a gzipped file (%r)' % magic) gzip.BadGzipFile: Not a gzipped file (b've') ``` I found that there is no problem using the dataset this way without shuffling. Also, using `dataset = datasets.load_dataset('c4', 'en', split='train', streaming=True)`, which downloads the dataset on the fly instead of loading from the local file, causes no problems even after shuffling. ### Steps to reproduce the bug 1. Download C4 dataset from https://huggingface.co/datasets/allenai/c4 2. ``` import datasets dataset = datasets.load_dataset('/path/to/your/data/dir', 'en', streaming=True, split='train') dataset = dataset.shuffle(buffer_size=10_000, seed=42) next(iter(dataset)) ``` ### Expected behavior `next(iter(dataset))` should give me a sample from the dataset ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.4.32-1-tlinux4-0001-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.13.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
https://api.github.com/repos/huggingface/datasets
null
1,659,938,135
https://api.github.com/repos/huggingface/datasets/issues/5724/comments
I_kwDODunzps5i8KVX
null
5,724
https://api.github.com/repos/huggingface/datasets/issues/5724/events
false
closed
2023-04-09T11:04:59Z
null
https://api.github.com/repos/huggingface/datasets/issues/5722
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5722/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5722/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4", "events_url": "https://api.github.com/users/wlhgtc/events{/privacy}", "followers_url": "https://api.github.com/users/wlhgtc/followers", "following_url": "https://api.github.com/users/wlhgtc/following{/other_user}", "gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wlhgtc", "id": 16603773, "login": "wlhgtc", "node_id": "MDQ6VXNlcjE2NjAzNzcz", "organizations_url": "https://api.github.com/users/wlhgtc/orgs", "received_events_url": "https://api.github.com/users/wlhgtc/received_events", "repos_url": "https://api.github.com/users/wlhgtc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions", "type": "User", "url": "https://api.github.com/users/wlhgtc" }
https://github.com/huggingface/datasets/issues/5722
[]
false
2023-07-24T14:50:46Z
2023-07-24T14:50:46Z
null
[ "Hmm the error doesn't seem related to data loading.\r\n\r\nRegarding `split_dataset_by_node`: it's generally used to split an iterable dataset (e.g. when streaming) in pytorch DDP. It's not needed if you use a regular dataset since the pytorch DataLoader already assigns a subset of the dataset indices to each node." ]
completed
[]
Distributed Training Error on Customized Dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/5722/timeline
Hi guys, recently I tried to use `datasets` to train a dual encoder. I wrote my own dataset following the nice [tutorial](https://huggingface.co/docs/datasets/v2.11.0/en/dataset_script) Here is my code: ```python class RetrivalDataset(datasets.GeneratorBasedBuilder): """CrossEncoder dataset.""" BUILDER_CONFIGS = [RetrivalConfig(name="DuReader")] # DEFAULT_CONFIG_NAME = "DuReader" def _info(self): return datasets.DatasetInfo( features=datasets.Features( { "id": datasets.Value("string"), "question": datasets.Value("string"), "documents": Sequence(datasets.Value("string")), } ), supervised_keys=None, ) def _split_generators(self, dl_manager): """Returns SplitGenerators.""" train_file = self.config.data_dir + self.config.train_file valid_file = self.config.data_dir + self.config.valid_file logger.info(f"Training on {self.config.train_file}") logger.info(f"Evaluating on {self.config.valid_file}") return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={"file_path": train_file} ), datasets.SplitGenerator( name=datasets.Split.VALIDATION, gen_kwargs={"file_path": valid_file} ), ] def _generate_examples(self, file_path): with jsonlines.open(file_path, "r") as f: for record in f: label = record["label"] question = record["question"] # dual encoder all_documents = record["all_documents"] positive_paragraph = all_documents.pop(label) all_documents = [positive_paragraph] + all_documents u_id = "{}_#_{}".format( md5_hash(question + "".join(all_documents)), "".join(random.sample(string.ascii_letters + string.digits, 7)), ) item = { "question": question, "documents": all_documents, "id": u_id, } yield u_id, item ``` It works well on a single GPU, but throws the following error when using DDP: ```python Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(OpType=ALLGATHER_COALESCED) ``` Here is my training script on a machine with two A100s: ```bash export TORCH_DISTRIBUTED_DEBUG=DETAIL export TORCH_SHOW_CPP_STACKTRACES=1 export NCCL_DEBUG=INFO export NCCL_DEBUG_SUBSYS=INIT,COLL,ENV nohup torchrun --nproc_per_node 2 train.py experiments/de-big.json >logs/de-big.log 2>&1& ``` I am not sure whether the error above is related to my dataset code when using DDP. I also noticed PR #5369, but I don't know when and where I should use the function `split_dataset_by_node`. @lhoestq I hope you can help me?
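For reference, a minimal sketch of how `split_dataset_by_node` is typically wired into DDP; it is only needed for streaming/iterable datasets, and the script path below is hypothetical:

```python
import os

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# torchrun exports RANK and WORLD_SIZE for each process it launches.
rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])

# Hypothetical script path; streaming=True yields an IterableDataset.
ds = load_dataset("retrieval_dataset.py", streaming=True, split="train")
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)
# Each rank now iterates over a disjoint subset of the stream.
```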
https://api.github.com/repos/huggingface/datasets
null
1,659,837,510
https://api.github.com/repos/huggingface/datasets/issues/5722/comments
I_kwDODunzps5i7xxG
null
5,722
https://api.github.com/repos/huggingface/datasets/issues/5722/events
false
open
2023-04-08T23:55:12Z
null
https://api.github.com/repos/huggingface/datasets/issues/5721
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5721/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5721/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1841186?v=4", "events_url": "https://api.github.com/users/cyrilzakka/events{/privacy}", "followers_url": "https://api.github.com/users/cyrilzakka/followers", "following_url": "https://api.github.com/users/cyrilzakka/following{/other_user}", "gists_url": "https://api.github.com/users/cyrilzakka/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyrilzakka", "id": 1841186, "login": "cyrilzakka", "node_id": "MDQ6VXNlcjE4NDExODY=", "organizations_url": "https://api.github.com/users/cyrilzakka/orgs", "received_events_url": "https://api.github.com/users/cyrilzakka/received_events", "repos_url": "https://api.github.com/users/cyrilzakka/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyrilzakka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyrilzakka/subscriptions", "type": "User", "url": "https://api.github.com/users/cyrilzakka" }
https://github.com/huggingface/datasets/issues/5721
[]
false
2023-04-08T23:55:12Z
null
null
[]
null
[]
Calling datasets.load_dataset("text" ...) results in a wrong split.
NONE
https://api.github.com/repos/huggingface/datasets/issues/5721/timeline
### Describe the bug When creating a text dataset, the training split should have the bulk of the examples by default. Currently, testing does. ### Steps to reproduce the bug I have a folder with 18K text files in it. Each text file essentially consists of a document or article scraped from the web. Calling the following code: ``` folder_path = "/home/cyril/Downloads/llama_dataset" data = datasets.load_dataset("text", data_dir=folder_path) data.save_to_disk("/home/cyril/Downloads/data.hf") data = datasets.load_from_disk("/home/cyril/Downloads/data.hf") print(data) ``` Results in the following split: ``` DatasetDict({ train: Dataset({ features: ['text'], num_rows: 2114 }) test: Dataset({ features: ['text'], num_rows: 200882 }) validation: Dataset({ features: ['text'], num_rows: 152 }) }) ``` It seems to me like the train/test/validation splits are in the wrong order, since the test split is vastly larger than the train split. ### Expected behavior The train split should have the bulk of the training examples. ### Environment info datasets 2.11.0, python 3.10.6
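A common way to sidestep the mis-detected splits is to pass `data_files` explicitly (sketch; assumes the files end in `.txt`):

```python
import datasets

folder_path = "/home/cyril/Downloads/llama_dataset"
# Pinning every matching file to "train" bypasses the filename-based split
# detection that is routing most files into the test split.
data = datasets.load_dataset("text", data_files={"train": f"{folder_path}/*.txt"})
print(data)
```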
https://api.github.com/repos/huggingface/datasets
null
1,659,680,682
https://api.github.com/repos/huggingface/datasets/issues/5721/comments
I_kwDODunzps5i7Leq
null
5,721
https://api.github.com/repos/huggingface/datasets/issues/5721/events
false
open
2023-04-08T18:45:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/5720
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5720/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5720/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29244648?v=4", "events_url": "https://api.github.com/users/jlehrer1/events{/privacy}", "followers_url": "https://api.github.com/users/jlehrer1/followers", "following_url": "https://api.github.com/users/jlehrer1/following{/other_user}", "gists_url": "https://api.github.com/users/jlehrer1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jlehrer1", "id": 29244648, "login": "jlehrer1", "node_id": "MDQ6VXNlcjI5MjQ0NjQ4", "organizations_url": "https://api.github.com/users/jlehrer1/orgs", "received_events_url": "https://api.github.com/users/jlehrer1/received_events", "repos_url": "https://api.github.com/users/jlehrer1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jlehrer1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlehrer1/subscriptions", "type": "User", "url": "https://api.github.com/users/jlehrer1" }
https://github.com/huggingface/datasets/issues/5720
[]
false
2023-05-27T12:57:08Z
null
null
[ "Edit: This behavior is true even without `.take/.set`", "I'm experiencing the same problem that @jlehrer1. I was able to reproduce it with a very small example:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\ndef my_gen():\r\n for i in range(1, 4):\r\n yield {\"a\": i}\r\n\r\n# Saving the dataset as a parquet file\r\ndataset = Dataset.from_generator(my_gen)\r\ntrain_path = \"/tmp/test.parquet\"\r\ndataset.to_parquet(train_path)\r\n\r\n# Creating a local dataset from the parquet file\r\ndata_files = {\"train\": [str(train_path)]}\r\nbuilder = load_dataset_builder(\"parquet\", data_files=data_files)\r\nbuilder.download_and_prepare(\"/tmp/test_ds\", file_format=\"parquet\")\r\n\r\n# Loading the dataset from the local directory as streaming\r\ndataset = load_dataset(\"parquet\", data_dir=\"/tmp/test_ds\", split=\"train\", streaming=True)\r\ndataset.with_format(\"torch\")\r\n\r\ndl = DataLoader(dataset, batch_size=2, num_workers=1)\r\nfor row in dl:\r\n print(row)\r\n```\r\n\r\nMy env info:\r\n```\r\ndatasets 2.11.0\r\ntorch 2.0.0\r\ntorchvision 0.15.1\r\nPython 3.9.16\r\n```\r\n\r\nNote that the example above doesn't fail if the number of workers used is `0`", "I cannot reproduce this error, not even with your MRE @ivanprado (your env appears to be the same as Colab's, and your code runs there without issues). ", "@mariosasko you are right, it works on Colab. I digged deeper and found that the problem arises when the multiprocessing method is set to be `spawn`. This code reproduces the problem in Colab:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\nimport multiprocessing as mp\r\n\r\nmp.set_start_method('spawn')\r\n\r\ndef my_gen():\r\n for i in range(1, 4):\r\n yield {\"a\": i}\r\n\r\n\r\ndef main():\r\n # Saving the dataset as a parquet file\r\n dataset = Dataset.from_generator(my_gen)\r\n train_path = \"/tmp/test.parquet\"\r\n dataset.to_parquet(train_path)\r\n\r\n # Creating a local dataset from the parquet file\r\n data_files = {\"train\": [str(train_path)]}\r\n builder = load_dataset_builder(\"parquet\", data_files=data_files)\r\n builder.download_and_prepare(\"/tmp/test_ds\", file_format=\"parquet\")\r\n\r\n # Loading the dataset from the local directory as streaming\r\n dataset = load_dataset(\"parquet\", data_dir=\"/tmp/test_ds\", split=\"train\", streaming=True)\r\n dataset.with_format(\"torch\")\r\n\r\n dl = DataLoader(dataset, batch_size=2, num_workers=1)\r\n for row in dl:\r\n print(row)\r\n\r\nmain()\r\n```", "So is there a way to fix this by changing the `mp` method? This is blocking any usage of the `datasets` library for me", "@jlehrer1 can you try adding `mp.set_start_method('fork')` at the beginning of your code? Maybe this helps you. Keep us posted. ", "I have a similar issue: \r\n> mp.set_start_method('fork')\r\n\r\n\r\nDidnt work" ]
null
[]
Streaming IterableDatasets do not work with torch DataLoaders
NONE
https://api.github.com/repos/huggingface/datasets/issues/5720/timeline
### Describe the bug When using streaming datasets set up with train/val split using `.skip()` and `.take()`, the following error occurs when iterating over a torch dataloader: ``` File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 363, in __iter__ self._iterator = self._get_iterator() File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 314, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 927, in __init__ w.start() File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object '_generate_examples_from_tables_wrapper.<locals>.wrapper' ``` To reproduce, run the code ``` from datasets import load_dataset data = load_dataset(args.dataset_name, split="train", streaming=True) train_len = 5000 val_len = 100 train, val = data.take(train_len), data.skip(train_len).take(val_len) traindata = IterableClipDataset(data, context_length=args.max_len, tokenizer=tokenizer, image_key="url", text_key="text") traindata = DataLoader(traindata, batch_size=args.batch_size, num_workers=args.num_workers, persistent_workers=True) ``` Where the class IterableClipDataset is a simple wrapper to cast the dataset to a torch iterabledataset, defined via ``` from torch.utils.data import Dataset, IterableDataset from torchvision.transforms import Compose, Resize, ToTensor from transformers import AutoTokenizer import requests from PIL import Image class IterableClipDataset(IterableDataset): def __init__(self, dataset, context_length: int, image_transform=None, tokenizer=None, image_key="image", text_key="text"): self.dataset = dataset self.context_length = context_length self.image_transform = Compose([Resize((224, 224)), ToTensor()]) if image_transform is None else image_transform self.tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") if tokenizer is None else tokenizer self.image_key = image_key self.text_key = text_key def read_image(self, url: str): try: # Try to read the image image = Image.open(requests.get(url, stream=True).raw) except: image = Image.new("RGB", (224, 224), (0, 0, 0)) return image def process_sample(self, image, text): if isinstance(image, str): image = self.read_image(image) if self.image_transform is not None: image = self.image_transform(image) text = self.tokenizer.encode( text, add_special_tokens=True, max_length=self.context_length, truncation=True, padding="max_length" ) text = 
torch.tensor(text, dtype=torch.long) return image, text def __iter__(self): for sample in self.dataset: image, text = sample[self.image_key], sample[self.text_key] yield self.process_sample(image, text) ``` ### Steps to reproduce the bug Steps to reproduce 1. Install `datasets`, `torch`, and `PIL` (if you want to reproduce exactly) 2. Run the code above ### Expected behavior Batched data is produced from the dataloader ### Environment info ``` datasets == 2.9.0 python == 3.9.12 torch == 1.11.0 ```
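Consolidating the workarounds reported in the thread into one sketch (neither is an official fix, and `fork` is unavailable on Windows):

```python
import multiprocessing as mp

from datasets import load_dataset
from torch.utils.data import DataLoader

data = load_dataset("c4", "en", split="train", streaming=True)

# Workaround 1: iterate in the main process, so the dataset's generator
# closure never has to be pickled.
dl = DataLoader(data, batch_size=2, num_workers=0)

# Workaround 2 (POSIX only): fork workers instead of spawning them, so the
# unpicklable local object is inherited rather than serialized.
mp.set_start_method("fork", force=True)
dl = DataLoader(data, batch_size=2, num_workers=1)
```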
https://api.github.com/repos/huggingface/datasets
null
1,659,610,705
https://api.github.com/repos/huggingface/datasets/issues/5720/comments
I_kwDODunzps5i66ZR
null
5,720
https://api.github.com/repos/huggingface/datasets/issues/5720/events
false
closed
2023-04-07T21:04:08Z
null
https://api.github.com/repos/huggingface/datasets/issues/5719
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5719/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5719/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4", "events_url": "https://api.github.com/users/off99555/events{/privacy}", "followers_url": "https://api.github.com/users/off99555/followers", "following_url": "https://api.github.com/users/off99555/following{/other_user}", "gists_url": "https://api.github.com/users/off99555/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/off99555", "id": 15215732, "login": "off99555", "node_id": "MDQ6VXNlcjE1MjE1NzMy", "organizations_url": "https://api.github.com/users/off99555/orgs", "received_events_url": "https://api.github.com/users/off99555/received_events", "repos_url": "https://api.github.com/users/off99555/repos", "site_admin": false, "starred_url": "https://api.github.com/users/off99555/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/off99555/subscriptions", "type": "User", "url": "https://api.github.com/users/off99555" }
https://github.com/huggingface/datasets/issues/5719
[]
false
2023-04-20T15:34:41Z
2023-04-20T15:34:41Z
null
[ "Hi! \r\n\r\nYou need to set the format to `np` before indexing the dataset to get NumPy arrays:\r\n```python\r\nfeatures = Features(dict(seq=Array2D((2,2), 'float32'))) \r\nds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)\r\nds.set_format(\"np\")\r\na = ds[0]['seq']\r\n```\r\n\r\n> I think it should not be the expected behavior especially when I feed a numpy array as input to the data creation function. Why is it converting my array into a list?\r\n\r\nThe same dataset can have examples in different types (Numpy arrays, Torch tensors, Pandas series, etc.), so recovering them all would be slow and impractical. Instead, the design of our formatting API is similar to Arrow's (the lib we use internally to store data on disk/ in RAM), which allows converting a batch of data to Python/Numpy/Pandas in a single call (and uses C++ to do so to make it faster).\r\n\r\n> Also if I change the first dimension of the Array2D shape to None, it's returning array correctly.\r\n\r\nSetting the first dimension to `None` makes it variable-length (allows passing arrays with the first dimensions of differing lengths).\r\n", "Current behavior when indexing the dataset:\r\n- Using `Array((2,2))` returns a list of lists.\r\n- Using `Array((None,2))` returns a numpy array.\r\n\r\nDon't you think this is kind of unexpected behavior from end-user perspective? \r\nAs a user, I expect that when I use `Array2D`, the behavior needs to be consistent even if I specify None or not. It should either return a list or an array. It needs to choose one. Let's say if it always return a list, then I will call `ds.set_format('np')` no problem.\r\n\r\nThe consistency can be in any of these aspects:\r\n1. preserves the type of the input data (in this case, a numpy array)\r\n2. ensure the output type is always the same (it can be either list or array, but it needs to be one of them)\r\n\r\nRight now the API doesn't conform to any of these aspects. But I think it needs to conform to one.", "I thought we made this consistent by returning lists in both scenarios...", "Fixed in #5751 " ]
completed
[]
Array2D feature creates a list of list instead of a numpy array
NONE
https://api.github.com/repos/huggingface/datasets/issues/5719/timeline
### Describe the bug I'm not sure if this is expected behavior or not. When I create a 2D array using `Array2D`, the data has list type instead of numpy array type. I think this should not be the expected behavior, especially when I feed a numpy array as input to the data creation function. Why is it converting my array into a list? Also, if I change the first dimension of the `Array2D` shape to None, it returns an array correctly. ### Steps to reproduce the bug Run this code: ```py from datasets import Dataset, Features, Array2D import numpy as np # you have to change the first dimension of the shape to None to make it return an array features = Features(dict(seq=Array2D((2,2), 'float32'))) ds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features) a = ds[0]['seq'] print(a) print(type(a)) ``` The following will be printed in stdout: ``` [[0.8127174377441406, 0.3760348856449127], [0.7510159611701965, 0.4322739541530609]] <class 'list'> ``` ### Expected behavior Each indexed item should be a list or numpy array. Currently, `Array((2,2))` yields a list but `Array((None,2))` yields an array. ### Environment info - `datasets` version: 2.11.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.13 - Huggingface_hub version: 0.13.4 - PyArrow version: 11.0.0 - Pandas version: 1.4.4
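The remedy from the thread, as a self-contained sketch:

```python
import numpy as np

from datasets import Array2D, Dataset, Features

features = Features({"seq": Array2D((2, 2), "float32")})
ds = Dataset.from_dict({"seq": [np.random.rand(2, 2)]}, features=features)

# set_format("np") makes indexing return NumPy arrays instead of nested
# Python lists, regardless of whether the leading dimension is fixed.
ds.set_format("np")
print(type(ds[0]["seq"]))  # <class 'numpy.ndarray'>
```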
https://api.github.com/repos/huggingface/datasets
null
1,659,203,222
https://api.github.com/repos/huggingface/datasets/issues/5719/comments
I_kwDODunzps5i5W6W
null
5,719
https://api.github.com/repos/huggingface/datasets/issues/5719/events
false
closed
2023-04-07T16:01:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/5718
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5718/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5718/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5718
[]
false
2023-04-27T14:43:13Z
2023-04-27T14:35:52Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718\r\n```\r\nFAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random']\r\n At index 0 diff: 'random' != 'train'\r\n Full diff:\r\n - ['train', 'random']\r\n + ['random', 'train']\r\n```\r\nI have checked locally and found out that the data split order is nondeterministic. I am addressing this in a separate issue.\r\n\r\nWe should first address:\r\n- #5728 \r\n- #5729", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007728 / 0.011353 (-0.003624) | 0.005275 / 0.011008 (-0.005734) | 0.097708 / 0.038508 (0.059199) | 0.039851 / 0.023109 (0.016741) | 0.333360 / 0.275898 (0.057462) | 0.376135 / 0.323480 (0.052655) | 0.006355 / 0.007986 (-0.001630) | 0.004193 / 0.004328 (-0.000135) | 0.072882 / 0.004250 (0.068631) | 0.052668 / 0.037052 (0.015615) | 0.347359 / 0.258489 (0.088870) | 0.382280 / 0.293841 (0.088440) | 0.035996 / 0.128546 (-0.092550) | 0.012517 / 0.075646 (-0.063129) | 0.334520 / 0.419271 (-0.084751) | 0.051969 / 0.043533 (0.008436) | 0.335735 / 0.255139 (0.080596) | 0.359921 / 0.283200 (0.076722) | 0.113971 / 0.141683 (-0.027712) | 1.465636 / 1.452155 (0.013481) | 1.559824 / 1.492716 (0.067108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223997 / 0.018006 (0.205991) | 0.499041 / 0.000490 (0.498551) | 0.009697 / 0.000200 (0.009497) | 0.000245 / 0.000054 (0.000190) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027031 / 0.037411 (-0.010381) | 0.110271 / 0.014526 (0.095745) | 0.115848 / 0.176557 (-0.060709) | 0.174253 / 0.737135 (-0.562883) | 0.122616 / 0.296338 (-0.173723) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted 
tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417275 / 0.215209 (0.202066) | 4.158678 / 2.077655 (2.081023) | 1.917585 / 1.504120 (0.413465) | 1.722219 / 1.541195 (0.181025) | 1.813284 / 1.468490 (0.344793) | 0.707193 / 4.584777 (-3.877584) | 3.853545 / 3.745712 (0.107833) | 3.369240 / 5.269862 (-1.900621) | 1.820264 / 4.565676 (-2.745412) | 0.087340 / 0.424275 (-0.336936) | 0.012305 / 0.007607 (0.004698) | 0.520326 / 0.226044 (0.294281) | 5.107383 / 2.268929 (2.838455) | 2.413977 / 55.444624 (-53.030647) | 2.074356 / 6.876477 (-4.802121) | 2.255959 / 2.142072 (0.113887) | 0.849850 / 4.805227 (-3.955377) | 0.170116 / 6.500664 (-6.330548) | 0.067203 / 0.075469 (-0.008267) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.168158 / 1.841788 (-0.673629) | 15.046312 / 8.074308 (6.972004) | 15.113924 / 10.191392 (4.922532) | 0.145288 / 0.680424 (-0.535136) | 0.017959 / 0.534201 (-0.516242) | 0.424666 / 0.579283 (-0.154617) | 0.422560 / 0.434364 (-0.011804) | 0.526386 / 0.540337 (-0.013952) | 0.623755 / 1.386936 (-0.763181) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007676 / 0.011353 (-0.003677) | 0.005240 / 0.011008 (-0.005769) | 0.074668 / 0.038508 (0.036160) | 0.035570 / 0.023109 (0.012461) | 0.348524 / 0.275898 (0.072626) | 0.378157 / 0.323480 (0.054677) | 0.006112 / 0.007986 (-0.001873) | 0.005641 / 0.004328 (0.001312) | 0.073536 / 0.004250 (0.069286) | 0.048651 / 0.037052 (0.011599) | 0.359282 / 0.258489 (0.100793) | 0.385961 / 0.293841 (0.092120) | 0.035417 / 0.128546 (-0.093129) | 0.012227 / 0.075646 (-0.063419) | 0.085725 / 0.419271 (-0.333546) | 0.049651 / 
0.043533 (0.006118) | 0.344122 / 0.255139 (0.088983) | 0.364795 / 0.283200 (0.081595) | 0.112711 / 0.141683 (-0.028972) | 1.426823 / 1.452155 (-0.025332) | 1.534745 / 1.492716 (0.042029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201728 / 0.018006 (0.183721) | 0.448533 / 0.000490 (0.448043) | 0.003554 / 0.000200 (0.003354) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030917 / 0.037411 (-0.006494) | 0.117966 / 0.014526 (0.103440) | 0.125954 / 0.176557 (-0.050602) | 0.176382 / 0.737135 (-0.560753) | 0.130757 / 0.296338 (-0.165582) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422167 / 0.215209 (0.206958) | 4.213948 / 2.077655 (2.136294) | 2.040049 / 1.504120 (0.535929) | 1.858317 / 1.541195 (0.317122) | 1.937108 / 1.468490 (0.468618) | 0.707797 / 4.584777 (-3.876979) | 3.831061 / 3.745712 (0.085349) | 3.373711 / 5.269862 (-1.896151) | 1.590343 / 4.565676 (-2.975333) | 0.086672 / 0.424275 (-0.337603) | 0.012429 / 0.007607 (0.004821) | 0.520269 / 0.226044 (0.294225) | 5.207285 / 2.268929 (2.938357) | 2.518107 / 55.444624 (-52.926517) | 2.230696 / 6.876477 (-4.645781) | 2.363164 / 2.142072 (0.221091) | 0.836749 / 4.805227 (-3.968479) | 0.169676 / 6.500664 (-6.330988) | 0.065766 / 0.075469 (-0.009703) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251195 / 1.841788 (-0.590592) | 15.196091 / 8.074308 (7.121782) | 14.991600 / 10.191392 (4.800208) | 0.165335 / 0.680424 (-0.515089) | 0.017789 / 0.534201 (-0.516412) | 0.433863 / 0.579283 (-0.145420) | 0.428660 / 0.434364 (-0.005704) | 0.527385 / 0.540337 (-0.012952) | 0.628067 / 1.386936 (-0.758869) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d06b8c21ba98ae85971a2b1d135ac2ef035b59c9 \"CML watermark\")\n" ]
null
[]
Reorder default data splits to have validation before test
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5718/timeline
This PR reorders data splits so that, by default, validation appears before test. The default order becomes [train, validation, test] instead of [train, test, validation].
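A minimal sketch of the resulting behavior (the repository id is hypothetical; any dataset exposing all three splits would do):

```python
from datasets import load_dataset

ds = load_dataset("user/dataset-with-three-splits")  # hypothetical repo id

# With this PR, the resolved split order defaults to:
print(list(ds.keys()))  # ['train', 'validation', 'test'] rather than ['train', 'test', 'validation']
```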
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5718.diff", "html_url": "https://github.com/huggingface/datasets/pull/5718", "merged_at": "2023-04-27T14:35:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/5718.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5718" }
1,658,958,406
https://api.github.com/repos/huggingface/datasets/issues/5718/comments
PR_kwDODunzps5N2IZC
null
5,718
https://api.github.com/repos/huggingface/datasets/issues/5718/events
true
open
2023-04-07T11:59:17Z
null
https://api.github.com/repos/huggingface/datasets/issues/5717
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5717/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5717/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
https://github.com/huggingface/datasets/issues/5717
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
false
2023-11-08T11:08:18Z
null
null
[ "Looks like as long as the number of shards makes a batch lower than 1000 images it works. In my training set I have 40K images. If I use `num_shards=40` (batch of 1000 images) I get the error, but if I update it to `num_shards=50` (batch of 800 images) it works.\r\n\r\nI will be happy to share my dataset privately if it can help to better debug.", "Hi! I didn't manage to reproduce this behavior, so sharing the dataset with us would help a lot. \r\n\r\n> My dataset is around 50K images, is this error might be due to a bad image?\r\n\r\nThis shouldn't be the case as we save raw data to disk without decoding it.", "OK, thanks! The dataset is currently hosted on a gcs bucket. How would you like to proceed for sharing the link? ", "You could follow [this](https://cloud.google.com/storage/docs/collaboration#browser) procedure or upload the dataset to Google Drive (50K images is not that much unless high-res) and send me an email with the link.", "Thanks @mariosasko. I just sent you the GDrive link.", "Thanks @jplu! I managed to reproduce the `TypeError` - it stems from [this](https://github.com/huggingface/datasets/blob/e3f4f124a1b118a5bfff5bae76b25a68aedbebbc/src/datasets/features/image.py#L258-L264) line, which can return a `ChunkedArray` (its mask is also chunked then, which Arrow does not allow) when the embedded data is too big to fit in a standard `Array`.\r\n\r\nI'm working on a fix.", "@yairl-dn You should be able to bypass this issue by reducing `datasets.config.DEFAULT_MAX_BATCH_SIZE` (1000 by default)\r\n\r\nIn Datasets 3.0, the Image storage format will be simplified, so this should be easier to fix then.", "The same error occurs with my save_to_disk() of Audio() items. I still get it with:\r\n```python\r\nimport datasets\r\ndatasets.config.DEFAULT_MAX_BATCH_SIZE=35\r\nfrom datasets import Features, Array2D, Value, Dataset, Sequence, Audio\r\n```\r\n\r\n```\r\nSaving the dataset (41/47 shards): 88%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 297/339 [01:21<00:11, 3.65 examples/s]\r\nTraceback (most recent call last):\r\nFile \"/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py\", line 155, in <module>\r\ncreate_dataset(args)\r\nFile \"/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py\", line 137, in create_dataset\r\nhf_dataset.save_to_disk(args.outds)\r\nFile \"/home/j/src/py/datasets/src/datasets/arrow_dataset.py\", line 1532, in save_to_disk\r\nfor job_id, done, content in Dataset._save_to_disk_single(**kwargs):\r\nFile \"/home/j/src/py/datasets/src/datasets/arrow_dataset.py\", line 1563, in _save_to_disk_single\r\nwriter.write_table(pa_table)\r\nFile \"/home/j/src/py/datasets/src/datasets/arrow_writer.py\", line 574, in write_table\r\npa_table = embed_table_storage(pa_table)\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 2307, in embed_table_storage\r\narrays = [\r\n^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 2308, in <listcomp>\r\nembed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name]\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 1831, in wrapper\r\nreturn pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 1831, in <listcomp>\r\nreturn 
pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/table.py\", line 2177, in embed_array_storage\r\nreturn feature.embed_storage(array)\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/home/j/src/py/datasets/src/datasets/features/audio.py\", line 276, in embed_storage\r\nstorage = pa.StructArray.from_arrays([bytes_array, path_array], [\"bytes\", \"path\"], mask=bytes_array.is_null())\r\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"pyarrow/array.pxi\", line 2850, in pyarrow.lib.StructArray.from_arrays\r\nFile \"pyarrow/array.pxi\", line 3290, in pyarrow.lib.c_mask_inverted_from_obj\r\nTypeError: Mask must be a pyarrow.Array of type boolean\r\n```" ]
null
[]
Error when saving a dataset of images to disk
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5717/timeline
### Describe the bug Hello! I have an issue when I try to save my dataset of images to disk. The error I get is: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1442, in save_to_disk for job_id, done, content in Dataset._save_to_disk_single(**kwargs): File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1473, in _save_to_disk_single writer.write_table(pa_table) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_writer.py", line 570, in write_table pa_table = embed_table_storage(pa_table) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2268, in embed_table_storage arrays = [ File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2269, in <listcomp> embed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name] File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2142, in embed_array_storage return feature.embed_storage(array) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/features/image.py", line 269, in embed_storage storage = pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"], mask=bytes_array.is_null()) File "pyarrow/array.pxi", line 2766, in pyarrow.lib.StructArray.from_arrays File "pyarrow/array.pxi", line 2961, in pyarrow.lib.c_mask_inverted_from_obj TypeError: Mask must be a pyarrow.Array of type boolean ``` My dataset is around 50K images; might this error be due to a bad image? Thanks for the help. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="/path/to/dataset") dataset["train"].save_to_disk("./myds", num_shards=40) ``` ### Expected behavior Having my dataset properly saved to disk. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
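A hedged sketch of the workarounds discussed in the comments above (the batch size value is illustrative; the first comment reports that `num_shards=50` works where `num_shards=40` fails):

```python
import datasets
from datasets import load_dataset

# Smaller write batches keep the embedded image bytes per Arrow array below the limit
datasets.config.DEFAULT_MAX_BATCH_SIZE = 500  # default is 1000

dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
# More shards also means fewer images per written batch
dataset["train"].save_to_disk("./myds", num_shards=50)
```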
https://api.github.com/repos/huggingface/datasets
null
1,658,729,866
https://api.github.com/repos/huggingface/datasets/issues/5717/comments
I_kwDODunzps5i3jWK
null
5,717
https://api.github.com/repos/huggingface/datasets/issues/5717/events
false
closed
2023-04-07T09:51:40Z
null
https://api.github.com/repos/huggingface/datasets/issues/5716
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5716/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5716/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4", "events_url": "https://api.github.com/users/v-yunbin/events{/privacy}", "followers_url": "https://api.github.com/users/v-yunbin/followers", "following_url": "https://api.github.com/users/v-yunbin/following{/other_user}", "gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/v-yunbin", "id": 38179632, "login": "v-yunbin", "node_id": "MDQ6VXNlcjM4MTc5NjMy", "organizations_url": "https://api.github.com/users/v-yunbin/orgs", "received_events_url": "https://api.github.com/users/v-yunbin/received_events", "repos_url": "https://api.github.com/users/v-yunbin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions", "type": "User", "url": "https://api.github.com/users/v-yunbin" }
https://github.com/huggingface/datasets/issues/5716
[]
false
2023-09-27T17:47:08Z
2023-09-27T17:47:08Z
null
[ "Hi! Can you share one of the problematic audio files with us?\r\n\r\nI tried to reproduce the error with the following code: \r\n```python\r\nimport soundfile as sf\r\nimport numpy as np\r\nfrom datasets import Audio\r\n\r\nsf.write(\"empty.wav\", np.array([]), 16000)\r\nAudio(sampling_rate=24000).decode_example({\"path\": \"empty.wav\", \"bytes\": None})\r\n```\r\nBut without success.\r\n\r\nAlso, what version of `librosa` is installed in your env? (You can get this info with `python -c \"import librosa; print(librosa.__version__)`)\r\n\r\n", "I'm closing this issue as the reproducer hasn't been provided." ]
completed
[]
Handle empty audio
NONE
https://api.github.com/repos/huggingface/datasets/issues/5716/timeline
Some audio paths exist, but the files are empty, and an error is raised when reading them. How can I use the filter function to skip these empty audio paths? When an audio file is empty, resampling breaks: `array, sampling_rate = sf.read(f) array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)`
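One possible way to do this, sketched under the assumption that the column is named `audio` and that empty files can be detected by size (the 44-byte threshold is a heuristic for a bare WAV header):

```python
import os
from datasets import Audio, load_dataset

ds = load_dataset("audiofolder", data_dir="/path/to/audio", split="train")  # illustrative

# Disable decoding so filter() sees raw paths instead of trying to read the audio
ds = ds.cast_column("audio", Audio(decode=False))
ds = ds.filter(lambda ex: ex["audio"]["path"] is not None and os.path.getsize(ex["audio"]["path"]) > 44)
# Re-enable decoding (with resampling) for the remaining, non-empty files
ds = ds.cast_column("audio", Audio(sampling_rate=16000))
```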
https://api.github.com/repos/huggingface/datasets
null
1,658,613,092
https://api.github.com/repos/huggingface/datasets/issues/5716/comments
I_kwDODunzps5i3G1k
null
5,716
https://api.github.com/repos/huggingface/datasets/issues/5716/events
false
closed
2023-04-06T13:57:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/5715
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5715/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5715/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/34066771?v=4", "events_url": "https://api.github.com/users/jungbaepark/events{/privacy}", "followers_url": "https://api.github.com/users/jungbaepark/followers", "following_url": "https://api.github.com/users/jungbaepark/following{/other_user}", "gists_url": "https://api.github.com/users/jungbaepark/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jungbaepark", "id": 34066771, "login": "jungbaepark", "node_id": "MDQ6VXNlcjM0MDY2Nzcx", "organizations_url": "https://api.github.com/users/jungbaepark/orgs", "received_events_url": "https://api.github.com/users/jungbaepark/received_events", "repos_url": "https://api.github.com/users/jungbaepark/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jungbaepark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungbaepark/subscriptions", "type": "User", "url": "https://api.github.com/users/jungbaepark" }
https://github.com/huggingface/datasets/issues/5715
[]
false
2023-04-20T17:16:26Z
2023-04-20T17:16:26Z
null
[ "Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n " ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Return NumPy Array (fixed length) Mode, in __getitem__, Instead of List
NONE
https://api.github.com/repos/huggingface/datasets/issues/5715/timeline
### Feature request There is an old, well-known but easily forgotten problem with multiprocessing in the PyTorch dataloader: RAM or shared-memory usage grows too high when we set num_workers > 1 and the dataset or dataloader returns a "List" or "Dict". https://github.com/pytorch/pytorch/issues/13246 With huggingface datasets, unfortunately, the default return type is a list, so the problem arises all too often unless we configure something to avoid it. However, the issue goes away when the returned output has a fixed length. Therefore, I request a mode that returns fixed-length outputs (e.g. NumPy arrays) rather than lists. The design could look like this when loading datasets: ```python load_dataset(..., with_return_as_fixed_tensor=True) ``` ### Motivation The general solution for this issue is already in the comments: https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662 : NumPy and Pandas do not seem to have this problem, even though both support the string type. (I'm not sure whether the Sequence feature of huggingface datasets can solve this problem as well.) ### Your contribution I'll read it! Thanks.
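For reference, a minimal sketch of the existing workaround pointed out in the comment above (the repository id is hypothetical):

```python
from datasets import load_dataset

ds = load_dataset("user/some-dataset", split="train")  # hypothetical
ds.set_format("np")  # __getitem__ now returns NumPy arrays instead of Python lists
sample = ds[0]       # fewer per-element Python objects, which is what triggers the PyTorch issue
```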
https://api.github.com/repos/huggingface/datasets
null
1,657,479,788
https://api.github.com/repos/huggingface/datasets/issues/5715/comments
I_kwDODunzps5iyyJs
null
5,715
https://api.github.com/repos/huggingface/datasets/issues/5715/events
false
closed
2023-04-06T13:01:45Z
null
https://api.github.com/repos/huggingface/datasets/issues/5714
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5714/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/5714/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/5714
[]
false
2023-04-07T09:23:54Z
2023-04-07T09:16:57Z
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006498 / 0.011353 (-0.004855) | 0.004406 / 0.011008 (-0.006602) | 0.097136 / 0.038508 (0.058628) | 0.027711 / 0.023109 (0.004601) | 0.303092 / 0.275898 (0.027194) | 0.336804 / 0.323480 (0.013324) | 0.004838 / 0.007986 (-0.003148) | 0.004533 / 0.004328 (0.000204) | 0.075062 / 0.004250 (0.070812) | 0.035105 / 0.037052 (-0.001947) | 0.310245 / 0.258489 (0.051756) | 0.347086 / 0.293841 (0.053245) | 0.030867 / 0.128546 (-0.097679) | 0.011436 / 0.075646 (-0.064211) | 0.320728 / 0.419271 (-0.098544) | 0.042303 / 0.043533 (-0.001230) | 0.308177 / 0.255139 (0.053038) | 0.333673 / 0.283200 (0.050473) | 0.084736 / 0.141683 (-0.056947) | 1.477391 / 1.452155 (0.025237) | 1.530399 / 1.492716 (0.037682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212698 / 0.018006 (0.194692) | 0.409098 / 0.000490 (0.408608) | 0.004202 / 0.000200 (0.004002) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022725 / 0.037411 (-0.014686) | 0.095866 / 0.014526 (0.081340) | 0.104153 / 0.176557 (-0.072404) | 0.162964 / 0.737135 (-0.574171) | 0.106505 / 0.296338 (-0.189834) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431336 / 0.215209 (0.216127) | 4.283290 / 2.077655 (2.205635) | 
1.982418 / 1.504120 (0.478298) | 1.762104 / 1.541195 (0.220909) | 1.807528 / 1.468490 (0.339038) | 0.695507 / 4.584777 (-3.889270) | 3.376299 / 3.745712 (-0.369413) | 1.856642 / 5.269862 (-3.413219) | 1.154258 / 4.565676 (-3.411419) | 0.082749 / 0.424275 (-0.341526) | 0.012289 / 0.007607 (0.004682) | 0.525842 / 0.226044 (0.299798) | 5.285764 / 2.268929 (3.016835) | 2.389926 / 55.444624 (-53.054698) | 2.021830 / 6.876477 (-4.854646) | 2.107460 / 2.142072 (-0.034612) | 0.808118 / 4.805227 (-3.997109) | 0.150791 / 6.500664 (-6.349873) | 0.065825 / 0.075469 (-0.009644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206939 / 1.841788 (-0.634849) | 13.795902 / 8.074308 (5.721594) | 14.107950 / 10.191392 (3.916558) | 0.144300 / 0.680424 (-0.536124) | 0.016478 / 0.534201 (-0.517723) | 0.379395 / 0.579283 (-0.199888) | 0.388437 / 0.434364 (-0.045927) | 0.451443 / 0.540337 (-0.088894) | 0.523142 / 1.386936 (-0.863794) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006503 / 0.011353 (-0.004850) | 0.004578 / 0.011008 (-0.006430) | 0.076278 / 0.038508 (0.037770) | 0.028052 / 0.023109 (0.004943) | 0.337873 / 0.275898 (0.061975) | 0.371368 / 0.323480 (0.047888) | 0.005086 / 0.007986 (-0.002899) | 0.003354 / 0.004328 (-0.000975) | 0.076876 / 0.004250 (0.072625) | 0.039146 / 0.037052 (0.002093) | 0.340299 / 0.258489 (0.081810) | 0.381209 / 0.293841 (0.087368) | 0.031771 / 0.128546 (-0.096775) | 0.011670 / 0.075646 (-0.063976) | 0.085156 / 0.419271 (-0.334116) | 0.041990 / 0.043533 (-0.001543) | 0.338644 / 0.255139 (0.083505) | 0.362461 / 0.283200 (0.079262) | 0.089772 / 0.141683 (-0.051911) | 1.480341 / 1.452155 (0.028187) | 1.562815 / 1.492716 (0.070099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205700 / 0.018006 (0.187694) | 0.402206 / 0.000490 (0.401716) | 0.001212 / 0.000200 (0.001012) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025172 / 0.037411 (-0.012240) | 0.100959 / 0.014526 (0.086433) | 0.108464 / 0.176557 (-0.068093) | 0.161321 / 0.737135 (-0.575814) | 0.114245 / 0.296338 (-0.182093) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437425 / 0.215209 (0.222216) | 4.362212 / 2.077655 (2.284557) | 2.068815 / 1.504120 (0.564695) | 1.864089 / 1.541195 (0.322894) | 1.909038 / 1.468490 (0.440548) | 0.696097 / 4.584777 (-3.888680) | 3.358628 / 3.745712 (-0.387084) | 2.999085 / 5.269862 (-2.270777) | 1.533917 / 4.565676 (-3.031760) | 0.083010 / 0.424275 (-0.341266) | 0.012372 / 0.007607 (0.004765) | 0.539926 / 0.226044 (0.313882) | 5.438326 / 2.268929 (3.169397) | 2.498581 / 55.444624 (-52.946043) | 2.153359 / 6.876477 (-4.723117) | 2.177891 / 2.142072 (0.035819) | 0.803169 / 4.805227 (-4.002059) | 0.151079 / 6.500664 (-6.349585) | 0.065981 / 0.075469 (-0.009489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336682 / 1.841788 (-0.505106) | 14.133055 / 8.074308 (6.058747) | 14.033972 / 10.191392 (3.842580) | 0.152109 / 0.680424 (-0.528315) | 0.016475 / 0.534201 (-0.517726) | 0.387808 / 0.579283 (-0.191475) | 0.378347 / 0.434364 (-0.056017) | 0.484732 / 0.540337 (-0.055606) | 0.569907 / 1.386936 (-0.817029) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1c4ec00511868bd881e84a6f7e0333648d833b8e \"CML watermark\")\n" ]
null
[]
Fix xnumpy_load for .npz files
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5714/timeline
PR: - #5626 implemented support for streaming `.npy` files by using `numpy.load`. However, it introduced a bug when used with `.npz` files within a context manager: ``` ValueError: seek of closed file ``` or in streaming mode: ``` ValueError: I/O operation on closed file. ``` This PR fixes the bug and adds tests for both `.npy` and `.npz` files. Fix #5711.
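A sketch of the fix, based on the replacement suggested in the linked issue (the exact code merged in this PR may differ):

```python
import numpy as np
from datasets.download.streaming_download_manager import xopen  # the library's streaming-aware open()

def xnumpy_load(filepath_or_buffer, *args, use_auth_token=None, **kwargs):
    # Do not close the handle here: for .npz files, np.load returns a lazy
    # NpzFile that reads members on access, so a closed file later raises
    # "seek of closed file" / "I/O operation on closed file".
    return np.load(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), *args, **kwargs)
```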
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/5714.diff", "html_url": "https://github.com/huggingface/datasets/pull/5714", "merged_at": "2023-04-07T09:16:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5714.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5714" }
1,657,388,033
https://api.github.com/repos/huggingface/datasets/issues/5714/comments
PR_kwDODunzps5NxIOc
null
5,714
https://api.github.com/repos/huggingface/datasets/issues/5714/events
true
closed
2023-04-06T10:27:22Z
null
https://api.github.com/repos/huggingface/datasets/issues/5713
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5713/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5713/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
https://github.com/huggingface/datasets/issues/5713
[]
false
2023-04-06T13:06:22Z
2023-04-06T13:06:21Z
null
[ "Hi Julien ! This sounds related to https://github.com/huggingface/datasets/issues/5695 - TL;DR: you need to have shards smaller than 2GB to avoid this issue\r\n\r\nThe number of rows per shard is computed using an estimated size of the full dataset, which can sometimes lead to shards bigger than `max_shard_size`. The estimation is currently done using the first samples of the dataset (which can surely be improved). We should probably open an issue to fix this once and for all.\r\n\r\nAnyway for your specific dataset I'd suggest you to pass `num_shards` instead of `max_shard_size` for now, and make sure to have enough shards to end up with shards smaller than 2GB", "Hi Quentin! Thanks a lot! Using `num_shards` instead of `max_shard_size` works as expected.\r\n\r\nIndeed the way you describe how the size is computed cannot really work with the dataset I'm building as all the image doesn't have the same resolution and then size. Opening an issue on this might be a good idea." ]
completed
[]
ArrowNotImplementedError when loading dataset from the hub
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5713/timeline
### Describe the bug Hello, I have created a dataset by using the image loader. Once the dataset is created, I try to download it and get the error: ``` Traceback (most recent call last): File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single for _, table in generator: File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables for batch_idx, record_batch in enumerate( File "pyarrow/_parquet.pyx", line 1323, in iter_batches File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug Create the dataset and push it to the hub: ```python from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="/path/to/dataset") dataset.push_to_hub("org/dataset-name", private=True, max_shard_size="1GB") ``` Then use it: ```python from datasets import load_dataset dataset = load_dataset("org/dataset-name") ``` ### Expected behavior To properly download and use the pushed dataset. Something else to note: I specified shards of 1GB max, but in the end a single file of almost 7GB is pushed for the train set. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
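A hedged sketch of the workaround suggested in the comments above (the shard count is illustrative; the point is to keep every shard under 2GB):

```python
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
# Pass num_shards instead of max_shard_size, since shard sizes are otherwise
# estimated from the first samples and can undershoot for large images
dataset.push_to_hub("org/dataset-name", private=True, num_shards={"train": 8})
```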
https://api.github.com/repos/huggingface/datasets
null
1,657,141,251
https://api.github.com/repos/huggingface/datasets/issues/5713/comments
I_kwDODunzps5ixfgD
null
5,713
https://api.github.com/repos/huggingface/datasets/issues/5713/events
false
closed
2023-04-05T16:47:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/5712
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5712/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5712/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rcasero", "id": 1219084, "login": "rcasero", "node_id": "MDQ6VXNlcjEyMTkwODQ=", "organizations_url": "https://api.github.com/users/rcasero/orgs", "received_events_url": "https://api.github.com/users/rcasero/received_events", "repos_url": "https://api.github.com/users/rcasero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "type": "User", "url": "https://api.github.com/users/rcasero" }
https://github.com/huggingface/datasets/issues/5712
[]
false
2023-04-06T08:32:37Z
2023-04-05T17:17:44Z
null
[ "Closing since this is a duplicate of #5711", "> Closing since this is a duplicate of #5711\r\n\r\nSorry @mariosasko , my internet went down went submitting the issue, and somehow it ended up creating a duplicate" ]
completed
[]
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
NONE
https://api.github.com/repos/huggingface/datasets/issues/5712/timeline
### Describe the bug Hi, I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1. ```python ds = datasets.load_dataset(path=dataset_dir, name=configuration, data_dir=dataset_dir, cache_dir=cache_dir, aux_dir=aux_dir, # download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, num_proc=18) ``` When upgrading datasets to 2.11.0, it fails with this error: ``` Traceback (most recent call last): File "<string>", line 2, in <module> File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare super()._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators self.some_function() File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function() x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()}) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__ bytes = self.zip.open(key) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open fheader = zef_file.read(sizeFileHeader) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read self._file.seek(self._pos) ValueError: seek of closed file ``` ### Steps to reproduce the bug Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()` ```python with np.load(filename) as fp: x_df = pd.DataFrame({'feature': fp['x'].tolist()}) ``` I'll try to generate a short snippet that reproduces the error. ### Expected behavior I would expect `load_dataset` to work with the custom dataset generation script on v2.11.0 the same way it works on 2.10.1, without making `np.load()` raise a `ValueError: seek of closed file` error. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.12.0 - PyArrow version: 11.0.0 - Pandas version: 1.5.2 - numpy: 1.24.2 - This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
https://api.github.com/repos/huggingface/datasets
null
1,655,972,106
https://api.github.com/repos/huggingface/datasets/issues/5712/comments
I_kwDODunzps5itCEK
null
5,712
https://api.github.com/repos/huggingface/datasets/issues/5712/events
false
closed
2023-04-05T16:46:49Z
null
https://api.github.com/repos/huggingface/datasets/issues/5711
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5711/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5711/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rcasero", "id": 1219084, "login": "rcasero", "node_id": "MDQ6VXNlcjEyMTkwODQ=", "organizations_url": "https://api.github.com/users/rcasero/orgs", "received_events_url": "https://api.github.com/users/rcasero/received_events", "repos_url": "https://api.github.com/users/rcasero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "type": "User", "url": "https://api.github.com/users/rcasero" }
https://github.com/huggingface/datasets/issues/5711
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-04-07T09:16:59Z
2023-04-07T09:16:59Z
null
[ "It seems like https://github.com/huggingface/datasets/pull/5626 has introduced this error. \r\n\r\ncc @albertvillanova \r\n\r\nI think replacing:\r\nhttps://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/src/datasets/download/streaming_download_manager.py#L777-L778\r\nwith:\r\n```python\r\nreturn np.load(xopen(filepath_or_buffer, \"rb\", use_auth_token=use_auth_token), *args, **kwargs)\r\n```\r\nshould fix the issue.\r\n\r\n(Maybe this is also worth doing a patch release afterward)", "Thanks for reporting, @rcasero.\r\n\r\nI can have a look..." ]
completed
[]
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
NONE
https://api.github.com/repos/huggingface/datasets/issues/5711/timeline
### Describe the bug Hi, I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1. ```python ds = datasets.load_dataset(path=dataset_dir, name=configuration, data_dir=dataset_dir, cache_dir=cache_dir, aux_dir=aux_dir, # download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, num_proc=18) ``` When upgrading datasets to 2.11.0, it fails with this error: ``` Traceback (most recent call last): File "<string>", line 2, in <module> File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare super()._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators self.some_function() File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function() x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()}) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__ bytes = self.zip.open(key) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open fheader = zef_file.read(sizeFileHeader) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read self._file.seek(self._pos) ValueError: seek of closed file ``` ### Steps to reproduce the bug Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()` ```python with np.load(embedding_filename) as fp: x_df = pd.DataFrame({'feature': fp['x'].tolist()}) ``` I'll try to generate a short snippet that reproduces the error. ### Expected behavior I would expect `load_dataset` to work with the custom dataset generation script on v2.11.0 the same way it works on 2.10.1, without making `np.load()` raise a `ValueError: seek of closed file` error. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.12.0 - PyArrow version: 11.0.0 - Pandas version: 1.5.2 - numpy: 1.24.2 - This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
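For what it's worth, a self-contained sketch that should reproduce the failure described in the comment above by calling the library's patched loader directly (the file name and array contents are illustrative):

```python
import numpy as np
from datasets.download.streaming_download_manager import xnumpy_load

np.savez("features.npz", x=np.zeros((4, 8)))

fp = xnumpy_load("features.npz")  # .npz members are read lazily, after the wrapper has closed the file
x = fp["x"]                       # raises ValueError: seek of closed file before the fix in #5714
```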
https://api.github.com/repos/huggingface/datasets
null
1,655,971,647
https://api.github.com/repos/huggingface/datasets/issues/5711/comments
I_kwDODunzps5itB8_
null
5,711
https://api.github.com/repos/huggingface/datasets/issues/5711/events
false
closed
2023-04-05T14:11:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/5710
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5710/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5710/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4", "events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}", "followers_url": "https://api.github.com/users/Saibo-creator/followers", "following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Saibo-creator", "id": 53392976, "login": "Saibo-creator", "node_id": "MDQ6VXNlcjUzMzkyOTc2", "organizations_url": "https://api.github.com/users/Saibo-creator/orgs", "received_events_url": "https://api.github.com/users/Saibo-creator/received_events", "repos_url": "https://api.github.com/users/Saibo-creator/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions", "type": "User", "url": "https://api.github.com/users/Saibo-creator" }
https://github.com/huggingface/datasets/issues/5710
[]
false
2023-04-20T17:16:40Z
2023-04-20T17:16:40Z
null
[ "Hi! This error means that PyArrow's internal [`mmap`](https://man7.org/linux/man-pages/man2/mmap.2.html) call failed to allocate memory, which can be tricky to debug. Since this error is more related to PyArrow than us, I think it's best to report this issue in their [repo](https://github.com/apache/arrow) (they are more experienced on this matter). Also, googling \"mmap cannot allocate memory\" returns some approaches to solving this problem." ]
completed
[]
OSError: Memory mapping file failed: Cannot allocate memory
NONE
https://api.github.com/repos/huggingface/datasets/issues/5710/timeline
### Describe the bug Hello, I have a series of datasets, each of 5 GB, 600 datasets in total, so together this makes 3 TB. When I try to load all 600 datasets into memory, I get the error message below. Is this expected because I'm hitting the OS limit on memory mappings? Thank you ```terminal 0_21/cache-e9c42499f65b1881.arrow load_hf_datasets_from_disk: 82%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 494/600 [07:26<00:11, 3.65 examples/s] Traceback (most recent call last): File "/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py", line 155, in <module> create_dataset(args) ``` (progress bar reproduced as reported) ``` Traceback (most recent call last): File "example_load_genkalm_dataset.py", line 35, in <module> multi_ds.post_process(max_node_num=args.max_node_num,max_seq_length=args.max_seq_length,delay=args.delay) File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 142, in post_process genkalm_dataset = GenKaLM_Dataset.from_hf_dataset(path_or_name=ds_path, max_seq_length=self.max_seq_length, File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 47, in from_hf_dataset hf_ds = load_from_disk(path_or_name) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/load.py", line 1848, in load_from_disk return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1549, in load_from_disk arrow_table = concat_tables( File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1805, in concat_tables tables = list(tables) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1550, in <genexpr> table_cls.from_file(Path(dataset_path, data_file["filename"]).as_posix()) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1065, in from_file table = _memory_mapped_arrow_table_from_file(filename) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 50, in _memory_mapped_arrow_table_from_file memory_mapped_stream = pa.memory_map(filename) File "pyarrow/io.pxi", line 950, in pyarrow.lib.memory_map File "pyarrow/io.pxi", line 911, in pyarrow.lib.MemoryMappedFile._open File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: Memory mapping file failed: Cannot allocate memory ``` ### Steps to reproduce the bug Sorry, I cannot provide reproducible code, as the data is stored on my server and is too large to share. ### Expected behavior I expect the 3 TB of data to be fully memory-mapped. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-4.15.0-204-generic-x86_64-with-debian-buster-sid - Python version: 3.7.6 - PyArrow version: 11.0.0 - Pandas version: 1.0.1
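One commonly cited cause (an assumption here, not confirmed in this thread) is the kernel's per-process limit on memory mappings rather than available RAM; it can be inspected from Python:

```python
# Linux only: every memory-mapped Arrow file consumes map entries, so ~600
# large datasets can exhaust the default vm.max_map_count (typically 65530);
# raising that limit as root may help
with open("/proc/sys/vm/max_map_count") as f:
    print(int(f.read()))
```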
https://api.github.com/repos/huggingface/datasets
null
1,655,703,534
https://api.github.com/repos/huggingface/datasets/issues/5710/comments
I_kwDODunzps5isAfu
null
5,710
https://api.github.com/repos/huggingface/datasets/issues/5710/events
false
closed
2023-04-05T11:15:17Z
null
https://api.github.com/repos/huggingface/datasets/issues/5709
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5709/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5709/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jplu", "id": 959590, "login": "jplu", "node_id": "MDQ6VXNlcjk1OTU5MA==", "organizations_url": "https://api.github.com/users/jplu/orgs", "received_events_url": "https://api.github.com/users/jplu/received_events", "repos_url": "https://api.github.com/users/jplu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "type": "User", "url": "https://api.github.com/users/jplu" }
https://github.com/huggingface/datasets/issues/5709
[]
false
2023-04-06T08:52:20Z
2023-04-06T08:52:19Z
null
[ "hi @jplu ! Did I understand you correctly that you create the dataset, push it to the Hub with `.push_to_hub` and you see a `dataset_infos.json` file there, then you edit this file, load the dataset with `load_dataset` and you don't see any changes in `.info` attribute of a dataset object? \r\n\r\nThis is actually weird that when you push your dataset to the Hub, a `dataset_infos.json` file is created, because this file is deprecated and it should create `README.md` with the `dataset_info` field instead. Some keys are also deprecated, like \"supervised_keys\" and \"task_templates\".\r\n\r\nCan you please provide a toy reproducible example of how you create and push the dataset? And also why do you want to change this file, especially the number of bytes and examples?", "Hi @polinaeterna Yes I have created the dataset with `Dataset.from_dict` applied some updates afterward and when I pushed to the hub I had a `dataset_infos.json` file and there was a `README.md` file as well.\r\n\r\nI didn't know that the JSON file was deprecated. So I have built my dataset with `ImageBuilder` instead and now it works like a charm without having to touch anything.\r\n\r\nI haven't succeed to reproduce the creation of the JSON file with a toy example, hence, I certainly did some mistakes when I have manipulated my dataset manually at first. My bad." ]
completed
[]
Manually made dataset info not taken into account
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/5709/timeline
### Describe the bug Hello, I'm manually building an image dataset with the `from_dict` approach. I also build the features with the `cast_features` method. Once the dataset is created, I push it to the Hub, and a default `dataset_infos.json` file seems to have been automatically added to the repo at the same time. Hence I update it manually with all the missing info, but when I download the dataset the info is never updated. Former `dataset_infos.json` file: ``` {"default": { "description": "", "citation": "", "homepage": "", "license": "", "features": { "image": { "_type": "Image" }, "labels": { "names": [ "Fake", "Real" ], "_type": "ClassLabel" } }, "splits": { "validation": { "name": "validation", "num_bytes": 901010094.0, "num_examples": 3200, "dataset_name": null }, "train": { "name": "train", "num_bytes": 901010094.0, "num_examples": 3200, "dataset_name": null } }, "download_size": 1802008414, "dataset_size": 1802020188.0, "size_in_bytes": 3604028602.0 }} ``` After I update it manually, it looks like: ``` { "bstrai--deepfake-detection":{ "description":"", "citation":"", "homepage":"", "license":"", "features":{ "image":{ "decode":true, "id":null, "_type":"Image" }, "labels":{ "num_classes":2, "names":[ "Fake", "Real" ], "id":null, "_type":"ClassLabel" } }, "supervised_keys":{ "input":"image", "output":"labels" }, "task_templates":[ { "task":"image-classification", "image_column":"image", "label_column":"labels" } ], "config_name":null, "splits":{ "validation":{ "name":"validation", "num_bytes":36627822, "num_examples":123, "dataset_name":"deepfake-detection" }, "train":{ "name":"train", "num_bytes":901023694, "num_examples":3200, "dataset_name":"deepfake-detection" } }, "download_checksums":null, "download_size":937562209, "dataset_size":937651516, "size_in_bytes":1875213725 } } ``` Is there anything I should do to have the new info in `dataset_infos.json` taken into account? Or is it not possible yet? Thanks! ### Steps to reproduce the bug - ### Expected behavior - ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
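As the comments above note, `dataset_infos.json` is deprecated in favor of a `dataset_info` field in `README.md`, which `push_to_hub` fills in automatically. Below is a hedged sketch of that flow; the image paths and repo id are illustrative placeholders, not values from the issue.

```python
# Sketch: let push_to_hub compute the metadata (num_bytes, num_examples, ...)
# itself instead of editing dataset_infos.json by hand.
from datasets import ClassLabel, Dataset, Features, Image

features = Features(
    {
        "image": Image(),
        "labels": ClassLabel(names=["Fake", "Real"]),
    }
)

# Paths and repo id are placeholders; casting string paths to Image()
# makes `datasets` decode the files on access.
ds = Dataset.from_dict(
    {"image": ["img0.png", "img1.png"], "labels": [0, 1]}
).cast(features)

# Pushes the data plus a README.md whose dataset_info block holds the
# recomputed sizes, keeping the Hub display consistent with the data.
ds.push_to_hub("my-org/deepfake-detection")
```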
https://api.github.com/repos/huggingface/datasets
null
1,655,423,503
https://api.github.com/repos/huggingface/datasets/issues/5709/comments
I_kwDODunzps5iq8IP
null
5,709
https://api.github.com/repos/huggingface/datasets/issues/5709/events
false
closed
2023-04-05T06:36:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/5708
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5708/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5708/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/issues/5708
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2023-12-21T10:20:28Z
2023-12-21T10:20:27Z
null
[ "Example of bulk edit: https://huggingface.co/datasets/aeslc/discussions/5", "looks great! \r\n\r\nDo you encode the fact that you've already converted a dataset? (to not convert it twice) or do you base yourself on the info contained in `dataset_info`", "I am only looping trough the dataset cards, assuming that all of them were created with MiB.\r\n\r\nI agree we should only run the bulk edit once for all canonical datasets: I'm using a for-loop over canonical datasets.", "yes, worst case, we have this in structured data:\r\n\r\n<img width=\"337\" alt=\"image\" src=\"https://user-images.githubusercontent.com/326577/230037051-06caddcb-08c8-4953-a710-f3d122917db3.png\">\r\n", "I have just included as well the conversion from MB to GB if necessary. See: \r\n- https://huggingface.co/datasets/bookcorpus/discussions/2/files\r\n- https://huggingface.co/datasets/asnq/discussions/2/files", "Nice. Is it another loop? Because in https://huggingface.co/datasets/amazon_us_reviews/discussions/2/files we have `32377.29 MB` for example", "First, I tested some batches to check the changes made. Then I incorporated the MB to GB conversion. Now I'm running the rest.", "The bulk edit parsed 751 canonical datasets and updated 166.", "Thanks a lot!\r\n\r\nThe sizes now match as expected!\r\n\r\n<img width=\"1446\" alt=\"Capture d’écran 2023-04-05 aΜ€ 16 10 15\" src=\"https://user-images.githubusercontent.com/1676121/230107044-ac2a76ea-a4fe-4e81-a925-f464b85f5edd.png\">\r\n", "I made another bulk edit of ancient canonical datasets that were moved to community organization. I have parsed 11 datasets and opened a PR on 3 of them:\r\n- [x] \"allenai/scicite\": https://huggingface.co/datasets/allenai/scicite/discussions/3\r\n- [x] \"allenai/scifact\": https://huggingface.co/datasets/allenai/scifact/discussions/2\r\n- [x] \"dair-ai/emotion\": https://huggingface.co/datasets/dair-ai/emotion/discussions/6", "should we force merge the PR and close this issue?", "I merged the PRs for \"scicite\" and \"scifact\"." ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
Dataset sizes are in MiB instead of MB in dataset cards
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/5708/timeline
As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929): Now we show the dataset size: - from the dataset card (in the side column) - from the datasets-server (in the viewer) But, even if the size is the same, we see a mismatch because the viewer shows MB, while the info from the README generally shows MiB (even if it's written MB -> https://huggingface.co/datasets/blimp/blob/main/README.md?code=true#L1932) <img width="664" alt="Capture d’écran 2023-04-04 aΜ€ 10 16 01" src="https://user-images.githubusercontent.com/1676121/229730887-0bd8fa6e-9462-46c6-bd4e-4d2c5784cabb.png"> TODO: Values to be fixed in: `Size of downloaded dataset files:`, `Size of the generated dataset:` and `Total amount of disk used:` - [x] Bulk edit on the Hub to fix this in all canonical datasets - [x] Bulk PR on the Hub to fix ancient canonical datasets that were moved to organizations
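To make the unit mismatch concrete, here is the arithmetic involved; the byte count below is illustrative, not taken from a specific card.

```python
# The same byte count rendered in binary (MiB) vs decimal (MB) units,
# which is the root of the mismatch described above.
size_in_bytes = 1_802_020_188  # illustrative value

mib = size_in_bytes / 1024**2  # ~1718.5, what the cards computed
mb = size_in_bytes / 1000**2   # ~1802.0, what the viewer shows
print(f"{mib:.2f} MiB vs {mb:.2f} MB")
```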
https://api.github.com/repos/huggingface/datasets
null
1,655,023,642
https://api.github.com/repos/huggingface/datasets/issues/5708/comments
I_kwDODunzps5ipaga
null
5,708
https://api.github.com/repos/huggingface/datasets/issues/5708/events
false
open
2023-04-04T09:45:35Z
null
https://api.github.com/repos/huggingface/datasets/issues/5706
{ "avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4", "events_url": "https://api.github.com/users/mhattingpete/events{/privacy}", "followers_url": "https://api.github.com/users/mhattingpete/followers", "following_url": "https://api.github.com/users/mhattingpete/following{/other_user}", "gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mhattingpete", "id": 22622299, "login": "mhattingpete", "node_id": "MDQ6VXNlcjIyNjIyMjk5", "organizations_url": "https://api.github.com/users/mhattingpete/orgs", "received_events_url": "https://api.github.com/users/mhattingpete/received_events", "repos_url": "https://api.github.com/users/mhattingpete/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions", "type": "User", "url": "https://api.github.com/users/mhattingpete" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5706/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/5706/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4", "events_url": "https://api.github.com/users/kklemon/events{/privacy}", "followers_url": "https://api.github.com/users/kklemon/followers", "following_url": "https://api.github.com/users/kklemon/following{/other_user}", "gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kklemon", "id": 1430243, "login": "kklemon", "node_id": "MDQ6VXNlcjE0MzAyNDM=", "organizations_url": "https://api.github.com/users/kklemon/orgs", "received_events_url": "https://api.github.com/users/kklemon/received_events", "repos_url": "https://api.github.com/users/kklemon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kklemon/subscriptions", "type": "User", "url": "https://api.github.com/users/kklemon" }
https://github.com/huggingface/datasets/issues/5706
[ { "avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4", "events_url": "https://api.github.com/users/mhattingpete/events{/privacy}", "followers_url": "https://api.github.com/users/mhattingpete/followers", "following_url": "https://api.github.com/users/mhattingpete/following{/other_user}", "gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mhattingpete", "id": 22622299, "login": "mhattingpete", "node_id": "MDQ6VXNlcjIyNjIyMjk5", "organizations_url": "https://api.github.com/users/mhattingpete/orgs", "received_events_url": "https://api.github.com/users/mhattingpete/received_events", "repos_url": "https://api.github.com/users/mhattingpete/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions", "type": "User", "url": "https://api.github.com/users/mhattingpete" } ]
false
2023-09-22T16:53:37Z
null
null
[ "Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:\r\n- the conversion from HF type to PyArrow type is done in `get_nested_type`\r\n- the conversion from Pyarrow type to HF type is done in `generate_from_arrow_type`\r\n- `encode_nested_example` and `decode_nested_example` are used to do user's value (what users see) <-> storage value (what is in the pyarrow.array) if there's any conversion to do", "@kklemon did you implement this? Otherwise I would like to give it a try", "@mhattingpete no, I hadn't time for this so far. Feel free to work on this.", "#self-assign", "This would be super useful, so +1. \r\n\r\nAlso, these prior issues/PRs seem relevant: \r\nhttps://github.com/huggingface/datasets/issues/1906\r\nhttps://github.com/huggingface/datasets/pull/1936", "Hi, this is a really useful feature, has this been implemented yet? ", "Hey folks -- I'm thinking about trying a PR for this. As far as I can tell the only sticky point is that auto-generation of features from a pyarrow schema will fail under the current `generate_from_arrow_type` function because there is no encoding of the categorical string label -> int map in the pa.dictionary type itself; that is stored with the full array. \r\n\r\nI see two ways to solve this. Option 1 is to require datasets with categorical types to use pyarrow schema metadata to encode the entire HF feature dictionary, that way categorical types don't ever need to be inferred from the pa type alone. The downside to this is that it means that these datasets will be a bit brittle, as if the feature encoding API ever changes, they will suddenly be unloadable. \r\n\r\nThe other option is to modify `generate_from_arrow_type` to take per-field metadata, and include just that metadata (the category labels) in the schema metadata. \r\n\r\nDoes anyone at HF have any preference on these two (or alternate) approaches?", "Maybe we don't need to store the string label -> int map in the categorical for the corresponding `datasets` feature ?", "I think that does need to be stored in the Feature object. Similar to how\r\n`ClassLabel` needs the class names for some of the provided default\r\nfunctionality (e.g., encoding or decoding values) here, a categorical\r\nfeature needs the same. Without storing that information, would you suggest\r\nthat categorical features just be stored internally as integer arrays?\r\n\r\nOn Fri, Sep 8, 2023, 5:37β€―AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> Maybe we don't need to store the string label -> int map in the\r\n> categorical for the corresponding datasets feature ?\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1711375652>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5XZV3RA4GBRVBLJN72LXZLROZANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Well IIRC you can concatenate two Arrow arrays with different dictionaries together. But for `datasets` would mean updating the `datasets` features when concatenating two arrays of the same type, which is not supported right now. 
That's why if there is a way to have it without storing the mapping in the feature object it would be nice.\r\n\r\nFor decoding we do have the string<->integer mapping from the array `dictionary` attribute so we're fine. For encoding I think it can work if we only encode when converting python objects to pyarrow in `TypedSequence.__arrow_array__` in `arow_writer.py`. It can work by converting the python objects to a pyarrow array and then use the `dictionary_encode` method.\r\n\r\nAnother concern about concatenation: I noticed **pyarrow creates the new dictionary and indices in memory** when concatenating two dictionary encoded arrays. This can be a problem for big datastets, and we should probably use ChunkedArray objects instead. This can surely be taken care of in `array_concat` in `table.py`\r\n\r\ncc @mariosasko in case you have other ideas\r\n\r\n", "Hmm, that is a good point. What if we implemented this feature first in a\r\nmanner that didn't allow concatenation of arrays with different index to\r\ncategory maps? Then concatenation would be very straightforward, and I\r\nthink this is reasonable if the index to category map is stored in the\r\nschema as well. Obviously, this is limited in how folks could use the\r\nfeature, but they can always fall back to raw strings if needed, and as\r\nusage increases we'll have more data to see what the right solution here\r\nis.\r\n\r\nOn Fri, Sep 8, 2023, 6:49β€―AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> Well IIRC you can concatenate two Arrow arrays with different dictionaries\r\n> together. But for datasets would mean updating the datasets features when\r\n> concatenating two arrays of the same type, which is not supported right\r\n> now. That's why if there is a way to have it without storing the mapping in\r\n> the feature object it would be nice.\r\n>\r\n> For decoding we do have the string<->integer mapping from the array\r\n> dictionary attribute so we're fine. For encoding I think it can work if\r\n> we only encode when converting python objects to pyarrow in\r\n> TypedSequence.__arrow_array__ in arow_writer.py. It can work by\r\n> converting the python objects to a pyarrow array and then use the\r\n> dictionary_encode method.\r\n>\r\n> Another concern about concatenation: I noticed *pyarrow creates the new\r\n> dictionary and indices in memory* when concatenating two dictionary\r\n> encoded arrays. This can be a problem for big datastets, and we should\r\n> probably use ChunkedArray objects instead. This can surely be taken care of\r\n> in array_concat in table.py\r\n>\r\n> cc @mariosasko <https://github.com/mariosasko> in case you have other\r\n> ideas\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1711468806>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5X4E2KC2IXLDPYR3XZLXZLZ2FANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n", "@lhoestq @mariosasko just re-pinging on this so I can push forward further here. What are your thoughts on disallowing concatenation of categorical arrays for now such that the index to category map can be stored in the schema metadata? 
And/or other approaches that should be taken here?\r\n", "I think the easiest for now would be to add a `dictionary_decode` argument to the parquet loaders that would convert the dictionary type back to strings when set to `True`, and make `dictionary_decode=False` raise `NotImplementedError` for now if there are dictionary type columns. Would that be ok as a first step ?", "I mean, that would certainly be easiest but I don't think it really solves this issue in a meaningful way. This just changes the burden from string conversion from the user to HF Datasets, but doesn't actually enable HF Datasets to take advantage of the (very significant) storage and associated speed/memory savings offered by using categorical types. Given that those savings are what is of real interest here, I think keeping it explicit that it is not supported (and forcing the user to do the conversion) might actually be better that way this problem stays top of mind.\r\n\r\nIs there an objection with supporting categorical types explicitly through the medium I outlined above, where the error is raised if you try to concat two differently typed categorical columns?", "> This just changes the burden from string conversion from the user to HF Datasets, but doesn't actually enable HF Datasets to take advantage of the (very significant) storage and associated speed/memory savings offered by using categorical types.\r\n\r\nThere's already a ClassLabel type that does pretty much the same thing (store as integer instead of string) if it can help\r\n\r\n> Is there an objection with supporting categorical types explicitly through the medium I outlined above, where the error is raised if you try to concat two differently typed categorical columns?\r\n\r\nYea we do concatenation quite often (e.g. in `map`) so I don't think this is a viable option", "But how often in the cases where concatenation is done now would the\r\ncategorical label vocabulary actually change? I think it would be in\r\nbasically none of them. And in such cases, concatenation remains very easy,\r\nno?\r\n\r\nOn Fri, Sep 22, 2023, 12:02β€―PM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> This just changes the burden from string conversion from the user to HF\r\n> Datasets, but doesn't actually enable HF Datasets to take advantage of the\r\n> (very significant) storage and associated speed/memory savings offered by\r\n> using categorical types.\r\n>\r\n> There's already a ClassLabel type that does pretty much the same thing\r\n> (store as integer instead of string) if it can help\r\n>\r\n> Is there an objection with supporting categorical types explicitly through\r\n> the medium I outlined above, where the error is raised if you try to concat\r\n> two differently typed categorical columns?\r\n>\r\n> Yea we do concatenation quite often (e.g. 
in map) so I don't think this\r\n> is a viable option\r\n>\r\n> β€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1731667012>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5X5CGWFXDCML6UKCWYLX3WZBXANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n", "Arrow IPC seems to require unified dictionaries anyway so actually we could surely focus only on this use case indeed @mmcdermott \r\n\r\nSo defining a new Feature type in `datasets` that contains the dictionary mapping should be fine (and concatenation would work out of the box), and it should also take care of checking that the data it encodes/decodes has the right dictionary. Do you think it can be done without impacting iterating speed for the other types @mariosasko ?\r\n\r\nRight now we have little bandwidth to work in this kind of things though" ]
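To make the pointers from the first comment in the thread above concrete, here is a rough, purely hypothetical sketch of what such a feature could look like. The class name and method shapes are invented for illustration; this is not the library's API.

```python
# Hypothetical sketch of a categorical feature following the pointers above.
from dataclasses import dataclass
from typing import ClassVar, List

import pyarrow as pa

@dataclass
class Categorical:
    names: List[str]
    _type: ClassVar[str] = "Categorical"

    def __call__(self) -> pa.DataType:
        # Storage type that get_nested_type would return:
        # dictionary-encoded strings.
        return pa.dictionary(pa.int32(), pa.string())

    def encode_example(self, value: str) -> int:
        # What encode_nested_example would do: user value -> storage index.
        return self.names.index(value)

    def decode_example(self, index: int) -> str:
        # Storage index -> user value.
        return self.names[index]

feat = Categorical(names=["foo", "bar"])
assert feat.decode_example(feat.encode_example("bar")) == "bar"
```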
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Support categorical data types for Parquet
NONE
https://api.github.com/repos/huggingface/datasets/issues/5706/timeline
### Feature request Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parquet file with categorical columns: ```python import pandas as pd import pyarrow.parquet as pq from datasets import load_dataset # Create categorical sample DataFrame df = pd.DataFrame({'type': ['foo', 'bar']}).astype('category') df.to_parquet('data.parquet') # Read back as pyarrow table table = pq.read_table('data.parquet') print(table.schema) # type: dictionary<values=string, indices=int32, ordered=0> # Load with huggingface datasets load_dataset('parquet', data_files='data.parquet') ``` Error: ``` Traceback (most recent call last): File ".venv/lib/python3.10/site-packages/datasets/builder.py", line 1875, in _prepare_split_single writer.write_table(table) File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 566, in write_table self._build_writer(inferred_schema=pa_table.schema) File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 379, in _build_writer inferred_features = Features.from_arrow_schema(inferred_schema) File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in from_arrow_schema obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in <dictcomp> obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1361, in generate_from_arrow_type raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table NotImplementedError ``` ### Motivation Categorical data types, as offered by Pandas and implemented with the `DictionaryType` dtype in `pyarrow` can significantly reduce dataset size and are a handy way to turn textual features into numerical representations and back. Lack of support in Huggingface datasets greatly reduces compatibility with a common Pandas / Parquet feature. ### Your contribution I could provide a PR. However, it would be nice to have an initial complexity estimate from one of the core developers first.
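Until dictionary types are supported natively, one possible workaround (a sketch, not an official API) is to decode dictionary columns back to their value type with PyArrow before handing the file to `datasets`; the file names are taken from the reproduction above.

```python
# Workaround sketch: cast dictionary-encoded columns back to plain values
# so load_dataset no longer hits the NotImplementedError above. This gives
# up the dictionary-encoding storage savings.
import pyarrow as pa
import pyarrow.parquet as pq
from datasets import load_dataset

table = pq.read_table("data.parquet")

# Rebuild the schema with every dictionary column decoded to its value type
# (e.g. dictionary<values=string, indices=int32> -> string).
plain_schema = pa.schema(
    [
        pa.field(f.name, f.type.value_type) if pa.types.is_dictionary(f.type) else f
        for f in table.schema
    ]
)
pq.write_table(table.cast(plain_schema), "data_plain.parquet")

# The decoded file loads without the dictionary-type limitation.
load_dataset("parquet", data_files="data_plain.parquet")
```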
https://api.github.com/repos/huggingface/datasets
null
1,653,545,835
https://api.github.com/repos/huggingface/datasets/issues/5706/comments
I_kwDODunzps5ijxtr
null
5,706
https://api.github.com/repos/huggingface/datasets/issues/5706/events
false