| column | type |
|---|---|
| url | string |
| repository_url | string |
| labels_url | string |
| comments_url | string |
| events_url | string |
| html_url | string |
| id | int64 |
| node_id | string |
| number | int64 |
| title | string |
| user | dict |
| labels | list |
| state | string |
| locked | bool |
| assignee | dict |
| assignees | list |
| milestone | dict |
| comments | list |
| created_at | timestamp[ns, tz=UTC] |
| updated_at | timestamp[ns, tz=UTC] |
| closed_at | timestamp[ns, tz=UTC] |
| author_association | string |
| type | float64 |
| active_lock_reason | float64 |
| sub_issues_summary | dict |
| body | string |
| closed_by | dict |
| reactions | dict |
| timeline_url | string |
| performed_via_github_app | float64 |
| state_reason | string |
| draft | float64 |
| pull_request | dict |
https://api.github.com/repos/huggingface/datasets/issues/7164
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7164/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7164/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7164/events
|
https://github.com/huggingface/datasets/issues/7164
| 2,544,757,297
|
I_kwDODunzps6Xreox
| 7,164
|
fsspec.exceptions.FSTimeoutError when downloading dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38216460?v=4",
"events_url": "https://api.github.com/users/timonmerk/events{/privacy}",
"followers_url": "https://api.github.com/users/timonmerk/followers",
"following_url": "https://api.github.com/users/timonmerk/following{/other_user}",
"gists_url": "https://api.github.com/users/timonmerk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timonmerk",
"id": 38216460,
"login": "timonmerk",
"node_id": "MDQ6VXNlcjM4MjE2NDYw",
"organizations_url": "https://api.github.com/users/timonmerk/orgs",
"received_events_url": "https://api.github.com/users/timonmerk/received_events",
"repos_url": "https://api.github.com/users/timonmerk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timonmerk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timonmerk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timonmerk",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! If you check the dataset loading script [here](https://huggingface.co/datasets/openslr/librispeech_asr/blob/main/librispeech_asr.py) you'll see that it downloads the data from OpenSLR, and apparently their storage has timeout issues. It would be great to ultimately host the dataset on Hugging Face instead.\r\n\r\nIn the meantime I can only recommend to try again later :/",
"Ok, still many thanks!",
"I'm also getting this same error but for `CSTR-Edinburgh/vctk`, so I don't think it's the remote host that's timing out, since I also time out at exactly 5 minutes. It seems there is a universal fsspec timeout that's getting hit starting in v3.",
"in v3 we cleaned the download parts of the library to make it more robust for HF downloads and to simplify support of script-based datasets. As a side effect it's not the same code that is used for other hosts, maybe time out handling changed. Anyway it should be possible to tweak fsspec to use retries\r\n\r\nFor example using [aiohttp_retry](https://github.com/inyutin/aiohttp_retry) maybe (haven't tried) ?\r\n\r\n```python\r\nimport fsspec\r\nfrom aiohttp_retry import RetryClient\r\n\r\nfsspec.filesystem(\"http\")._session = RetryClient()\r\n```\r\n\r\nrelated topic : https://github.com/huggingface/datasets/issues/7175",
"Adding a timeout argument to the `fs.get_file` call in `fsspec_get` in `datasets/utils/file_utils.py` might fix this ([source code](https://github.com/huggingface/datasets/blob/65f6eb54aa0e8bb44cea35deea28e0e8fecc25b9/src/datasets/utils/file_utils.py#L330)):\r\n\r\n```python\r\nfs.get_file(path, temp_file.name, callback=callback, timeout=3600)\r\n```\r\n\r\nSetting `timeout=1` fails after about one second, so setting it to 3600 should give us 1h. Havn't really tested this though. I'm also not sure what implications this has and if it causes errors for other `fs` implementations/configurations.\r\n\r\nThis is using `datasets==3.0.1` and Python 3.11.6.\r\n\r\n---\r\n\r\nEdit: This doesn't seem to change the timeout time, but add a second timeout counter (probably in `fsspec/asyn.py/sync`). So one can reduce the time for downloading like this, but not expand.\r\n\r\n---\r\n\r\nEdit 2: `fs` is of type `fsspec.implementations.http.HTTPFileSystem` which initializes a `aiohttp.ClientSession` using `client_kwargs`. We can pass these when calling `load_dataset`.\r\n\r\n**TLDR; This fixes it:**\r\n\r\n```python\r\nimport datasets, aiohttp\r\ndataset = datasets.load_dataset(\r\n dataset_name,\r\n storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=3600)}}\r\n)\r\n```",
"I've handled the issue like this to ensure smoother downloads when using the `datasets` library. \nIf modifying the library is not too inconvenient, this approach could be a good (but tentative) solution.\n\n### Changes Made\n\nModified `datasets.utils.file_utils.fsspec_get` to handle storage options and set a timeout:\n\n```python\ndef fsspec_get(url, temp_file, storage_options=None, desc=None, disable_tqdm=False):\n\n # ---> [ADD]\n if storage_options is None:\n storage_options = {}\n if \"client_kwargs\" not in storage_options:\n storage_options[\"client_kwargs\"] = {}\n storage_options[\"client_kwargs\"][\"timeout\"] = aiohttp.ClientTimeout(total=3600)\n # <---\n\n # The rest of the original code remains unchanged"
] | 2024-09-24T08:45:05Z
| 2025-04-09T22:25:56Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am trying to download the `librispeech_asr` `clean` dataset, which results in an `FSTimeoutError` exception after downloading around 61% of the data.
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("librispeech_asr", "clean")
```
The output is as follows:
> Downloading data:  61%|███████████████     | 3.92G/6.39G [05:00<03:06, 13.2MB/s]Traceback (most recent call last):
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 56, in _runner
> result[0] = await coro
> ^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/implementations/http.py", line 262, in _get_file
> chunk = await r.content.read(chunk_size)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 393, in read
> await self._wait("read")
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 311, in _wait
> with self._timer:
> ^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/helpers.py", line 713, in __exit__
> raise asyncio.TimeoutError from None
> TimeoutError
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/load_dataset.py", line 3, in <module>
> datasets.load_dataset("librispeech_asr", "clean")
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/load.py", line 2096, in load_dataset
> builder_instance.download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 924, in download_and_prepare
> self._download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1647, in _download_and_prepare
> super()._download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 977, in _download_and_prepare
> split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/2712a8f82f0d20807a56faadcd08734f9bdd24c850bb118ba21ff33ebff0432f/librispeech_asr.py", line 115, in _split_generators
> archive_path = dl_manager.download(_DL_URLS[self.config.name])
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 159, in download
> downloaded_path_or_paths = map_nested(
> ^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 512, in map_nested
> _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 380, in _single_map_nested
> return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
> ^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 216, in _download_batched
> self._download_single(url_or_filename, download_config=download_config)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 225, in _download_single
> out = cached_path(url_or_filename, download_config=download_config)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 205, in cached_path
> output_path = get_from_cache(
> ^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 415, in get_from_cache
> fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 334, in fsspec_get
> fs.get_file(path, temp_file.name, callback=callback)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 118, in wrapper
> return sync(self.loop, func, *args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 101, in sync
> raise FSTimeoutError from return_result
> fsspec.exceptions.FSTimeoutError
> Downloading data:  61%|███████████████     | 3.92G/6.39G [05:00<03:09, 13.0MB/s]
### Expected behavior
Complete the download
### Environment info
Python version 3.12.6
Dependencies:
> dependencies = [
> "accelerate>=0.34.2",
> "datasets[audio]>=3.0.0",
> "ipython>=8.18.1",
> "librosa>=0.10.2.post1",
> "torch>=2.4.1",
> "torchaudio>=2.4.1",
> "transformers>=4.44.2",
> ]
macOS 14.6.1 (23G93)
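As the comments above suggest, a workaround is to pass a larger `aiohttp` client timeout down through `storage_options` (a sketch based on the fix posted in this thread; the one-hour value is arbitrary):

```python
import aiohttp
import datasets

# Raise the aiohttp client timeout used by the underlying HTTPFileSystem,
# per the workaround discussed in the comments above.
dataset = datasets.load_dataset(
    "librispeech_asr",
    "clean",
    storage_options={"client_kwargs": {"timeout": aiohttp.ClientTimeout(total=3600)}},
)
```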
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7164/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7164/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6587
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6587/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6587/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6587/events
|
https://github.com/huggingface/datasets/pull/6587
| 2,080,348,016
|
PR_kwDODunzps5kAT_5
| 6,587
|
Allow concatenation of datasets with mixed structs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"friendly bump",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005403 / 0.011353 (-0.005950) | 0.003807 / 0.011008 (-0.007201) | 0.063850 / 0.038508 (0.025342) | 0.028242 / 0.023109 (0.005132) | 0.242866 / 0.275898 (-0.033032) | 0.266015 / 0.323480 (-0.057464) | 0.004111 / 0.007986 (-0.003875) | 0.002816 / 0.004328 (-0.001513) | 0.048862 / 0.004250 (0.044611) | 0.043036 / 0.037052 (0.005984) | 0.255149 / 0.258489 (-0.003340) | 0.280105 / 0.293841 (-0.013736) | 0.028182 / 0.128546 (-0.100365) | 0.010997 / 0.075646 (-0.064649) | 0.208131 / 0.419271 (-0.211141) | 0.036030 / 0.043533 (-0.007502) | 0.241551 / 0.255139 (-0.013588) | 0.260741 / 0.283200 (-0.022459) | 0.018045 / 0.141683 (-0.123638) | 1.175308 / 1.452155 (-0.276847) | 1.192160 / 1.492716 (-0.300556) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094579 / 0.018006 (0.076573) | 0.309850 / 0.000490 (0.309360) | 0.000232 / 0.000200 (0.000032) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019519 / 0.037411 (-0.017892) | 0.062201 / 0.014526 (0.047675) | 0.074017 / 0.176557 (-0.102539) | 0.121987 / 0.737135 (-0.615148) | 0.078958 / 0.296338 (-0.217380) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286306 / 0.215209 (0.071097) | 2.777004 / 2.077655 (0.699350) | 1.481445 / 1.504120 (-0.022675) | 1.348643 / 1.541195 (-0.192552) | 1.382257 / 
1.468490 (-0.086234) | 0.571436 / 4.584777 (-4.013341) | 2.373279 / 3.745712 (-1.372433) | 2.749366 / 5.269862 (-2.520496) | 1.724937 / 4.565676 (-2.840739) | 0.062233 / 0.424275 (-0.362042) | 0.005013 / 0.007607 (-0.002594) | 0.339623 / 0.226044 (0.113579) | 3.385770 / 2.268929 (1.116842) | 1.832023 / 55.444624 (-53.612601) | 1.556172 / 6.876477 (-5.320305) | 1.573301 / 2.142072 (-0.568772) | 0.648866 / 4.805227 (-4.156361) | 0.121228 / 6.500664 (-6.379436) | 0.041684 / 0.075469 (-0.033786) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974595 / 1.841788 (-0.867192) | 11.519692 / 8.074308 (3.445383) | 9.773075 / 10.191392 (-0.418317) | 0.138149 / 0.680424 (-0.542274) | 0.014068 / 0.534201 (-0.520133) | 0.288161 / 0.579283 (-0.291122) | 0.272832 / 0.434364 (-0.161532) | 0.324476 / 0.540337 (-0.215862) | 0.419962 / 1.386936 (-0.966974) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005668 / 0.011353 (-0.005685) | 0.003637 / 0.011008 (-0.007371) | 0.049582 / 0.038508 (0.011074) | 0.030982 / 0.023109 (0.007872) | 0.273036 / 0.275898 (-0.002862) | 0.297562 / 0.323480 (-0.025918) | 0.004382 / 0.007986 (-0.003603) | 0.002763 / 0.004328 (-0.001566) | 0.050807 / 0.004250 (0.046556) | 0.046914 / 0.037052 (0.009862) | 0.287443 / 0.258489 (0.028954) | 0.319694 / 0.293841 (0.025853) | 0.051110 / 0.128546 (-0.077436) | 0.010650 / 0.075646 (-0.064997) | 0.058254 / 0.419271 (-0.361018) | 0.033419 / 0.043533 (-0.010114) | 0.275634 / 0.255139 (0.020495) | 0.288618 / 0.283200 (0.005419) | 0.018004 / 0.141683 (-0.123678) | 1.134166 / 1.452155 (-0.317989) | 1.192533 / 1.492716 (-0.300183) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098573 / 0.018006 (0.080566) | 0.308152 / 0.000490 (0.307662) | 0.000249 / 0.000200 (0.000049) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022443 / 0.037411 (-0.014968) | 0.075628 / 0.014526 (0.061103) | 0.088807 / 0.176557 (-0.087750) | 0.127519 / 0.737135 (-0.609617) | 0.090156 / 0.296338 (-0.206182) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294493 / 0.215209 (0.079284) | 2.862084 / 2.077655 (0.784429) | 1.585962 / 1.504120 (0.081842) | 1.466366 / 1.541195 (-0.074829) | 1.503306 / 1.468490 (0.034816) | 0.581524 / 4.584777 (-4.003253) | 2.475593 / 3.745712 (-1.270120) | 2.852014 / 5.269862 (-2.417847) | 1.834047 / 4.565676 (-2.731630) | 0.064009 / 0.424275 (-0.360266) | 0.005094 / 0.007607 (-0.002514) | 0.355960 / 0.226044 (0.129916) | 3.428849 / 2.268929 (1.159920) | 1.958501 / 55.444624 (-53.486124) | 1.675448 / 6.876477 (-5.201029) | 1.719960 / 2.142072 (-0.422113) | 0.659609 / 4.805227 (-4.145618) | 0.119036 / 6.500664 (-6.381628) | 0.041800 / 0.075469 (-0.033669) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025955 / 1.841788 (-0.815833) | 12.432417 / 8.074308 (4.358108) | 10.444854 / 10.191392 (0.253462) | 0.130106 / 0.680424 (-0.550318) | 0.015655 / 0.534201 (-0.518546) | 0.288184 / 0.579283 (-0.291099) | 0.285023 / 0.434364 (-0.149340) | 0.329244 / 0.540337 (-0.211093) | 0.415484 / 1.386936 (-0.971452) |\n\n</details>\n</details>\n\n\n"
] | 2024-01-13T15:33:20Z
| 2024-02-15T15:20:06Z
| 2024-02-08T14:38:32Z
|
CONTRIBUTOR
| null | null | null |
Fixes #6466
The idea is to do a recursive check for structs. PyArrow handles it well enough.
For a demo you can do:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]})
ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': 'abc@aol.com'}]})
print(concatenate_datasets([ds, ds2]).features)
print(concatenate_datasets([ds, ds2]).to_dict())
```
Now both the features and the rows are fixed.
I note that Sequence suffers from the same problem, so I can fix that in a future PR once this one is merged.
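For illustration, the recursive check could look roughly like the following (a hypothetical sketch, not the PR's actual code; `merge_struct_features` is an invented name):

```python
# Hypothetical sketch: merge two struct-like feature dicts recursively,
# keeping keys that appear on either side and recursing into nested structs.
def merge_struct_features(a: dict, b: dict) -> dict:
    merged = dict(a)
    for key, value in b.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = merge_struct_features(merged[key], value)
        else:
            merged.setdefault(key, value)
    return merged

# e.g. a {'name'} struct and a {'name', 'email'} struct merge to {'name', 'email'}
print(merge_struct_features({"name": "string"}, {"name": "string", "email": "string"}))
```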
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6587/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6587/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6587",
"merged_at": "2024-02-08T14:38:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6587"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6510
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6510/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6510/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6510/events
|
https://github.com/huggingface/datasets/pull/6510
| 2,046,928,742
|
PR_kwDODunzps5iRyiV
| 6,510
|
Replace `list_files_info` with `list_repo_tree` in `push_to_hub`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6510). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"CI errors are unrelated to the changes, so I'm merging.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005161 / 0.011353 (-0.006192) | 0.003494 / 0.011008 (-0.007515) | 0.062601 / 0.038508 (0.024093) | 0.052876 / 0.023109 (0.029767) | 0.255595 / 0.275898 (-0.020303) | 0.283108 / 0.323480 (-0.040371) | 0.003856 / 0.007986 (-0.004130) | 0.002686 / 0.004328 (-0.001642) | 0.048604 / 0.004250 (0.044353) | 0.037886 / 0.037052 (0.000834) | 0.252902 / 0.258489 (-0.005587) | 0.286906 / 0.293841 (-0.006935) | 0.028570 / 0.128546 (-0.099976) | 0.010684 / 0.075646 (-0.064962) | 0.208154 / 0.419271 (-0.211118) | 0.036169 / 0.043533 (-0.007364) | 0.276026 / 0.255139 (0.020887) | 0.272274 / 0.283200 (-0.010925) | 0.017690 / 0.141683 (-0.123993) | 1.202400 / 1.452155 (-0.249755) | 1.231223 / 1.492716 (-0.261494) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095229 / 0.018006 (0.077222) | 0.302205 / 0.000490 (0.301716) | 0.000226 / 0.000200 (0.000026) | 0.000045 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018877 / 0.037411 (-0.018534) | 0.062286 / 0.014526 (0.047760) | 0.075191 / 0.176557 (-0.101366) | 0.121419 / 0.737135 (-0.615716) | 0.075641 / 0.296338 (-0.220697) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282914 / 0.215209 (0.067705) | 2.769156 / 2.077655 (0.691501) | 1.480219 / 1.504120 (-0.023901) | 1.355742 / 1.541195 (-0.185453) | 1.399740 / 
1.468490 (-0.068750) | 0.556365 / 4.584777 (-4.028412) | 2.399679 / 3.745712 (-1.346033) | 2.850510 / 5.269862 (-2.419351) | 1.781428 / 4.565676 (-2.784249) | 0.063045 / 0.424275 (-0.361230) | 0.004931 / 0.007607 (-0.002676) | 0.343743 / 0.226044 (0.117698) | 3.374907 / 2.268929 (1.105978) | 1.857774 / 55.444624 (-53.586851) | 1.577154 / 6.876477 (-5.299323) | 1.626597 / 2.142072 (-0.515475) | 0.653991 / 4.805227 (-4.151236) | 0.121306 / 6.500664 (-6.379358) | 0.042131 / 0.075469 (-0.033339) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948826 / 1.841788 (-0.892962) | 11.922497 / 8.074308 (3.848188) | 10.592334 / 10.191392 (0.400942) | 0.129145 / 0.680424 (-0.551279) | 0.014652 / 0.534201 (-0.519549) | 0.286074 / 0.579283 (-0.293210) | 0.265338 / 0.434364 (-0.169026) | 0.346872 / 0.540337 (-0.193466) | 0.450480 / 1.386936 (-0.936456) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005305 / 0.011353 (-0.006048) | 0.003583 / 0.011008 (-0.007426) | 0.049855 / 0.038508 (0.011347) | 0.052882 / 0.023109 (0.029773) | 0.268429 / 0.275898 (-0.007469) | 0.293375 / 0.323480 (-0.030105) | 0.004052 / 0.007986 (-0.003934) | 0.002685 / 0.004328 (-0.001644) | 0.049206 / 0.004250 (0.044955) | 0.040187 / 0.037052 (0.003135) | 0.270112 / 0.258489 (0.011623) | 0.306380 / 0.293841 (0.012539) | 0.029161 / 0.128546 (-0.099386) | 0.010948 / 0.075646 (-0.064698) | 0.057721 / 0.419271 (-0.361550) | 0.032628 / 0.043533 (-0.010905) | 0.267458 / 0.255139 (0.012319) | 0.291905 / 0.283200 (0.008705) | 0.018096 / 0.141683 (-0.123587) | 1.112744 / 1.452155 (-0.339410) | 1.161962 / 1.492716 (-0.330754) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097449 / 0.018006 (0.079443) | 0.304270 / 0.000490 (0.303780) | 0.000235 / 0.000200 (0.000035) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023550 / 0.037411 (-0.013861) | 0.078246 / 0.014526 (0.063720) | 0.091229 / 0.176557 (-0.085327) | 0.130624 / 0.737135 (-0.606511) | 0.092767 / 0.296338 (-0.203571) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284962 / 0.215209 (0.069753) | 2.761090 / 2.077655 (0.683435) | 1.545409 / 1.504120 (0.041289) | 1.424573 / 1.541195 (-0.116622) | 1.438869 / 1.468490 (-0.029621) | 0.571281 / 4.584777 (-4.013496) | 2.419493 / 3.745712 (-1.326219) | 2.802611 / 5.269862 (-2.467251) | 1.749880 / 4.565676 (-2.815796) | 0.062566 / 0.424275 (-0.361709) | 0.005243 / 0.007607 (-0.002364) | 0.344653 / 0.226044 (0.118608) | 3.367488 / 2.268929 (1.098559) | 1.925871 / 55.444624 (-53.518754) | 1.624258 / 6.876477 (-5.252219) | 1.663742 / 2.142072 (-0.478330) | 0.634553 / 4.805227 (-4.170675) | 0.116745 / 6.500664 (-6.383919) | 0.041734 / 0.075469 (-0.033735) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.006808 / 1.841788 (-0.834980) | 12.499711 / 8.074308 (4.425403) | 10.956260 / 10.191392 (0.764868) | 0.132393 / 0.680424 (-0.548031) | 0.015924 / 0.534201 (-0.518277) | 0.289837 / 0.579283 (-0.289446) | 0.281565 / 0.434364 (-0.152799) | 0.337393 / 0.540337 (-0.202945) | 0.560385 / 1.386936 (-0.826551) |\n\n</details>\n</details>\n\n\n"
] | 2023-12-18T15:34:19Z
| 2023-12-19T18:05:47Z
| 2023-12-19T17:58:34Z
|
COLLABORATOR
| null | null | null |
Starting from `huggingface_hub` 0.20.0, `list_files_info` will be deprecated in favor of `list_repo_tree` (see https://github.com/huggingface/huggingface_hub/pull/1910)
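For reference, the replacement API can be used like this (a minimal sketch; the repo name is hypothetical):

```python
from huggingface_hub import HfApi

# list_repo_tree iterates over the entries (files and folders) of a repo
# on the Hub, which covers what push_to_hub used list_files_info for.
api = HfApi()
for entry in api.list_repo_tree("username/my-dataset", repo_type="dataset"):
    print(entry.path)
```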
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6510/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6510/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6510.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6510",
"merged_at": "2023-12-19T17:58:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6510.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6510"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5710
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5710/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5710/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5710/events
|
https://github.com/huggingface/datasets/issues/5710
| 1,655,703,534
|
I_kwDODunzps5isAfu
| 5,710
|
OSError: Memory mapping file failed: Cannot allocate memory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4",
"events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}",
"followers_url": "https://api.github.com/users/Saibo-creator/followers",
"following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}",
"gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Saibo-creator",
"id": 53392976,
"login": "Saibo-creator",
"node_id": "MDQ6VXNlcjUzMzkyOTc2",
"organizations_url": "https://api.github.com/users/Saibo-creator/orgs",
"received_events_url": "https://api.github.com/users/Saibo-creator/received_events",
"repos_url": "https://api.github.com/users/Saibo-creator/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Saibo-creator",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi! This error means that PyArrow's internal [`mmap`](https://man7.org/linux/man-pages/man2/mmap.2.html) call failed to allocate memory, which can be tricky to debug. Since this error is more related to PyArrow than us, I think it's best to report this issue in their [repo](https://github.com/apache/arrow) (they are more experienced on this matter). Also, googling \"mmap cannot allocate memory\" returns some approaches to solving this problem."
] | 2023-04-05T14:11:26Z
| 2023-04-20T17:16:40Z
| 2023-04-20T17:16:40Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hello, I have a series of datasets, each of 5 GB, 600 datasets in total, so together this makes 3 TB.
When I try to load all 600 datasets into memory, I get the above error message.
Is this normal? Am I hitting the OS limit on memory mappings?
Thank you
```terminal
0_21/cache-e9c42499f65b1881.arrow
load_hf_datasets_from_disk:  82%|█████████████████████████████████████████████████████████████████████████████████████████████████████                       | 494/600 [07:26<01:35,  1.11it/s]
Traceback (most recent call last):
File "example_load_genkalm_dataset.py", line 35, in <module>
multi_ds.post_process(max_node_num=args.max_node_num,max_seq_length=args.max_seq_length,delay=args.delay)
File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 142, in post_process
genkalm_dataset = GenKaLM_Dataset.from_hf_dataset(path_or_name=ds_path, max_seq_length=self.max_seq_length,
File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 47, in from_hf_dataset
hf_ds = load_from_disk(path_or_name)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/load.py", line 1848, in load_from_disk
return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1549, in load_from_disk
arrow_table = concat_tables(
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1805, in concat_tables
tables = list(tables)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1550, in <genexpr>
table_cls.from_file(Path(dataset_path, data_file["filename"]).as_posix())
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1065, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 50, in _memory_mapped_arrow_table_from_file
memory_mapped_stream = pa.memory_map(filename)
File "pyarrow/io.pxi", line 950, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 911, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
### Steps to reproduce the bug
Sorry, I cannot provide reproducible code, as the data is stored on my server and is too large to share.
### Expected behavior
I expect the 3 TB of data to be fully memory-mapped.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-4.15.0-204-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyArrow version: 11.0.0
- Pandas version: 1.0.1
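Per the traceback, the failure is raised inside `pa.memory_map`, which `datasets` calls once per Arrow file. A minimal sketch of that step (the file path is a placeholder):

```python
import pyarrow as pa

# datasets memory-maps each .arrow file instead of reading it into RAM;
# every file consumes OS memory-mapping resources, so loading hundreds of
# large files can run into kernel limits on mappings.
source = pa.memory_map("example.arrow")  # placeholder path
table = pa.ipc.open_stream(source).read_all()
```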
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5710/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5710/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6242
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6242/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6242/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6242/events
|
https://github.com/huggingface/datasets/issues/6242
| 1,896,899,123
|
I_kwDODunzps5xEGIz
| 6,242
|
Data alteration when loading dataset with unspecified inner sequence length
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"While this issue may seem specific, it led to a silent problem in my workflow that took days to diagnose. If this feature is not intended to be supported, an error should be raised when encountering this configuration to prevent such issues.",
"Thanks for reporting! This is a MRE:\r\n\r\n```python\r\nimport pyarrow as pa\r\nfrom datasets.table import cast_array_to_feature\r\nfrom datasets import Sequence, Value\r\ndata = [\r\n [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],\r\n [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]],\r\n]\r\narr = pa.array(data, pa.list_(pa.list_(pa.float32(), 3)))\r\ncast_array_to_feature(arr, Sequence(Sequence(Value(\"float32\"))))\r\n```\r\n\r\nI've opened a PR with a fix."
] | 2023-09-14T16:12:45Z
| 2023-09-19T17:53:18Z
| 2023-09-19T17:53:18Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When a dataset saved with a specified inner sequence length is loaded without specifying that length, the original data is altered and becomes inconsistent.
### Steps to reproduce the bug
```python
from datasets import Dataset, Features, Value, Sequence, load_dataset
# Repository ID
repo_id = "my_repo_id"
# Define features with a specific length of 3 for each inner sequence
specified_features = Features({"key": Sequence(Sequence(Value("float32"), length=3))})
# Create a dataset with the specified features
data = [
[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]],
[[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]],
]
dataset = Dataset.from_dict({"key": data}, features=specified_features)
# Push the dataset to the hub
dataset.push_to_hub(repo_id)
# Define features without specifying the length
unspecified_features = Features({"key": Sequence(Sequence(Value("float32")))})
# Load the dataset from the hub with this new feature definition
dataset = load_dataset(f"qgallouedec/{repo_id}", split="train", features=unspecified_features)
# The obtained data is altered
print(dataset.to_dict()) # {'key': [[[1.0], [2.0]], [[3.0], [4.0]]]}
```
### Expected behavior
```python
print(dataset.to_dict()) # {'key': [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], [[7.0, 8.0, 9.0], [10.0, 11.0, 12.0]]]}
```
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-6.2.0-32-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
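For context, the mismatch is between PyArrow's fixed-size and variable-size list types (a small sketch; the printed type strings are approximate):

```python
import pyarrow as pa

# The saved feature corresponds to a fixed-size inner list, while the
# unspecified Sequence maps to a plain variable-size list.
data = [[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]]
fixed = pa.array(data, pa.list_(pa.list_(pa.float32(), 3)))
plain = pa.list_(pa.list_(pa.float32()))
print(fixed.type)  # list<item: fixed_size_list<item: float>[3]>
print(plain)       # list<item: list<item: float>>
```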
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6242/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6242/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4985
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4985/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4985/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4985/events
|
https://github.com/huggingface/datasets/pull/4985
| 1,375,807,768
|
PR_kwDODunzps4_F6kU
| 4,985
|
Prefer split patterns from directories over split patterns from filenames
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Can we merge this one since the issue this PR fixes was reported for the second time? I also think we don't need a test for this simple change.",
"@mariosasko sure! could you please approve it? ",
"Hi there @polinaeterna @mariosasko! I have installed 5.2.3.dev0, which should have this fix. Unfortunately, I am still getting the error:\r\n`ValueError: Unknown split \"validation\". Should be one of ['train'].` When I call `load_dataset(\"csv\", data_files=files, split=split)`\r\n\r\nAny help would be greatly appreciated!"
] | 2022-09-16T11:20:40Z
| 2022-11-02T11:54:28Z
| 2022-09-29T08:07:49Z
|
CONTRIBUTOR
| null | null | null |
related to https://github.com/huggingface/datasets/issues/4895
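A hedged illustration of the two conventions this PR arbitrates between (the repo name and file layout are hypothetical):

```python
from datasets import load_dataset

# With no explicit data_files, splits are inferred from path patterns.
# If a repo mixes both conventions, e.g.
#   data/train/part-0.csv   (split encoded in the directory name)
#   test-00000.csv          (split encoded in the file name)
# the directory-based pattern now takes precedence.
ds = load_dataset("username/my-dataset")
print(ds)  # splits derived from the directory names
```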
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4985/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4985/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4985.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4985",
"merged_at": "2022-09-29T08:07:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4985.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4985"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7002
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7002/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7002/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7002/events
|
https://github.com/huggingface/datasets/pull/7002
| 2,373,010,351
|
PR_kwDODunzps5zhBld
| 7,002
|
Fix dump of bfloat16 torch tensor
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7002). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005321 / 0.011353 (-0.006032) | 0.003495 / 0.011008 (-0.007514) | 0.065577 / 0.038508 (0.027069) | 0.030876 / 0.023109 (0.007767) | 0.255216 / 0.275898 (-0.020682) | 0.265111 / 0.323480 (-0.058368) | 0.003149 / 0.007986 (-0.004837) | 0.004062 / 0.004328 (-0.000267) | 0.051142 / 0.004250 (0.046891) | 0.042460 / 0.037052 (0.005408) | 0.270692 / 0.258489 (0.012203) | 0.284957 / 0.293841 (-0.008884) | 0.030143 / 0.128546 (-0.098403) | 0.012148 / 0.075646 (-0.063498) | 0.203706 / 0.419271 (-0.215565) | 0.035948 / 0.043533 (-0.007584) | 0.251391 / 0.255139 (-0.003748) | 0.270908 / 0.283200 (-0.012292) | 0.018496 / 0.141683 (-0.123187) | 1.118567 / 1.452155 (-0.333587) | 1.157695 / 1.492716 (-0.335021) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.135649 / 0.018006 (0.117643) | 0.281489 / 0.000490 (0.281000) | 0.000244 / 0.000200 (0.000044) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018700 / 0.037411 (-0.018711) | 0.062305 / 0.014526 (0.047779) | 0.074968 / 0.176557 (-0.101589) | 0.121490 / 0.737135 (-0.615645) | 0.075585 / 0.296338 (-0.220754) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276929 / 0.215209 (0.061720) | 2.733543 / 2.077655 (0.655888) | 1.414585 / 1.504120 (-0.089535) | 1.301975 / 1.541195 (-0.239220) | 1.336698 / 
1.468490 (-0.131792) | 0.720650 / 4.584777 (-3.864127) | 2.374796 / 3.745712 (-1.370917) | 2.866534 / 5.269862 (-2.403327) | 1.819607 / 4.565676 (-2.746069) | 0.077914 / 0.424275 (-0.346361) | 0.005146 / 0.007607 (-0.002461) | 0.331722 / 0.226044 (0.105678) | 3.290875 / 2.268929 (1.021946) | 1.799806 / 55.444624 (-53.644818) | 1.476816 / 6.876477 (-5.399660) | 1.511441 / 2.142072 (-0.630631) | 0.798043 / 4.805227 (-4.007185) | 0.134577 / 6.500664 (-6.366087) | 0.042055 / 0.075469 (-0.033415) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.967908 / 1.841788 (-0.873880) | 11.215688 / 8.074308 (3.141380) | 9.486403 / 10.191392 (-0.704989) | 0.141864 / 0.680424 (-0.538560) | 0.013462 / 0.534201 (-0.520739) | 0.302601 / 0.579283 (-0.276682) | 0.266870 / 0.434364 (-0.167494) | 0.336963 / 0.540337 (-0.203375) | 0.425374 / 1.386936 (-0.961562) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005549 / 0.011353 (-0.005803) | 0.003464 / 0.011008 (-0.007544) | 0.051421 / 0.038508 (0.012913) | 0.032320 / 0.023109 (0.009211) | 0.269591 / 0.275898 (-0.006307) | 0.292015 / 0.323480 (-0.031465) | 0.004351 / 0.007986 (-0.003634) | 0.002772 / 0.004328 (-0.001556) | 0.048836 / 0.004250 (0.044586) | 0.039501 / 0.037052 (0.002449) | 0.282419 / 0.258489 (0.023930) | 0.312289 / 0.293841 (0.018448) | 0.031788 / 0.128546 (-0.096759) | 0.012074 / 0.075646 (-0.063572) | 0.060457 / 0.419271 (-0.358814) | 0.033106 / 0.043533 (-0.010427) | 0.270323 / 0.255139 (0.015184) | 0.287855 / 0.283200 (0.004655) | 0.017865 / 0.141683 (-0.123818) | 1.130406 / 1.452155 (-0.321749) | 1.178679 / 1.492716 (-0.314038) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093606 / 0.018006 (0.075600) | 0.297328 / 0.000490 (0.296838) | 0.000211 / 0.000200 (0.000011) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022498 / 0.037411 (-0.014913) | 0.076927 / 0.014526 (0.062401) | 0.088013 / 0.176557 (-0.088544) | 0.127279 / 0.737135 (-0.609857) | 0.089424 / 0.296338 (-0.206914) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296441 / 0.215209 (0.081232) | 2.913051 / 2.077655 (0.835396) | 1.581816 / 1.504120 (0.077696) | 1.451575 / 1.541195 (-0.089620) | 1.458968 / 1.468490 (-0.009522) | 0.727191 / 4.584777 (-3.857586) | 0.954607 / 3.745712 (-2.791106) | 2.824357 / 5.269862 (-2.445505) | 1.886779 / 4.565676 (-2.678898) | 0.079397 / 0.424275 (-0.344878) | 0.005566 / 0.007607 (-0.002041) | 0.351655 / 0.226044 (0.125611) | 3.395790 / 2.268929 (1.126862) | 1.886238 / 55.444624 (-53.558387) | 1.615413 / 6.876477 (-5.261064) | 1.723922 / 2.142072 (-0.418150) | 0.807858 / 4.805227 (-3.997369) | 0.132998 / 6.500664 (-6.367667) | 0.040396 / 0.075469 (-0.035073) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.008527 / 1.841788 (-0.833261) | 11.736104 / 8.074308 (3.661796) | 10.283367 / 10.191392 (0.091975) | 0.141386 / 0.680424 (-0.539038) | 0.015722 / 0.534201 (-0.518479) | 0.301785 / 0.579283 (-0.277498) | 0.123073 / 0.434364 (-0.311291) | 0.340478 / 0.540337 (-0.199859) | 0.462936 / 1.386936 (-0.924000) |\n\n</details>\n</details>\n\n\n"
] | 2024-06-25T15:38:09Z
| 2024-06-25T16:10:16Z
| 2024-06-25T15:51:52Z
|
MEMBER
| null | null | null |
close https://github.com/huggingface/datasets/issues/7000
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7002/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7002/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7002.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7002",
"merged_at": "2024-06-25T15:51:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7002.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7002"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4571
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4571/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4571/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4571/events
|
https://github.com/huggingface/datasets/issues/4571
| 1,284,883,289
|
I_kwDODunzps5MlcNZ
| 4,571
|
move under the facebook org?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun",
"user_view_type": "public"
}
|
[] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Related to https://github.com/huggingface/datasets/issues/4562#issuecomment-1166911751\r\n\r\nI'll assign @albertvillanova ",
"I'm just wondering why we don't have this dataset under:\r\n- the `facebook` namespace\r\n- or the canonical dataset `flores`: why does this only have 2 languages?",
"fwiw: the dataset viewer is working. Renaming the issue"
] | 2022-06-26T11:19:09Z
| 2023-09-25T12:05:18Z
| null |
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Link
https://huggingface.co/datasets/gsarti/flores_101
### Description
It seems like streaming isn't supported for this dataset:
```
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
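For loading-script maintainers, a hedged sketch of the suggested fix: stream the archive with `dl_manager.iter_archive` instead of extracting it (the builder skeleton below is illustrative, with `_info` and configs omitted):
```python
import datasets

_URL = "https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz"

class Flores101(datasets.GeneratorBasedBuilder):
    # _info(), configs, and label handling omitted; only the streaming path is sketched

    def _split_generators(self, dl_manager):
        archive_path = dl_manager.download(_URL)  # download without extraction
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # iter_archive lazily yields (path_inside_archive, file_object) pairs
                gen_kwargs={"files": dl_manager.iter_archive(archive_path)},
            )
        ]

    def _generate_examples(self, files):
        for idx, (path, f) in enumerate(files):
            yield idx, {"path": path, "text": f.read().decode("utf-8")}
```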
### Owner
No
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4571/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4571/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5485
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5485/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5485/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5485/events
|
https://github.com/huggingface/datasets/pull/5485
| 1,563,002,829
|
PR_kwDODunzps5I2ER2
| 5,485
|
Add section in tutorial for IterableDataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008492 / 0.011353 (-0.002861) | 0.004717 / 0.011008 (-0.006292) | 0.101111 / 0.038508 (0.062602) | 0.029129 / 0.023109 (0.006019) | 0.307564 / 0.275898 (0.031666) | 0.367038 / 0.323480 (0.043558) | 0.007105 / 0.007986 (-0.000881) | 0.003622 / 0.004328 (-0.000706) | 0.078370 / 0.004250 (0.074120) | 0.036960 / 0.037052 (-0.000093) | 0.315612 / 0.258489 (0.057123) | 0.353601 / 0.293841 (0.059760) | 0.032900 / 0.128546 (-0.095647) | 0.011405 / 0.075646 (-0.064241) | 0.322331 / 0.419271 (-0.096940) | 0.040823 / 0.043533 (-0.002710) | 0.306734 / 0.255139 (0.051595) | 0.328155 / 0.283200 (0.044955) | 0.087169 / 0.141683 (-0.054514) | 1.460543 / 1.452155 (0.008389) | 1.498094 / 1.492716 (0.005378) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011863 / 0.018006 (-0.006143) | 0.416315 / 0.000490 (0.415826) | 0.003463 / 0.000200 (0.003263) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023219 / 0.037411 (-0.014192) | 0.096469 / 0.014526 (0.081943) | 0.105960 / 0.176557 (-0.070596) | 0.148993 / 0.737135 (-0.588142) | 0.108112 / 0.296338 (-0.188226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415662 / 0.215209 (0.200453) | 4.155111 / 2.077655 (2.077456) | 1.834943 / 1.504120 (0.330823) | 1.622752 / 1.541195 (0.081557) | 1.701630 / 1.468490 
(0.233140) | 0.690596 / 4.584777 (-3.894181) | 3.399385 / 3.745712 (-0.346327) | 3.140521 / 5.269862 (-2.129341) | 1.609152 / 4.565676 (-2.956524) | 0.082132 / 0.424275 (-0.342143) | 0.012343 / 0.007607 (0.004735) | 0.532715 / 0.226044 (0.306670) | 5.323032 / 2.268929 (3.054104) | 2.326625 / 55.444624 (-53.118000) | 1.944263 / 6.876477 (-4.932213) | 1.994015 / 2.142072 (-0.148058) | 0.813805 / 4.805227 (-3.991422) | 0.149233 / 6.500664 (-6.351431) | 0.065318 / 0.075469 (-0.010151) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212441 / 1.841788 (-0.629347) | 13.979069 / 8.074308 (5.904761) | 14.003998 / 10.191392 (3.812606) | 0.146956 / 0.680424 (-0.533468) | 0.028564 / 0.534201 (-0.505637) | 0.392370 / 0.579283 (-0.186913) | 0.399695 / 0.434364 (-0.034669) | 0.473481 / 0.540337 (-0.066856) | 0.562625 / 1.386936 (-0.824311) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006821 / 0.011353 (-0.004532) | 0.004570 / 0.011008 (-0.006438) | 0.076217 / 0.038508 (0.037709) | 0.028888 / 0.023109 (0.005779) | 0.345431 / 0.275898 (0.069533) | 0.389246 / 0.323480 (0.065766) | 0.005939 / 0.007986 (-0.002046) | 0.003356 / 0.004328 (-0.000973) | 0.075880 / 0.004250 (0.071629) | 0.041427 / 0.037052 (0.004374) | 0.344481 / 0.258489 (0.085992) | 0.398508 / 0.293841 (0.104667) | 0.031801 / 0.128546 (-0.096745) | 0.011763 / 0.075646 (-0.063884) | 0.085600 / 0.419271 (-0.333672) | 0.042656 / 0.043533 (-0.000876) | 0.345893 / 0.255139 (0.090754) | 0.376910 / 0.283200 (0.093711) | 0.092451 / 0.141683 (-0.049232) | 1.461222 / 1.452155 (0.009068) | 1.555822 / 1.492716 (0.063106) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235781 / 0.018006 (0.217774) | 0.418485 / 0.000490 (0.417995) | 0.005560 / 0.000200 (0.005360) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025410 / 0.037411 (-0.012001) | 0.103780 / 0.014526 (0.089254) | 0.110183 / 0.176557 (-0.066374) | 0.151097 / 0.737135 (-0.586039) | 0.112539 / 0.296338 (-0.183799) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436686 / 0.215209 (0.221477) | 4.341594 / 2.077655 (2.263940) | 2.062309 / 1.504120 (0.558190) | 1.857461 / 1.541195 (0.316267) | 1.947204 / 1.468490 (0.478713) | 0.699641 / 4.584777 (-3.885136) | 3.406983 / 3.745712 (-0.338729) | 3.294705 / 5.269862 (-1.975157) | 1.360582 / 4.565676 (-3.205095) | 0.083025 / 0.424275 (-0.341250) | 0.012461 / 0.007607 (0.004854) | 0.537767 / 0.226044 (0.311722) | 5.393316 / 2.268929 (3.124387) | 2.516692 / 55.444624 (-52.927932) | 2.163987 / 6.876477 (-4.712490) | 2.220480 / 2.142072 (0.078408) | 0.810648 / 4.805227 (-3.994579) | 0.151820 / 6.500664 (-6.348844) | 0.068080 / 0.075469 (-0.007389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279382 / 1.841788 (-0.562405) | 13.989947 / 8.074308 (5.915638) | 14.039229 / 10.191392 (3.847836) | 0.141071 / 0.680424 (-0.539352) | 0.017118 / 0.534201 (-0.517083) | 0.381558 / 0.579283 (-0.197725) | 0.390407 / 0.434364 (-0.043957) | 0.440920 / 0.540337 (-0.099418) | 0.525478 / 1.386936 (-0.861458) |\n\n</details>\n</details>\n\n\n"
] | 2023-01-30T18:43:04Z
| 2023-02-01T18:15:38Z
| 2023-02-01T18:08:46Z
|
MEMBER
| null | null | null |
Introduces the `IterableDataset` in the tutorial section and shows how to access it. It also adds a brief next-steps section at the end, offering one path for users who want more explanation and another for users who want something more practical and to learn how to preprocess these dataset types (see the usage sketch below). It'll complement the awesome new doc introduced in:
- #5410
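As a taste of what the new section covers, a minimal usage sketch (the dataset name here is illustrative):
```python
from datasets import load_dataset

# streaming=True returns an IterableDataset instead of a map-style Dataset
ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
print(next(iter(ids)))  # fetches only the first example, no full download
```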
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5485/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5485/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5485.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5485",
"merged_at": "2023-02-01T18:08:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5485.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5485"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7046
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7046/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7046/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7046/events
|
https://github.com/huggingface/datasets/pull/7046
| 2,405,485,582
|
PR_kwDODunzps51N05n
| 7,046
|
Support librosa and numpy 2.0 for Python 3.10
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7046). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005897 / 0.011353 (-0.005456) | 0.003958 / 0.011008 (-0.007050) | 0.063684 / 0.038508 (0.025176) | 0.031743 / 0.023109 (0.008634) | 0.246725 / 0.275898 (-0.029173) | 0.275519 / 0.323480 (-0.047961) | 0.003347 / 0.007986 (-0.004639) | 0.004089 / 0.004328 (-0.000240) | 0.049591 / 0.004250 (0.045341) | 0.049386 / 0.037052 (0.012333) | 0.264929 / 0.258489 (0.006440) | 0.317157 / 0.293841 (0.023316) | 0.029929 / 0.128546 (-0.098617) | 0.012264 / 0.075646 (-0.063382) | 0.209208 / 0.419271 (-0.210064) | 0.037073 / 0.043533 (-0.006460) | 0.247999 / 0.255139 (-0.007140) | 0.273457 / 0.283200 (-0.009742) | 0.020354 / 0.141683 (-0.121328) | 1.109874 / 1.452155 (-0.342281) | 1.180085 / 1.492716 (-0.312631) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099935 / 0.018006 (0.081929) | 0.305607 / 0.000490 (0.305118) | 0.000214 / 0.000200 (0.000014) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.020019 / 0.037411 (-0.017392) | 0.066608 / 0.014526 (0.052083) | 0.079354 / 0.176557 (-0.097202) | 0.123416 / 0.737135 (-0.613719) | 0.078171 / 0.296338 (-0.218167) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281627 / 0.215209 (0.066418) | 2.809807 / 2.077655 (0.732152) | 1.467007 / 1.504120 (-0.037112) | 1.351367 / 1.541195 (-0.189828) | 1.396782 / 
1.468490 (-0.071708) | 0.735605 / 4.584777 (-3.849172) | 2.378455 / 3.745712 (-1.367257) | 2.971739 / 5.269862 (-2.298122) | 2.004970 / 4.565676 (-2.560707) | 0.078156 / 0.424275 (-0.346119) | 0.005276 / 0.007607 (-0.002331) | 0.340370 / 0.226044 (0.114325) | 3.347552 / 2.268929 (1.078624) | 1.851098 / 55.444624 (-53.593527) | 1.518079 / 6.876477 (-5.358398) | 1.703145 / 2.142072 (-0.438927) | 0.799574 / 4.805227 (-4.005654) | 0.133591 / 6.500664 (-6.367074) | 0.043329 / 0.075469 (-0.032141) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977268 / 1.841788 (-0.864520) | 12.720209 / 8.074308 (4.645901) | 9.798126 / 10.191392 (-0.393266) | 0.132106 / 0.680424 (-0.548318) | 0.014456 / 0.534201 (-0.519745) | 0.312965 / 0.579283 (-0.266318) | 0.271348 / 0.434364 (-0.163016) | 0.343951 / 0.540337 (-0.196386) | 0.449814 / 1.386936 (-0.937122) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005944 / 0.011353 (-0.005409) | 0.004054 / 0.011008 (-0.006954) | 0.050573 / 0.038508 (0.012065) | 0.034580 / 0.023109 (0.011470) | 0.261439 / 0.275898 (-0.014459) | 0.286057 / 0.323480 (-0.037423) | 0.004463 / 0.007986 (-0.003523) | 0.002891 / 0.004328 (-0.001437) | 0.049169 / 0.004250 (0.044919) | 0.041622 / 0.037052 (0.004570) | 0.275216 / 0.258489 (0.016727) | 0.305847 / 0.293841 (0.012006) | 0.032615 / 0.128546 (-0.095932) | 0.012304 / 0.075646 (-0.063343) | 0.062890 / 0.419271 (-0.356382) | 0.033846 / 0.043533 (-0.009687) | 0.262758 / 0.255139 (0.007619) | 0.279451 / 0.283200 (-0.003748) | 0.018953 / 0.141683 (-0.122730) | 1.149158 / 1.452155 (-0.302997) | 1.173981 / 1.492716 (-0.318735) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.100462 / 0.018006 (0.082456) | 0.308390 / 0.000490 (0.307900) | 0.000207 / 0.000200 (0.000007) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023089 / 0.037411 (-0.014322) | 0.078610 / 0.014526 (0.064084) | 0.090348 / 0.176557 (-0.086208) | 0.130784 / 0.737135 (-0.606351) | 0.092538 / 0.296338 (-0.203801) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296255 / 0.215209 (0.081046) | 2.899159 / 2.077655 (0.821504) | 1.603524 / 1.504120 (0.099404) | 1.418002 / 1.541195 (-0.123192) | 1.470221 / 1.468490 (0.001731) | 0.722129 / 4.584777 (-3.862648) | 0.956146 / 3.745712 (-2.789566) | 3.011640 / 5.269862 (-2.258222) | 1.910966 / 4.565676 (-2.654711) | 0.078771 / 0.424275 (-0.345504) | 0.005154 / 0.007607 (-0.002453) | 0.354001 / 0.226044 (0.127956) | 3.484224 / 2.268929 (1.215296) | 1.913612 / 55.444624 (-53.531012) | 1.634492 / 6.876477 (-5.241985) | 1.693292 / 2.142072 (-0.448780) | 0.816837 / 4.805227 (-3.988390) | 0.136631 / 6.500664 (-6.364033) | 0.042291 / 0.075469 (-0.033178) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.994887 / 1.841788 (-0.846901) | 13.144865 / 8.074308 (5.070557) | 10.820098 / 10.191392 (0.628706) | 0.132557 / 0.680424 (-0.547867) | 0.015467 / 0.534201 (-0.518734) | 0.302026 / 0.579283 (-0.277257) | 0.128763 / 0.434364 (-0.305601) | 0.347908 / 0.540337 (-0.192430) | 0.444829 / 1.386936 (-0.942107) |\n\n</details>\n</details>\n\n\n"
] | 2024-07-12T12:42:47Z
| 2024-07-12T13:04:40Z
| 2024-07-12T12:58:17Z
|
MEMBER
| null | null | null |
Support librosa and numpy 2.0 for Python 3.10 by installing the soxr 0.4.0b1 pre-release (install sketch below):
- https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0b1
- https://github.com/dofuuz/python-soxr/issues/28
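For reference, a hedged sketch of how the pre-release would typically be pulled in (pip's `--pre` flag is required for beta versions):
```shell
pip install --pre "soxr>=0.4.0b1"
```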
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7046/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7046/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7046.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7046",
"merged_at": "2024-07-12T12:58:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7046.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7046"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7467
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7467/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7467/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7467/events
|
https://github.com/huggingface/datasets/issues/7467
| 2,930,067,107
|
I_kwDODunzps6upUaj
| 7,467
|
load_dataset with streaming hangs on parquet datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10550252?v=4",
"events_url": "https://api.github.com/users/The0nix/events{/privacy}",
"followers_url": "https://api.github.com/users/The0nix/followers",
"following_url": "https://api.github.com/users/The0nix/following{/other_user}",
"gists_url": "https://api.github.com/users/The0nix/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The0nix",
"id": 10550252,
"login": "The0nix",
"node_id": "MDQ6VXNlcjEwNTUwMjUy",
"organizations_url": "https://api.github.com/users/The0nix/orgs",
"received_events_url": "https://api.github.com/users/The0nix/received_events",
"repos_url": "https://api.github.com/users/The0nix/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The0nix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The0nix/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The0nix",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! The issue comes from `pyarrow`, I reported it here: https://github.com/apache/arrow/issues/45214 (feel free to comment / thumb up).\n\nAlternatively we can try to find something else than `ParquetFileFragment.to_batches()` to iterate on Parquet data and keep the option the pass `filters=`..."
] | 2025-03-18T23:33:54Z
| 2025-03-25T10:28:04Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When I try to load a dataset with parquet files (e.g. "bigcode/the-stack"), the dataset loads, but the Python interpreter can't exit and hangs
### Steps to reproduce the bug
```python3
import datasets
print('Start')
dataset = datasets.load_dataset("bigcode/the-stack", data_dir="data/yaml", streaming=True, split="train")
it = iter(dataset)
next(it)
print('Finish')
```
The program prints `Finish` but doesn't exit and hangs indefinitely.
I tried this on two different machines and with several datasets.
### Expected behavior
The program exits successfully
### Environment info
datasets==3.4.1
Python 3.12.9.
MacOS and Ubuntu Linux
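Until the upstream `pyarrow` issue is resolved, a hedged workaround sketch is to read Parquet batches with `pyarrow` directly, bypassing the fragment API (the file path below is hypothetical):
```python
import pyarrow.parquet as pq

# ParquetFile.iter_batches reads record batches lazily without ParquetFileFragment
pf = pq.ParquetFile("data/example.parquet")  # hypothetical local file
for batch in pf.iter_batches(batch_size=1024):
    print(batch.num_rows)
    break
```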
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7467/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7467/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5624
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5624/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5624/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5624/events
|
https://github.com/huggingface/datasets/issues/5624
| 1,617,400,192
|
I_kwDODunzps5gZ5GA
| 5,624
|
glue datasets returning -1 for test split
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8939967?v=4",
"events_url": "https://api.github.com/users/lithafnium/events{/privacy}",
"followers_url": "https://api.github.com/users/lithafnium/followers",
"following_url": "https://api.github.com/users/lithafnium/following{/other_user}",
"gists_url": "https://api.github.com/users/lithafnium/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lithafnium",
"id": 8939967,
"login": "lithafnium",
"node_id": "MDQ6VXNlcjg5Mzk5Njc=",
"organizations_url": "https://api.github.com/users/lithafnium/orgs",
"received_events_url": "https://api.github.com/users/lithafnium/received_events",
"repos_url": "https://api.github.com/users/lithafnium/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lithafnium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lithafnium/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lithafnium",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @lithafnium, thanks for reporting.\r\n\r\nPlease note that you can use the \"Community\" tab in the corresponding dataset page to start any discussion: https://huggingface.co/datasets/glue/discussions\r\n\r\nIndeed this issue was already raised there (https://huggingface.co/datasets/glue/discussions/5) and answered: https://huggingface.co/datasets/glue/discussions/5#63907885937867f0cb3cde31\r\n> The test labels are not public.\r\n>\r\n> Note this dataset belongs to a benchmark: people send their predictions for the test split to GLUE (https://gluebenchmark.com/) and then they get a score in their leaderboard...\r\n"
] | 2023-03-09T14:47:18Z
| 2023-03-09T16:49:29Z
| 2023-03-09T16:49:29Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Any dataset downloaded from GLUE has -1 as the class label for the test split. The train and validation splits have regular 0/1 class labels. This is also visible in the dataset card online.
### Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
for d in dataset["test"]:
    # prints out -1
    print(d["label"])
```
### Expected behavior
Expected behavior should be 0/1 instead of -1.
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
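Since the test labels are intentionally withheld, a hedged sketch for local work is to evaluate on the validation split instead:
```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
# GLUE test labels are withheld (all -1); use the validation split for local evaluation
eval_split = dataset["validation"]
print(set(eval_split["label"]))  # {0, 1}
```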
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5624/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5624/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4961
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4961/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4961/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4961/events
|
https://github.com/huggingface/datasets/issues/4961
| 1,368,124,033
|
I_kwDODunzps5Ri-qB
| 4,961
|
fsspec 2022.8.2 breaks xopen in streaming mode
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3616964?v=4",
"events_url": "https://api.github.com/users/DCNemesis/events{/privacy}",
"followers_url": "https://api.github.com/users/DCNemesis/followers",
"following_url": "https://api.github.com/users/DCNemesis/following{/other_user}",
"gists_url": "https://api.github.com/users/DCNemesis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DCNemesis",
"id": 3616964,
"login": "DCNemesis",
"node_id": "MDQ6VXNlcjM2MTY5NjQ=",
"organizations_url": "https://api.github.com/users/DCNemesis/orgs",
"received_events_url": "https://api.github.com/users/DCNemesis/received_events",
"repos_url": "https://api.github.com/users/DCNemesis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DCNemesis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DCNemesis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DCNemesis",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.",
"Opened [PR](https://github.com/huggingface/datasets/pull/4962) to address this.",
"Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` releases 2022.8.0 and 2022.8.1. But they fixed it in their patch release 2022.8.2 (and yanked both previous versions). See:\r\n- https://github.com/huggingface/transformers/pull/18846\r\n\r\nAre you sure you have version 2022.8.2 installed?\r\n```shell\r\npip install -U fsspec\r\n```\r\n",
"@albertvillanova I was using a temporary Google Colab instance, but checking it again today it seems it was loading 2022.8.1 rather than 2022.8.2. It's surprising that colab is using the version that was replaced the same day it was released. Testing with 2022.8.2 did work. It appears Colab [will be fixing it](https://github.com/googlecolab/colabtools/issues/3055) on their end too. ",
"Thanks for the additional information.\r\n\r\nOnce we know 2022.8.2 works, I'm closing this issue. Feel free to reopen it if necessary.",
"Colab just upgraded their default `fsspec` version to 2022.8.2:\r\n- https://github.com/googlecolab/colabtools/issues/3055#issuecomment-1244019010"
] | 2022-09-09T17:26:55Z
| 2022-09-12T17:45:50Z
| 2022-09-12T14:32:05Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable.
## Steps to reproduce the bug
```python
import datasets
data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True)
```
## Expected results
Dataset should load as iterator.
## Actual results
```
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1737 # Return iterable dataset in case of streaming
1738 if streaming:
-> 1739 return builder_instance.as_streaming_dataset(split=split)
1740
1741 # Some datasets are already processed on the HF google storage
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
1023 )
1024 self._check_manual_download(dl_manager)
-> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
1026 # By default, return all splits
1027 if split is None:
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _split_generators(self, dl_manager)
182 name=datasets.Split.TRAIN,
183 gen_kwargs={
--> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages],
185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in
186 self.config.languages] if not dl_manager.is_streaming else None,
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in <listcomp>(.0)
182 name=datasets.Split.TRAIN,
183 gen_kwargs={
--> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages],
185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in
186 self.config.languages] if not dl_manager.is_streaming else None,
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives(dl_manager, lang, format, split)
267 # for streaming case
268 def _download_audio_archives(dl_manager, lang, format, split):
--> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split)
270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths]
[~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives_paths(dl_manager, lang, format, split)
251 n_files_path = dl_manager.download(n_files_url)
252
--> 253 with open(n_files_path, "r", encoding="utf-8") as file:
254 n_files = int(file.read().strip()) # the file contains a number of archives
255
ValueError: I/O operation on closed file.
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
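A quick hedged sanity check for environments that may have picked up one of the yanked releases:
```python
import fsspec

# 2022.8.0 and 2022.8.1 were yanked; 2022.8.2 restored the expected behaviour
assert fsspec.__version__ not in ("2022.8.0", "2022.8.1"), (
    f"buggy fsspec release {fsspec.__version__}; run: pip install -U fsspec"
)
```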
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4961/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4961/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7521
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7521/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7521/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7521/events
|
https://github.com/huggingface/datasets/pull/7521
| 2,997,666,366
|
PR_kwDODunzps6SvEZp
| 7,521
|
fix: Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames (#7517)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"events_url": "https://api.github.com/users/giraffacarp/events{/privacy}",
"followers_url": "https://api.github.com/users/giraffacarp/followers",
"following_url": "https://api.github.com/users/giraffacarp/following{/other_user}",
"gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/giraffacarp",
"id": 73196164,
"login": "giraffacarp",
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"organizations_url": "https://api.github.com/users/giraffacarp/orgs",
"received_events_url": "https://api.github.com/users/giraffacarp/received_events",
"repos_url": "https://api.github.com/users/giraffacarp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/giraffacarp",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"@lhoestq let me know if you prefer to change the spark iterator so it outputs `bytes`"
] | 2025-04-15T21:23:58Z
| 2025-04-16T06:57:22Z
| null |
NONE
| null | null | null |
## Task
Support bytes-like objects (bytes and bytearray) in Features classes
### Description
The `Features` classes only accept `bytes` objects for binary data, not `bytearray`. This leads to errors when using `IterableDataset.from_spark()`, since Spark DataFrames contain `bytearray` objects, even though both `bytes` and `bytearray` are valid [*bytes-like objects* in Python](https://docs.python.org/3/glossary.html#term-bytes-like-object).
### Changes
- Updated `Features` classes to accept both `bytes` and `bytearray` types for binary data fields.
### Reasoning
- `bytes` and `bytearray` serve the same purpose for binary data, with the only difference being mutability.
- Modifying the Spark iterator to convert `bytearray` to `bytes` would be a workaround, not a true fix. I think the correct solution is to accept all bytes-like objects as input.
- This approach is more robust and future-proof, since Python 3.12+ provides a [standard way to check for the buffer protocol](https://docs.python.org/3/c-api/buffer.html#bufferobjects); see the sketch at the end of this description.
### Testing
- Added tests to cover `bytearray` inputs for image features.
### Related Issues
- Fixes: #7517
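A minimal sketch of the normalization idea (not the actual patch; the helper name is hypothetical):
```python
def _as_bytes(value):
    """Accept any bytes-like object and normalize it to immutable bytes."""
    if isinstance(value, (bytes, bytearray, memoryview)):
        return bytes(value)
    raise TypeError(f"expected a bytes-like object, got {type(value).__name__}")

# e.g. a Spark row may carry a bytearray for a binary image column
assert _as_bytes(bytearray(b"\x89PNG")) == b"\x89PNG"
```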
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7521/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7521/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7521.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7521",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7521.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7521"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5721/events
|
https://github.com/huggingface/datasets/issues/5721
| 1,659,680,682
|
I_kwDODunzps5i7Leq
| 5,721
|
Calling datasets.load_dataset("text" ...) results in a wrong split.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1841186?v=4",
"events_url": "https://api.github.com/users/cyrilzakka/events{/privacy}",
"followers_url": "https://api.github.com/users/cyrilzakka/followers",
"following_url": "https://api.github.com/users/cyrilzakka/following{/other_user}",
"gists_url": "https://api.github.com/users/cyrilzakka/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cyrilzakka",
"id": 1841186,
"login": "cyrilzakka",
"node_id": "MDQ6VXNlcjE4NDExODY=",
"organizations_url": "https://api.github.com/users/cyrilzakka/orgs",
"received_events_url": "https://api.github.com/users/cyrilzakka/received_events",
"repos_url": "https://api.github.com/users/cyrilzakka/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cyrilzakka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyrilzakka/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cyrilzakka",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2023-04-08T23:55:12Z
| 2023-04-08T23:55:12Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When creating a text dataset, the training split should have the bulk of the examples by default. Currently, testing does.
### Steps to reproduce the bug
I have a folder with 18K text files in it. Each text file essentially consists of a document or article scraped from the web. Calling the following code:
```
folder_path = "/home/cyril/Downloads/llama_dataset"
data = datasets.load_dataset("text", data_dir=folder_path)
data.save_to_disk("/home/cyril/Downloads/data.hf")
data = datasets.load_from_disk("/home/cyril/Downloads/data.hf")
print(data)
```
Results in the following split:
```
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 2114
})
test: Dataset({
features: ['text'],
num_rows: 200882
})
validation: Dataset({
features: ['text'],
num_rows: 152
})
})
```
It seems to me like the train/test/validation splits are assigned in the wrong order, since the test split is vastly larger than the train split.
### Expected behavior
Train split should have the bulk of the training examples.
### Environment info
datasets 2.11.0, python 3.10.6
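A plausible explanation and workaround (my addition, not from the original report): the packaged `text` builder infers split names from file-name keywords such as `train`, `test` and `valid`, so files whose names happen to contain those words are routed to the corresponding splits. Passing explicit `data_files` pins everything to `train`:
```
import glob

import datasets

# Path taken from the report above; explicit data_files bypasses
# file-name-based split inference, so every file lands in "train".
files = glob.glob("/home/cyril/Downloads/llama_dataset/*.txt")
data = datasets.load_dataset("text", data_files={"train": files})
print(data)
```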
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5721/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5721/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7125
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7125/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7125/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7125/events
|
https://github.com/huggingface/datasets/pull/7125
| 2,485,912,246
|
PR_kwDODunzps55Y4TM
| 7,125
|
Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7125). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005741 / 0.011353 (-0.005612) | 0.004011 / 0.011008 (-0.006998) | 0.063962 / 0.038508 (0.025454) | 0.031512 / 0.023109 (0.008403) | 0.242249 / 0.275898 (-0.033649) | 0.269601 / 0.323480 (-0.053879) | 0.004502 / 0.007986 (-0.003483) | 0.002835 / 0.004328 (-0.001494) | 0.049878 / 0.004250 (0.045628) | 0.048012 / 0.037052 (0.010959) | 0.250454 / 0.258489 (-0.008035) | 0.283266 / 0.293841 (-0.010575) | 0.030752 / 0.128546 (-0.097794) | 0.012655 / 0.075646 (-0.062991) | 0.211043 / 0.419271 (-0.208229) | 0.037165 / 0.043533 (-0.006367) | 0.246815 / 0.255139 (-0.008324) | 0.264306 / 0.283200 (-0.018893) | 0.018343 / 0.141683 (-0.123340) | 1.140452 / 1.452155 (-0.311702) | 1.214849 / 1.492716 (-0.277867) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098048 / 0.018006 (0.080042) | 0.292201 / 0.000490 (0.291712) | 0.000217 / 0.000200 (0.000017) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018732 / 0.037411 (-0.018679) | 0.062887 / 0.014526 (0.048361) | 0.074353 / 0.176557 (-0.102204) | 0.120794 / 0.737135 (-0.616341) | 0.077066 / 0.296338 (-0.219272) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276335 / 0.215209 (0.061126) | 2.722905 / 2.077655 (0.645250) | 1.423080 / 1.504120 (-0.081040) | 1.305443 / 1.541195 (-0.235752) | 1.342142 / 
1.468490 (-0.126348) | 0.741899 / 4.584777 (-3.842878) | 2.407567 / 3.745712 (-1.338145) | 3.070263 / 5.269862 (-2.199599) | 1.935732 / 4.565676 (-2.629944) | 0.081371 / 0.424275 (-0.342904) | 0.005207 / 0.007607 (-0.002401) | 0.328988 / 0.226044 (0.102943) | 3.240771 / 2.268929 (0.971842) | 1.801028 / 55.444624 (-53.643597) | 1.490593 / 6.876477 (-5.385884) | 1.521317 / 2.142072 (-0.620756) | 0.794051 / 4.805227 (-4.011176) | 0.136398 / 6.500664 (-6.364266) | 0.042902 / 0.075469 (-0.032567) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974186 / 1.841788 (-0.867602) | 12.280011 / 8.074308 (4.205703) | 9.453389 / 10.191392 (-0.738003) | 0.132627 / 0.680424 (-0.547797) | 0.014608 / 0.534201 (-0.519593) | 0.309298 / 0.579283 (-0.269985) | 0.275911 / 0.434364 (-0.158452) | 0.348261 / 0.540337 (-0.192077) | 0.439031 / 1.386936 (-0.947905) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006248 / 0.011353 (-0.005105) | 0.004369 / 0.011008 (-0.006639) | 0.050588 / 0.038508 (0.012080) | 0.032880 / 0.023109 (0.009771) | 0.268979 / 0.275898 (-0.006919) | 0.294714 / 0.323480 (-0.028766) | 0.004518 / 0.007986 (-0.003467) | 0.002995 / 0.004328 (-0.001333) | 0.048776 / 0.004250 (0.044525) | 0.041696 / 0.037052 (0.004644) | 0.283413 / 0.258489 (0.024924) | 0.322137 / 0.293841 (0.028296) | 0.032809 / 0.128546 (-0.095737) | 0.012559 / 0.075646 (-0.063087) | 0.060456 / 0.419271 (-0.358815) | 0.034564 / 0.043533 (-0.008968) | 0.267263 / 0.255139 (0.012124) | 0.292633 / 0.283200 (0.009434) | 0.019011 / 0.141683 (-0.122672) | 1.199820 / 1.452155 (-0.252335) | 1.251829 / 1.492716 (-0.240887) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097615 / 0.018006 (0.079609) | 0.313764 / 0.000490 (0.313274) | 0.000220 / 0.000200 (0.000020) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024365 / 0.037411 (-0.013046) | 0.089301 / 0.014526 (0.074775) | 0.092964 / 0.176557 (-0.083592) | 0.131724 / 0.737135 (-0.605412) | 0.094792 / 0.296338 (-0.201546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305119 / 0.215209 (0.089910) | 2.932192 / 2.077655 (0.854537) | 1.610573 / 1.504120 (0.106453) | 1.487502 / 1.541195 (-0.053693) | 1.533300 / 1.468490 (0.064810) | 0.717223 / 4.584777 (-3.867554) | 0.964402 / 3.745712 (-2.781310) | 3.111398 / 5.269862 (-2.158464) | 1.957942 / 4.565676 (-2.607734) | 0.079160 / 0.424275 (-0.345116) | 0.005639 / 0.007607 (-0.001968) | 0.358971 / 0.226044 (0.132927) | 3.564401 / 2.268929 (1.295472) | 2.043079 / 55.444624 (-53.401546) | 1.742681 / 6.876477 (-5.133795) | 1.784758 / 2.142072 (-0.357314) | 0.798508 / 4.805227 (-4.006719) | 0.133905 / 6.500664 (-6.366759) | 0.043008 / 0.075469 (-0.032461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031715 / 1.841788 (-0.810073) | 13.374312 / 8.074308 (5.300004) | 10.789098 / 10.191392 (0.597706) | 0.133663 / 0.680424 (-0.546761) | 0.016692 / 0.534201 (-0.517509) | 0.304716 / 0.579283 (-0.274567) | 0.129074 / 0.434364 (-0.305290) | 0.346440 / 0.540337 (-0.193897) | 0.464593 / 1.386936 (-0.922343) |\n\n</details>\n</details>\n\n\n"
] | 2024-08-26T05:09:35Z
| 2024-08-26T05:33:15Z
| 2024-08-26T05:27:09Z
|
MEMBER
| null | null | null |
Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7125/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7125/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7125.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7125",
"merged_at": "2024-08-26T05:27:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7125.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7125"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5682
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5682/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5682/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5682/events
|
https://github.com/huggingface/datasets/issues/5682
| 1,646,000,571
|
I_kwDODunzps5iG_m7
| 5,682
|
ValueError when passing ignore_verifications
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2023-03-29T15:00:30Z
| 2023-03-29T17:28:58Z
| 2023-03-29T17:28:58Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
When passing `ignore_verifications=True` to `load_dataset`, we get a ValueError:
```
ValueError: 'none' is not a valid VerificationMode
```
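For context, a sketch of the failing call and the non-deprecated spelling (assuming a `datasets` version where `verification_mode` exists):
```
from datasets import load_dataset

# The deprecated flag mapped to an invalid enum value:
# load_dataset("squad", ignore_verifications=True)  # ValueError: 'none' is not a valid VerificationMode

# The newer parameter takes "no_checks", "basic_checks" or "all_checks":
ds = load_dataset("squad", verification_mode="no_checks")
```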
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5682/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5682/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5106
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5106/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5106/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5106/events
|
https://github.com/huggingface/datasets/pull/5106
| 1,406,635,758
|
PR_kwDODunzps5ArM6G
| 5,106
|
Fix task template reload from dict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> Just wondering if there might be other data classes default values missed that could cause an issue... Apart from feature-like classes and tasks, I don't see any others though...\r\n\r\nI think we're good ! `asdict` is used on the DatasetInfo attributes like features, tasks etc. and they all support dict conversion properly now\r\n\r\n> And a question: but this information about the tasks is no longer being saved as YAML tags in the dataset card; won't be a problem with current datasets using task templates (with this information in their metadata JSON) once we replace the JSON by the YAML tags (which do not have this information about the task templates)?\r\n\r\nIn the long run we'll use the train_eval_index YAML tags instead, but I agree when removing the JSON files we should try to not break existing code that may rely on this"
] | 2022-10-12T18:33:49Z
| 2022-10-13T09:59:07Z
| 2022-10-13T09:56:51Z
|
MEMBER
| null | null | null |
Since #4926 the JSON dumps are simplified, which made task template dicts empty by default.
I fixed this by always including the task name, which is needed to reload a task from a dict.
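Illustratively, the round-trip that breaks when the task name is dropped (an assumed, simplified class, not the actual `datasets` source):
```
from dataclasses import asdict, dataclass

@dataclass
class TaskTemplate:
    task: str = "text-classification"  # must always be serialized

template = TaskTemplate()
dump = asdict(template)
# Reloading dispatches on dump["task"]; an empty dict ({}) cannot be
# mapped back to a template class, hence the fix.
assert dump["task"] == "text-classification"
```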
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5106/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5106/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5106.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5106",
"merged_at": "2022-10-13T09:56:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5106.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5106"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4602
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4602/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4602/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4602/events
|
https://github.com/huggingface/datasets/pull/4602
| 1,289,950,379
|
PR_kwDODunzps46obqi
| 4,602
|
Upgrade setuptools in windows CI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-30T10:48:41Z
| 2023-09-24T10:05:10Z
| 2022-06-30T12:46:17Z
|
MEMBER
| null | null | null |
The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.
In particular it seems that building the wheels fails. Here is an example of the logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
```
Hopefully this fixes the issue.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4602/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4602/timeline
| null | null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4602.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4602",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4602.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4602"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5448
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5448/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5448/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5448/events
|
https://github.com/huggingface/datasets/issues/5448
| 1,550,618,514
|
I_kwDODunzps5cbI-S
| 5,448
|
Support fsspec 2023.1.0 in CI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2023-01-20T10:26:31Z
| 2023-01-20T13:26:05Z
| 2023-01-20T13:26:05Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Once we find out the root cause of:
- #5445
we should revert the temporary pin on fsspec introduced by:
- #5447
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5448/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5448/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6686
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6686/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6686/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6686/events
|
https://github.com/huggingface/datasets/issues/6686
| 2,147,795,103
|
I_kwDODunzps6ABMCf
| 6,686
|
Question: Is there any way for uploading a large image dataset?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37367987?v=4",
"events_url": "https://api.github.com/users/zhjohnchan/events{/privacy}",
"followers_url": "https://api.github.com/users/zhjohnchan/followers",
"following_url": "https://api.github.com/users/zhjohnchan/following{/other_user}",
"gists_url": "https://api.github.com/users/zhjohnchan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhjohnchan",
"id": 37367987,
"login": "zhjohnchan",
"node_id": "MDQ6VXNlcjM3MzY3OTg3",
"organizations_url": "https://api.github.com/users/zhjohnchan/orgs",
"received_events_url": "https://api.github.com/users/zhjohnchan/received_events",
"repos_url": "https://api.github.com/users/zhjohnchan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhjohnchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhjohnchan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhjohnchan",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"```\r\nimport pandas as pd\r\nfrom datasets import Dataset, Image\r\n\r\n# Read the CSV file\r\ndata = pd.read_csv(\"XXXX.csv\")\r\n\r\n# Create a Hugging Face Dataset\r\ndataset = Dataset.from_pandas(data)\r\ndataset = dataset.cast_column(\"file_name\", Image())\r\n\r\n# Upload to Hugging Face Hub (make sure authentication is set up)\r\ndataset.push_to_hub(\"XXXXX\"\")\r\n```\r\n\r\nstuck in \"Casting the dataset\r\n\r\n\"\r\n"
] | 2024-02-21T22:07:21Z
| 2024-05-02T03:44:59Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
I am uploading an image dataset like this:
```
dataset = load_dataset(
"json",
data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_size="1GB")
```
where it takes a long time in the `Map` process. Do you think I can use multi-processing to map all the image data into memory first? For the `Map()` function, I can set `num_proc`, but for `push_to_hub` and `cast_column` I cannot find an equivalent option.
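One possible speed-up (my suggestion, under the assumption that `Dataset.cast` accepts `num_proc` in this `datasets` version, while `cast_column` does not expose it): cast each split via a full `Features` object instead.
```
from datasets import Image, Sequence

# Sketch: decode-and-cast each split with multiple processes.
for split in dataset:
    features = dataset[split].features.copy()
    features["images"] = Sequence(Image())
    dataset[split] = dataset[split].cast(features, num_proc=8)
```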
Thanks in advance!
Best,
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6686/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6686/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4653
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4653/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4653/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4653/events
|
https://github.com/huggingface/datasets/issues/4653
| 1,296,702,834
|
I_kwDODunzps5NSh1y
| 4,653
|
Add Altlex dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
closed
| false
| null |
[] | null |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/altlex)."
] | 2022-07-07T02:23:02Z
| 2022-07-14T02:12:39Z
| 2022-07-14T02:12:39Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Adding a Dataset
- **Name:** *Altlex*
- **Description:** *Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles."*
- **Paper:** *https://aclanthology.org/P16-1135.pdf*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4653/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4653/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6136
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6136/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6136/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6136/events
|
https://github.com/huggingface/datasets/issues/6136
| 1,844,887,866
|
I_kwDODunzps5t9sE6
| 6,136
|
CI check_code_quality error: E721 Do not compare types, use `isinstance()`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2023-08-10T10:19:50Z
| 2023-08-10T11:22:58Z
| 2023-08-10T11:22:58Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
After latest release of `ruff` (https://pypi.org/project/ruff/0.0.284/), we get the following CI error:
```
src/datasets/utils/py_utils.py:689:12: E721 Do not compare types, use `isinstance()`
```
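The fix pattern ruff asks for (an illustrative example, not the exact line in `py_utils.py`):
```
x = 3.14

# Flagged by E721:
# if type(x) == float:
#     ...

# Preferred:
if isinstance(x, float):
    print("x is a float")
```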
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6136/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6136/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6271
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6271/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6271/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6271/events
|
https://github.com/huggingface/datasets/issues/6271
| 1,920,420,295
|
I_kwDODunzps5yd0nH
| 6,271
|
Overwriting Split overwrites data but not metadata, corrupting dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13859249?v=4",
"events_url": "https://api.github.com/users/govindrai/events{/privacy}",
"followers_url": "https://api.github.com/users/govindrai/followers",
"following_url": "https://api.github.com/users/govindrai/following{/other_user}",
"gists_url": "https://api.github.com/users/govindrai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/govindrai",
"id": 13859249,
"login": "govindrai",
"node_id": "MDQ6VXNlcjEzODU5MjQ5",
"organizations_url": "https://api.github.com/users/govindrai/orgs",
"received_events_url": "https://api.github.com/users/govindrai/received_events",
"repos_url": "https://api.github.com/users/govindrai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/govindrai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/govindrai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/govindrai",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-09-30T22:37:31Z
| 2023-10-16T13:30:50Z
| 2023-10-16T13:30:50Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I want to be able to overwrite/update/delete splits in my dataset. Currently the only way to do so is to manually go into the dataset and delete the split. If I try to overwrite programmatically, I end up in an error state and (somewhat) corrupt the dataset. Read below.
**Current Behavior**
When I push to an existing split I get this error:
`ValueError: Split complexRoofLocation_01Apr2023_to_31May2023test already present`
This seems to suggest that the library doesn't support overwriting splits.
**Potential Bug**
What's strange is that datasets, despite the operation erroring out with the ValueError above, does, in fact, overwrite the split:
`Pushing dataset shards to the dataset hub: 100% [.....................] 1/1 [00:00<00:00, 55.04it/s]`
Even though you got an error message and your code fails, your dataset is now changed. That seems like a bug. Either don't change the dataset, or don't throw the error and allow the script to proceed.
**Additional Bug**
While it overwrites the split, it doesn't overwrite the split's information. Because of this, when you pull down the dataset you may get a `NonMatchingSplitsSizesError` if the size of the dataset changed during the overwrite. For example, my original split had 5 rows, but my overwrite only had 4. Then when I try to download the dataset, I get a `NonMatchingSplitsSizesError` because the dataset's data.json states there are 5 rows but only 4 exist in the split.
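If you only need to read the now-inconsistent dataset, skipping verification is a possible stopgap (my addition; the repo id below is a placeholder):
```
from datasets import load_dataset

# Placeholder repo id; "no_checks" bypasses the split-size check that
# raises NonMatchingSplitsSizesError on the stale metadata.
ds = load_dataset("user/your-dataset", verification_mode="no_checks")
```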
**Expected Behavior**
This corrupts the dataset, rendering it unusable until you intervene manually. Either the library should let the overwrite happen (and also update the metadata) or it shouldn't do anything.
### Steps to reproduce the bug
[Colab Notebook](https://colab.research.google.com/drive/1bqVkD06Ngs9MQNdSk_ygCG6y1UqXA4pC?usp=sharing)
### Expected behavior
The split should be overwritten and I should be able to use the new version of the dataset without issue.
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6271/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6271/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7126
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7126/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7126/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7126/events
|
https://github.com/huggingface/datasets/pull/7126
| 2,485,939,495
|
PR_kwDODunzps55Y-Ws
| 7,126
|
Disable implicit token in CI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7126). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003428 / 0.011008 (-0.007580) | 0.062673 / 0.038508 (0.024164) | 0.030111 / 0.023109 (0.007002) | 0.238017 / 0.275898 (-0.037881) | 0.262655 / 0.323480 (-0.060825) | 0.003015 / 0.007986 (-0.004971) | 0.002664 / 0.004328 (-0.001665) | 0.050010 / 0.004250 (0.045759) | 0.045620 / 0.037052 (0.008567) | 0.251800 / 0.258489 (-0.006689) | 0.278829 / 0.293841 (-0.015011) | 0.029838 / 0.128546 (-0.098709) | 0.011703 / 0.075646 (-0.063943) | 0.204503 / 0.419271 (-0.214768) | 0.036173 / 0.043533 (-0.007359) | 0.242850 / 0.255139 (-0.012289) | 0.263811 / 0.283200 (-0.019389) | 0.019027 / 0.141683 (-0.122656) | 1.168028 / 1.452155 (-0.284126) | 1.208975 / 1.492716 (-0.283742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091309 / 0.018006 (0.073303) | 0.299583 / 0.000490 (0.299093) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018451 / 0.037411 (-0.018960) | 0.062516 / 0.014526 (0.047991) | 0.073983 / 0.176557 (-0.102573) | 0.120952 / 0.737135 (-0.616184) | 0.075275 / 0.296338 (-0.221063) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286870 / 0.215209 (0.071661) | 2.810498 / 2.077655 (0.732843) | 1.490028 / 1.504120 (-0.014092) | 1.362249 / 1.541195 (-0.178946) | 1.368939 / 
1.468490 (-0.099551) | 0.736643 / 4.584777 (-3.848134) | 2.414237 / 3.745712 (-1.331475) | 2.898911 / 5.269862 (-2.370951) | 1.840630 / 4.565676 (-2.725047) | 0.077872 / 0.424275 (-0.346403) | 0.005087 / 0.007607 (-0.002520) | 0.337054 / 0.226044 (0.111009) | 3.390734 / 2.268929 (1.121806) | 1.844174 / 55.444624 (-53.600451) | 1.532741 / 6.876477 (-5.343736) | 1.551650 / 2.142072 (-0.590422) | 0.778642 / 4.805227 (-4.026585) | 0.131899 / 6.500664 (-6.368765) | 0.041801 / 0.075469 (-0.033668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.958362 / 1.841788 (-0.883425) | 11.323330 / 8.074308 (3.249022) | 9.396199 / 10.191392 (-0.795193) | 0.131154 / 0.680424 (-0.549270) | 0.014705 / 0.534201 (-0.519496) | 0.302424 / 0.579283 (-0.276859) | 0.261870 / 0.434364 (-0.172494) | 0.340788 / 0.540337 (-0.199550) | 0.433360 / 1.386936 (-0.953576) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005571 / 0.011353 (-0.005782) | 0.003388 / 0.011008 (-0.007621) | 0.050366 / 0.038508 (0.011858) | 0.032633 / 0.023109 (0.009524) | 0.261847 / 0.275898 (-0.014051) | 0.292197 / 0.323480 (-0.031283) | 0.005070 / 0.007986 (-0.002916) | 0.002753 / 0.004328 (-0.001575) | 0.048613 / 0.004250 (0.044363) | 0.040272 / 0.037052 (0.003219) | 0.275441 / 0.258489 (0.016952) | 0.309175 / 0.293841 (0.015334) | 0.032403 / 0.128546 (-0.096143) | 0.011734 / 0.075646 (-0.063912) | 0.059532 / 0.419271 (-0.359740) | 0.033886 / 0.043533 (-0.009647) | 0.263453 / 0.255139 (0.008314) | 0.281997 / 0.283200 (-0.001203) | 0.018522 / 0.141683 (-0.123161) | 1.150364 / 1.452155 (-0.301791) | 1.204090 / 1.492716 (-0.288627) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093129 / 0.018006 (0.075123) | 0.303691 / 0.000490 (0.303201) | 0.000231 / 0.000200 (0.000031) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022084 / 0.037411 (-0.015327) | 0.076354 / 0.014526 (0.061828) | 0.087710 / 0.176557 (-0.088847) | 0.128907 / 0.737135 (-0.608228) | 0.088603 / 0.296338 (-0.207735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301161 / 0.215209 (0.085952) | 2.954780 / 2.077655 (0.877125) | 1.601366 / 1.504120 (0.097246) | 1.477225 / 1.541195 (-0.063970) | 1.482355 / 1.468490 (0.013865) | 0.722461 / 4.584777 (-3.862315) | 0.981439 / 3.745712 (-2.764273) | 2.927006 / 5.269862 (-2.342856) | 1.884444 / 4.565676 (-2.681233) | 0.079044 / 0.424275 (-0.345231) | 0.005530 / 0.007607 (-0.002077) | 0.347082 / 0.226044 (0.121037) | 3.491984 / 2.268929 (1.223056) | 1.944317 / 55.444624 (-53.500307) | 1.645792 / 6.876477 (-5.230685) | 1.649506 / 2.142072 (-0.492567) | 0.800822 / 4.805227 (-4.004405) | 0.133936 / 6.500664 (-6.366729) | 0.041198 / 0.075469 (-0.034271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.029764 / 1.841788 (-0.812024) | 11.928840 / 8.074308 (3.854532) | 10.021390 / 10.191392 (-0.170002) | 0.141608 / 0.680424 (-0.538816) | 0.014921 / 0.534201 (-0.519280) | 0.302050 / 0.579283 (-0.277233) | 0.124151 / 0.434364 (-0.310213) | 0.347143 / 0.540337 (-0.193195) | 0.467649 / 1.386936 (-0.919287) |\n\n</details>\n</details>\n\n\n"
] | 2024-08-26T05:29:46Z
| 2024-08-26T06:05:01Z
| 2024-08-26T05:59:15Z
|
MEMBER
| null | null | null |
Disable implicit token in CI.
This PR allows running CI tests locally without implicitly using the local user HF token. For example, run locally the tests in:
- #7124
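For reference, a minimal sketch of one way to achieve this (not necessarily the exact mechanism of this PR): `huggingface_hub` honors the `HF_HUB_DISABLE_IMPLICIT_TOKEN` environment variable, so a `conftest.py` fixture can set it for the whole test session:
```python
# conftest.py (sketch)
import pytest

@pytest.fixture(autouse=True)
def _no_implicit_token(monkeypatch):
    # Prevent huggingface_hub from silently picking up the local user token
    monkeypatch.setenv("HF_HUB_DISABLE_IMPLICIT_TOKEN", "1")
    monkeypatch.delenv("HF_TOKEN", raising=False)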
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7126/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7126/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7126.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7126",
"merged_at": "2024-08-26T05:59:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7126.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7126"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4798
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4798/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4798/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4798/events
|
https://github.com/huggingface/datasets/pull/4798
| 1,330,699,942
|
PR_kwDODunzps48wVEG
| 4,798
|
Shard generator
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/43296932?v=4",
"events_url": "https://api.github.com/users/marianna13/events{/privacy}",
"followers_url": "https://api.github.com/users/marianna13/followers",
"following_url": "https://api.github.com/users/marianna13/following{/other_user}",
"gists_url": "https://api.github.com/users/marianna13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marianna13",
"id": 43296932,
"login": "marianna13",
"node_id": "MDQ6VXNlcjQzMjk2OTMy",
"organizations_url": "https://api.github.com/users/marianna13/orgs",
"received_events_url": "https://api.github.com/users/marianna13/received_events",
"repos_url": "https://api.github.com/users/marianna13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marianna13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marianna13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marianna13",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi, thanks!\r\n\r\n> I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that will allow to \"split\" these large datasets into chunks with equal size\r\n\r\n`map`, the method we use for processing in `datasets`, already does that if `batched=True`. And you can control the batch size with `batch_size`.\r\n\r\n> Even better - be able to run through these chunks one by one in simple and convenient way\r\n\r\nIt's not hard to do this \"manually\" with the existing API:\r\n```python\r\nbatch_size = <BATCH_SIZE>\r\nfor i in range(len(dset) // batch_size)\r\n shard = dset[i * batch_size:(i+1) * batch_size] # a dict of lists\r\n shard = Dataset.from_dict(shard)\r\n```\r\n(should be of similar performance to your implementation)\r\n\r\nStill, I think an API like that could be useful if implemented efficiently (see [this](https://discuss.huggingface.co/t/why-is-it-so-slow-to-access-data-through-iteration-with-hugginface-dataset/20385) discussion to understand what's the issue with `select`/`__getitem__` on which your implementation relies on), which can be done with `pa.Table.to_reader` in PyArrow 8.0.0+, .\r\n\r\n@lhoestq @albertvillanova wdyt? We could use such API to efficiently iterate over the batches in `map` before processing them.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4798). All of your documentation changes will be reflected on that endpoint.",
"This is more efficient since it doesn't bring the data in memory:\r\n```python\r\nfor i in range(len(dset) // batch_size)\r\n start = i * batch_size\r\n end = min((i+1) * batch_size, len(dset))\r\n shard = dset.select(range(start, end))\r\n```\r\n\r\n@marianna13 can you give more details on when it would be handy to have this shard generator ?",
"> This is more efficient since it doesn't bring the data in memory:\r\n> \r\n> ```python\r\n> for i in range(len(dset) // batch_size)\r\n> start = i * batch_size\r\n> end = min((i+1) * batch_size, len(dset))\r\n> shard = dset.select(range(start, end))\r\n> ```\r\n> \r\n> @marianna13 can you give more details on when it would be handy to have this shard generator ?\r\n\r\nSure! I used such generator when I needed to process a very large dataset (>1TB) in parallel, I've found out empirically that it's much more efficient to do that by processing only one part of the dataset with the shard generator. I tried to use a map with batching but it causesd oom errors, I tried to use the normal shard and here's what I came up with. So I thought it might be helpful to someone else!",
"I see thanks ! `map` should work just fine even at this scale, feel free to open an issue if you'd like to discuss your OOM issue.\r\n\r\nRegarding `shard_generator`, since it is pretty straightforward to get shards I'm not sure we need that extra Dataset method",
"Hi again! We've just added `_iter_batches(batch_size)` to the `Dataset` API for fast iteration over batches/chunks, so I think we can close this PR. Compared to this implementation, `_iter_batches` leverages `pa.Table.to_reader` for chunking, which makes it significantly faster."
] | 2022-08-06T09:14:06Z
| 2022-10-03T15:35:10Z
| 2022-10-03T15:35:10Z
|
NONE
| null | null | null |
Hi everyone! I was using Hugging Face datasets to process some very large datasets and found that it would be quite handy to have a feature that allows "splitting" these large datasets into equal-size chunks. Even better: being able to run through these chunks one by one in a simple and convenient way. So I decided to add a method called `shard_generator()` to the main `Dataset` class. It works similarly to the `shard` method but returns a generator of equal-size datasets (defined by the `shard_size` attribute).
Example:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds
Dataset({
features: ['text', 'label'],
num_rows: 1066
})
>>> next(ds.shard_generator(300))
Dataset({
features: ['text', 'label'],
num_rows: 300
})
```
I hope it can be helpful to someone. Thanks!
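For reference, a minimal sketch of what such a generator can look like on top of the existing API (using `select` so shards are not materialized in memory; `dset` is any `Dataset`):
```python
from datasets import Dataset

def shard_generator(dset: Dataset, shard_size: int):
    """Yield contiguous shards of at most `shard_size` rows."""
    for start in range(0, len(dset), shard_size):
        end = min(start + shard_size, len(dset))
        yield dset.select(range(start, end))
```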
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4798/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4798/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4798",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4798"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5626
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5626/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5626/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5626/events
|
https://github.com/huggingface/datasets/pull/5626
| 1,619,252,984
|
PR_kwDODunzps5LyBT4
| 5,626
|
Support streaming datasets with numpy.load
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006607 / 0.011353 (-0.004746) | 0.004610 / 0.011008 (-0.006398) | 0.100673 / 0.038508 (0.062165) | 0.027739 / 0.023109 (0.004630) | 0.326290 / 0.275898 (0.050392) | 0.344296 / 0.323480 (0.020816) | 0.005021 / 0.007986 (-0.002964) | 0.003327 / 0.004328 (-0.001002) | 0.077779 / 0.004250 (0.073529) | 0.040237 / 0.037052 (0.003185) | 0.308992 / 0.258489 (0.050503) | 0.355017 / 0.293841 (0.061176) | 0.031203 / 0.128546 (-0.097343) | 0.011749 / 0.075646 (-0.063898) | 0.327431 / 0.419271 (-0.091840) | 0.043033 / 0.043533 (-0.000500) | 0.309713 / 0.255139 (0.054574) | 0.336550 / 0.283200 (0.053351) | 0.084891 / 0.141683 (-0.056792) | 1.555641 / 1.452155 (0.103487) | 1.613214 / 1.492716 (0.120497) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216269 / 0.018006 (0.198262) | 0.422066 / 0.000490 (0.421576) | 0.004055 / 0.000200 (0.003855) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023759 / 0.037411 (-0.013652) | 0.096937 / 0.014526 (0.082411) | 0.105312 / 0.176557 (-0.071244) | 0.167840 / 0.737135 (-0.569295) | 0.107998 / 0.296338 (-0.188340) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458315 / 0.215209 (0.243106) | 4.584803 / 2.077655 (2.507148) | 2.193641 / 1.504120 (0.689521) | 1.981494 / 1.541195 (0.440299) | 2.020358 / 1.468490 
(0.551868) | 0.696763 / 4.584777 (-3.888014) | 3.388432 / 3.745712 (-0.357280) | 3.335038 / 5.269862 (-1.934823) | 1.648551 / 4.565676 (-2.917126) | 0.083753 / 0.424275 (-0.340522) | 0.012855 / 0.007607 (0.005248) | 0.562331 / 0.226044 (0.336286) | 5.649259 / 2.268929 (3.380330) | 2.680309 / 55.444624 (-52.764315) | 2.319297 / 6.876477 (-4.557180) | 2.444016 / 2.142072 (0.301943) | 0.809821 / 4.805227 (-3.995407) | 0.152855 / 6.500664 (-6.347809) | 0.067756 / 0.075469 (-0.007713) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.213318 / 1.841788 (-0.628470) | 13.887822 / 8.074308 (5.813514) | 14.276325 / 10.191392 (4.084933) | 0.156227 / 0.680424 (-0.524197) | 0.016377 / 0.534201 (-0.517824) | 0.377080 / 0.579283 (-0.202203) | 0.386561 / 0.434364 (-0.047803) | 0.435631 / 0.540337 (-0.104707) | 0.520863 / 1.386936 (-0.866073) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006740 / 0.011353 (-0.004613) | 0.004704 / 0.011008 (-0.006304) | 0.076840 / 0.038508 (0.038331) | 0.027519 / 0.023109 (0.004409) | 0.343219 / 0.275898 (0.067321) | 0.376810 / 0.323480 (0.053330) | 0.005048 / 0.007986 (-0.002938) | 0.003356 / 0.004328 (-0.000972) | 0.077098 / 0.004250 (0.072848) | 0.038601 / 0.037052 (0.001548) | 0.345723 / 0.258489 (0.087233) | 0.388635 / 0.293841 (0.094794) | 0.033612 / 0.128546 (-0.094934) | 0.011689 / 0.075646 (-0.063957) | 0.086446 / 0.419271 (-0.332825) | 0.044390 / 0.043533 (0.000857) | 0.343763 / 0.255139 (0.088624) | 0.368591 / 0.283200 (0.085392) | 0.091605 / 0.141683 (-0.050078) | 1.478615 / 1.452155 (0.026461) | 1.580858 / 1.492716 (0.088142) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223547 / 0.018006 (0.205541) | 0.411243 / 0.000490 (0.410753) | 0.000916 / 0.000200 (0.000716) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025223 / 0.037411 (-0.012189) | 0.100970 / 0.014526 (0.086445) | 0.108178 / 0.176557 (-0.068378) | 0.156827 / 0.737135 (-0.580308) | 0.111431 / 0.296338 (-0.184907) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434168 / 0.215209 (0.218959) | 4.361874 / 2.077655 (2.284219) | 2.060735 / 1.504120 (0.556615) | 1.861100 / 1.541195 (0.319906) | 1.920692 / 1.468490 (0.452202) | 0.697909 / 4.584777 (-3.886868) | 3.477036 / 3.745712 (-0.268676) | 3.002469 / 5.269862 (-2.267392) | 1.449325 / 4.565676 (-3.116351) | 0.083034 / 0.424275 (-0.341241) | 0.012805 / 0.007607 (0.005198) | 0.531391 / 0.226044 (0.305347) | 5.323015 / 2.268929 (3.054086) | 2.488605 / 55.444624 (-52.956020) | 2.158254 / 6.876477 (-4.718222) | 2.189633 / 2.142072 (0.047560) | 0.805972 / 4.805227 (-3.999256) | 0.153105 / 6.500664 (-6.347559) | 0.068909 / 0.075469 (-0.006561) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276851 / 1.841788 (-0.564937) | 14.431510 / 8.074308 (6.357202) | 14.544788 / 10.191392 (4.353396) | 0.146589 / 0.680424 (-0.533835) | 0.016890 / 0.534201 (-0.517311) | 0.379897 / 0.579283 (-0.199387) | 0.389153 / 0.434364 (-0.045211) | 0.440097 / 0.540337 (-0.100241) | 0.524191 / 1.386936 (-0.862745) |\n\n</details>\n</details>\n\n\n"
] | 2023-03-10T16:33:39Z
| 2023-03-21T06:36:05Z
| 2023-03-21T06:28:54Z
|
MEMBER
| null | null | null |
Support streaming datasets with `numpy.load`.
See: https://huggingface.co/datasets/qgallouedec/gia_dataset/discussions/1
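The underlying idea is that `numpy.load` accepts a file-like object, so a remote `.npy` file can be read through `fsspec` without downloading it first. A sketch of that pattern, independent of the internals of this PR (`https://example.com/data.npy` is a placeholder URL):
```python
import fsspec
import numpy as np

# Open the remote file as a seekable file-like object and let numpy read it
with fsspec.open("https://example.com/data.npy", "rb") as f:
    arr = np.load(f)
print(arr.shape)
```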
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5626/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5626/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5626.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5626",
"merged_at": "2023-03-21T06:28:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5626.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5626"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7528
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7528/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7528/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7528/events
|
https://github.com/huggingface/datasets/issues/7528
| 3,006,433,485
|
I_kwDODunzps6zMojN
| 7,528
|
Data Studio Error: Convert JSONL incorrectly
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/144962041?v=4",
"events_url": "https://api.github.com/users/zxccade/events{/privacy}",
"followers_url": "https://api.github.com/users/zxccade/followers",
"following_url": "https://api.github.com/users/zxccade/following{/other_user}",
"gists_url": "https://api.github.com/users/zxccade/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zxccade",
"id": 144962041,
"login": "zxccade",
"node_id": "U_kgDOCKPx-Q",
"organizations_url": "https://api.github.com/users/zxccade/orgs",
"received_events_url": "https://api.github.com/users/zxccade/received_events",
"repos_url": "https://api.github.com/users/zxccade/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zxccade/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zxccade/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zxccade",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-04-19T13:21:44Z
| 2025-04-19T13:21:44Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hi there,
I uploaded a dataset here: https://huggingface.co/datasets/V-STaR-Bench/V-STaR, but I found that Data Studio incorrectly converts the "bboxes" values for the whole dataset. Therefore, anyone who downloads the dataset via the API gets the wrong "bboxes" values in the data file.
Could you help me address the issue?
Many thanks,
### Steps to reproduce the bug
The JSONL file [V_STaR_test_release.jsonl](https://huggingface.co/datasets/V-STaR-Bench/V-STaR/blob/main/V_STaR_test_release.jsonl) has the correct "bboxes" values for every sample.
But in Data Studio the "bboxes" values have changed, and loading the dataset via the API also returns the wrong values.
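A minimal way to confirm the mismatch, assuming the JSONL file has been downloaded locally and the split name is `test` (both assumptions):
```python
import json
from datasets import load_dataset

# assumes V_STaR_test_release.jsonl was downloaded next to this script
with open("V_STaR_test_release.jsonl") as f:
    first_raw = json.loads(f.readline())

ds = load_dataset("V-STaR-Bench/V-STaR", split="test")  # split name assumed

print(first_raw["bboxes"])  # correct values from the raw JSONL
print(ds[0]["bboxes"])      # values as converted by the loader
```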
### Expected behavior
The bug should be fixed so that the dataset can be downloaded with the correct "bboxes" values, matching the JSONL file.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-427.22.1.el9_4.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.16
- `huggingface_hub` version: 0.29.3
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2023.10.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7528/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7528/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6910
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6910/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6910/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6910/events
|
https://github.com/huggingface/datasets/pull/6910
| 2,307,570,084
|
PR_kwDODunzps5wC2An
| 6,910
|
Fix wrong type hints in data_files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6910). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005135 / 0.011353 (-0.006218) | 0.003757 / 0.011008 (-0.007251) | 0.063122 / 0.038508 (0.024614) | 0.029837 / 0.023109 (0.006727) | 0.246120 / 0.275898 (-0.029778) | 0.268529 / 0.323480 (-0.054951) | 0.004136 / 0.007986 (-0.003849) | 0.002650 / 0.004328 (-0.001678) | 0.048749 / 0.004250 (0.044499) | 0.045279 / 0.037052 (0.008226) | 0.257970 / 0.258489 (-0.000519) | 0.285993 / 0.293841 (-0.007848) | 0.027612 / 0.128546 (-0.100935) | 0.010175 / 0.075646 (-0.065471) | 0.207373 / 0.419271 (-0.211899) | 0.037672 / 0.043533 (-0.005861) | 0.249603 / 0.255139 (-0.005536) | 0.271081 / 0.283200 (-0.012119) | 0.018174 / 0.141683 (-0.123509) | 1.116703 / 1.452155 (-0.335452) | 1.169261 / 1.492716 (-0.323455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095161 / 0.018006 (0.077155) | 0.301112 / 0.000490 (0.300623) | 0.000221 / 0.000200 (0.000021) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023218 / 0.037411 (-0.014193) | 0.063125 / 0.014526 (0.048599) | 0.075857 / 0.176557 (-0.100699) | 0.137922 / 0.737135 (-0.599213) | 0.076989 / 0.296338 (-0.219349) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.279272 / 0.215209 (0.064063) | 2.776463 / 2.077655 (0.698809) | 1.472220 / 1.504120 (-0.031900) | 1.347105 / 1.541195 (-0.194090) | 1.361014 / 
1.468490 (-0.107476) | 0.589233 / 4.584777 (-3.995544) | 2.395212 / 3.745712 (-1.350500) | 2.794855 / 5.269862 (-2.475007) | 1.698350 / 4.565676 (-2.867327) | 0.063328 / 0.424275 (-0.360947) | 0.005020 / 0.007607 (-0.002588) | 0.335872 / 0.226044 (0.109828) | 3.293486 / 2.268929 (1.024558) | 1.837270 / 55.444624 (-53.607354) | 1.535694 / 6.876477 (-5.340782) | 1.559696 / 2.142072 (-0.582376) | 0.639302 / 4.805227 (-4.165925) | 0.116554 / 6.500664 (-6.384110) | 0.042305 / 0.075469 (-0.033164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.971562 / 1.841788 (-0.870226) | 11.710500 / 8.074308 (3.636192) | 9.505935 / 10.191392 (-0.685457) | 0.139161 / 0.680424 (-0.541263) | 0.014351 / 0.534201 (-0.519850) | 0.285790 / 0.579283 (-0.293493) | 0.265718 / 0.434364 (-0.168646) | 0.323558 / 0.540337 (-0.216780) | 0.412635 / 1.386936 (-0.974301) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005987 / 0.011353 (-0.005366) | 0.003787 / 0.011008 (-0.007221) | 0.049839 / 0.038508 (0.011331) | 0.032817 / 0.023109 (0.009708) | 0.268304 / 0.275898 (-0.007594) | 0.303409 / 0.323480 (-0.020071) | 0.004924 / 0.007986 (-0.003061) | 0.002740 / 0.004328 (-0.001589) | 0.048906 / 0.004250 (0.044655) | 0.044266 / 0.037052 (0.007213) | 0.290506 / 0.258489 (0.032017) | 0.314124 / 0.293841 (0.020283) | 0.030242 / 0.128546 (-0.098304) | 0.010555 / 0.075646 (-0.065091) | 0.058849 / 0.419271 (-0.360423) | 0.033540 / 0.043533 (-0.009993) | 0.267833 / 0.255139 (0.012694) | 0.291056 / 0.283200 (0.007857) | 0.018611 / 0.141683 (-0.123072) | 1.137620 / 1.452155 (-0.314534) | 1.199554 / 1.492716 (-0.293162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096716 / 0.018006 (0.078709) | 0.302033 / 0.000490 (0.301543) | 0.000217 / 0.000200 (0.000017) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023208 / 0.037411 (-0.014203) | 0.076231 / 0.014526 (0.061705) | 0.088672 / 0.176557 (-0.087884) | 0.129033 / 0.737135 (-0.608103) | 0.090709 / 0.296338 (-0.205630) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297033 / 0.215209 (0.081824) | 2.951181 / 2.077655 (0.873526) | 1.567690 / 1.504120 (0.063570) | 1.436809 / 1.541195 (-0.104385) | 1.469696 / 1.468490 (0.001206) | 0.567963 / 4.584777 (-4.016813) | 0.954168 / 3.745712 (-2.791544) | 2.700473 / 5.269862 (-2.569389) | 1.742144 / 4.565676 (-2.823532) | 0.065027 / 0.424275 (-0.359248) | 0.005319 / 0.007607 (-0.002288) | 0.346459 / 0.226044 (0.120415) | 3.446117 / 2.268929 (1.177189) | 1.953142 / 55.444624 (-53.491483) | 1.639131 / 6.876477 (-5.237346) | 1.830664 / 2.142072 (-0.311409) | 0.657807 / 4.805227 (-4.147420) | 0.117987 / 6.500664 (-6.382678) | 0.040726 / 0.075469 (-0.034744) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.992666 / 1.841788 (-0.849122) | 12.305377 / 8.074308 (4.231069) | 10.274829 / 10.191392 (0.083437) | 0.141731 / 0.680424 (-0.538692) | 0.015100 / 0.534201 (-0.519101) | 0.282298 / 0.579283 (-0.296985) | 0.124301 / 0.434364 (-0.310063) | 0.320914 / 0.540337 (-0.219424) | 0.445855 / 1.386936 (-0.941081) |\n\n</details>\n</details>\n\n\n"
] | 2024-05-21T07:41:09Z
| 2024-05-23T06:04:05Z
| 2024-05-23T05:58:05Z
|
MEMBER
| null | null | null |
Fix wrong type hints in data_files introduced in:
- #6493
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6910/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6910/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6910.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6910",
"merged_at": "2024-05-23T05:58:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6910.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6910"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4837
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4837/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4837/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4837/events
|
https://github.com/huggingface/datasets/pull/4837
| 1,337,079,723
|
PR_kwDODunzps49Fb6l
| 4,837
|
Add support for CSV metadata files to ImageFolder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool thanks ! Maybe let's include this change after the refactoring from FolderBasedBuilder in #3963 to avoid dealing with too many unpleasant conflicts ?",
"@lhoestq I resolved the conflicts (AudioFolder also supports CSV metadata now). Let me know what you think.\r\n",
"@lhoestq Thanks for the suggestion! Indeed it makes more sense to use CSV as the default format in the folder-based builders."
] | 2022-08-12T11:19:18Z
| 2022-08-31T12:01:27Z
| 2022-08-31T11:59:07Z
|
COLLABORATOR
| null | null | null |
Fix #4814
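For context, a minimal usage sketch of the feature this adds (`my_folder` is a placeholder; the layout and the `file_name` column follow the loader's metadata convention):
```python
from datasets import load_dataset

# expected layout (sketch):
# my_folder/
#   metadata.csv   <- must contain a "file_name" column referencing the images
#   img0.jpg
#   img1.jpg
ds = load_dataset("imagefolder", data_dir="my_folder")
print(ds["train"][0])  # the image plus the metadata columns from the CSV
```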
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4837/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4837/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4837.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4837",
"merged_at": "2022-08-31T11:59:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4837.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4837"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5217
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5217/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5217/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5217/events
|
https://github.com/huggingface/datasets/pull/5217
| 1,441,252,740
|
PR_kwDODunzps5CetXs
| 5,217
|
Reword E2E training and inference tips in the vision guides
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-09T02:40:01Z
| 2022-11-10T01:38:09Z
| 2022-11-10T01:36:09Z
|
MEMBER
| null | null | null |
Reference: https://github.com/huggingface/datasets/pull/5188#discussion_r1012148730
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4",
"events_url": "https://api.github.com/users/sayakpaul/events{/privacy}",
"followers_url": "https://api.github.com/users/sayakpaul/followers",
"following_url": "https://api.github.com/users/sayakpaul/following{/other_user}",
"gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sayakpaul",
"id": 22957388,
"login": "sayakpaul",
"node_id": "MDQ6VXNlcjIyOTU3Mzg4",
"organizations_url": "https://api.github.com/users/sayakpaul/orgs",
"received_events_url": "https://api.github.com/users/sayakpaul/received_events",
"repos_url": "https://api.github.com/users/sayakpaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sayakpaul",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5217/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5217/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5217.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5217",
"merged_at": "2022-11-10T01:36:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5217.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5217"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5172
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5172/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5172/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5172/events
|
https://github.com/huggingface/datasets/issues/5172
| 1,425,523,114
|
I_kwDODunzps5U98Gq
| 5,172
|
Inconsistency behavior between handling local file protocol and other FS protocols
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4",
"events_url": "https://api.github.com/users/leoleoasd/events{/privacy}",
"followers_url": "https://api.github.com/users/leoleoasd/followers",
"following_url": "https://api.github.com/users/leoleoasd/following{/other_user}",
"gists_url": "https://api.github.com/users/leoleoasd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leoleoasd",
"id": 37735580,
"login": "leoleoasd",
"node_id": "MDQ6VXNlcjM3NzM1NTgw",
"organizations_url": "https://api.github.com/users/leoleoasd/orgs",
"received_events_url": "https://api.github.com/users/leoleoasd/received_events",
"repos_url": "https://api.github.com/users/leoleoasd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leoleoasd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoleoasd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leoleoasd",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2022-10-27T12:03:20Z
| 2024-05-08T19:31:13Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
These lines are used during `load_from_disk`:
```python
if is_remote_filesystem(fs):
    dest_dataset_dict_path = extract_path_from_uri(dataset_dict_path)
else:
    fs = fsspec.filesystem("file")
    dest_dataset_dict_path = dataset_dict_path
```
If a local FS is given, then it will the URL as the path name. If a remote Fs is given, then it will use the path of the URL. This is an inconsistent behavior when handling a file: when using remote FS, you must write a URL, but for local FS, even if you passed LocalFileSystem as `fs` you still can't use a `file://` URL. It will be recognized as a directory named `file:`.
### Steps to reproduce the bug
```python
import fsspec.core
from datasets import load_from_disk

url = "hdfs:///somewhere/MNIST"
# url = "file:///somewhere/MNIST"
fs, path = fsspec.core.url_to_fs(url)
fs.ls(path)  # this will always work
load_from_disk(path, fs)  # only works for local FS
load_from_disk(url, fs)  # only works for remote FS
```
### Expected behavior
One of `url` or `path` should always work.
I think extracting the path from the given URL with `fsspec.core.url_to_fs`, instead of using `is_remote_filesystem` and `extract_path_from_uri`, would fix this, since:
```python
fsspec.core.url_to_fs("/somewhere/MNIST")         # -> LocalFileSystem, '/somewhere/MNIST'
fsspec.core.url_to_fs("file:///somewhere/MNIST")  # -> LocalFileSystem, '/somewhere/MNIST'
fsspec.core.url_to_fs("hdfs:///somewhere/MNIST")  # -> HDFS, '/somewhere/MNIST'
```
and
```python
fsspec.core.url_to_fs("file:///somewhere/MNIST") == fsspec.core.url_to_fs("/somewhere/MNIST")
```
In theory, this wouldn't break anything, since passing a local path or a remote URI still works; it only affects local URIs (making them work too).
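A minimal sketch of the proposed unification (the helper name is illustrative, not the actual `datasets` internals):
```python
from fsspec.core import url_to_fs

def resolve_fs_and_path(dataset_dict_path: str):
    # url_to_fs maps "/x", "file:///x" and "hdfs:///x" all to (filesystem, "/x"),
    # so the separate local/remote branches collapse into a single code path.
    fs, path = url_to_fs(dataset_dict_path)
    return fs, path

fs, path = resolve_fs_and_path("file:///somewhere/MNIST")  # same result as "/somewhere/MNIST"
```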
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.4.205.1**HIDDEN**
- Python version: 3.7.10
- PyArrow version: 8.0.0
- Pandas version: 1.2.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5172/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5172/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5666
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5666/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5666/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5666/events
|
https://github.com/huggingface/datasets/issues/5666
| 1,637,675,062
|
I_kwDODunzps5hnPA2
| 5,666
|
Support tensorflow 2.12.0 in CI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2023-03-23T14:37:51Z
| 2023-03-23T16:14:54Z
| 2023-03-23T16:14:54Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Once we find out the root cause of:
- #5663
we should revert the temporary pin on tensorflow introduced by:
- #5664
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5666/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5666/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7305
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7305/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7305/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7305/events
|
https://github.com/huggingface/datasets/issues/7305
| 2,715,907,267
|
I_kwDODunzps6h4XTD
| 7,305
|
Build Documentation Test Fails Due to "Bad Credentials" Error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31152346?v=4",
"events_url": "https://api.github.com/users/ruidazeng/events{/privacy}",
"followers_url": "https://api.github.com/users/ruidazeng/followers",
"following_url": "https://api.github.com/users/ruidazeng/following{/other_user}",
"gists_url": "https://api.github.com/users/ruidazeng/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ruidazeng",
"id": 31152346,
"login": "ruidazeng",
"node_id": "MDQ6VXNlcjMxMTUyMzQ2",
"organizations_url": "https://api.github.com/users/ruidazeng/orgs",
"received_events_url": "https://api.github.com/users/ruidazeng/received_events",
"repos_url": "https://api.github.com/users/ruidazeng/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ruidazeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruidazeng/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ruidazeng",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"how were you able to fix this please?",
"> how were you able to fix this please?\r\n\r\nI was not able to fix this."
] | 2024-12-03T20:22:54Z
| 2025-01-08T22:38:14Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
The `Build documentation / build / build_main_documentation (push)` job is consistently failing during the "Syncing repository" step. The error occurs when attempting to determine the default branch name, resulting in "Bad credentials" errors.
### Steps to reproduce the bug
1. Trigger the `build_main_documentation` job.
2. Observe the logs during the "Syncing repository" step.
### Expected behavior
The workflow should be able to retrieve the default branch name without encountering credential issues.
### Environment info
```plaintext
Syncing repository: huggingface/notebooks
Getting Git version info
Temporarily overriding HOME='/home/runner/work/_temp/00e62748-9940-4a4f-bbbc-eb2cda6d7ed6' before making global git config changes
Adding repository directory to the temporary git global config as a safe directory
/usr/bin/git config --global --add safe.directory /home/runner/work/datasets/datasets/notebooks
Initializing the repository
Disabling automatic garbage collection
Setting up auth
Determining the default branch
Retrieving the default branch name
Bad credentials - https://docs.github.com/rest
Waiting 20 seconds before trying again
Retrieving the default branch name
Bad credentials - https://docs.github.com/rest
Waiting 19 seconds before trying again
Retrieving the default branch name
Error: Bad credentials - https://docs.github.com/rest
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7305/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7305/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6894
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6894/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6894/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6894/events
|
https://github.com/huggingface/datasets/issues/6894
| 2,292,840,226
|
I_kwDODunzps6Iqfci
| 6,894
|
Better document defaults of to_json
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2024-05-13T13:30:54Z
| 2024-05-16T14:31:27Z
| 2024-05-16T14:31:27Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891
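A short sketch of the behavior to document (assuming, as the related issue suggests, that extra kwargs are forwarded pandas-style to `DataFrame.to_json`):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

# Default: JSON Lines, i.e. one JSON object per line (not a single JSON array).
ds.to_json("data.jsonl")

# A single JSON array instead, via the forwarded pandas-style kwargs.
ds.to_json("data.json", lines=False, orient="records")
```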
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6894/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6894/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4767
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4767/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4767/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4767/events
|
https://github.com/huggingface/datasets/pull/4767
| 1,321,843,538
|
PR_kwDODunzps48TCpI
| 4,767
|
Add 2.4.0 version added to docstrings
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-29T07:01:56Z
| 2022-07-29T11:16:49Z
| 2022-07-29T11:03:58Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4767/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4767/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4767",
"merged_at": "2022-07-29T11:03:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4767"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7163
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7163/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7163/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7163/events
|
https://github.com/huggingface/datasets/issues/7163
| 2,542,361,234
|
I_kwDODunzps6XiVqS
| 7,163
|
Set explicit seed in iterable dataset ddp shuffling example
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"thanks for reporting !"
] | 2024-09-23T11:34:06Z
| 2024-09-24T14:40:15Z
| 2024-09-24T14:40:15Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
In the examples section of the iterable dataset docs https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.IterableDataset
the ddp example shuffles without seeding
```python
from datasets.distributed import split_dataset_by_node
ids = ds.to_iterable_dataset(num_shards=512)
ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer when you start iterating
ids = split_dataset_by_node(ds, world_size=8, rank=0) # will keep only 512 / 8 = 64 shards from the shuffled lists of shards when you start iterating
dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from this node's list of shards to each worker when you start iterating
for example in ids:
pass
```
This code would - I think - raise an error due to the lack of an explicit seed:
https://github.com/huggingface/datasets/blob/2eb4edb97e1a6af2ea62738ec58afbd3812fc66e/src/datasets/iterable_dataset.py#L1707-L1711
### Steps to reproduce the bug
Run example code
### Expected behavior
Add explicit seeding to example code
### Environment info
latest datasets
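A corrected sketch of the example with an explicit seed (note it also passes `ids`, not `ds`, to `split_dataset_by_node`, which looks like a typo in the quoted snippet):
```python
import torch
from datasets.distributed import split_dataset_by_node

# ds: any datasets.Dataset, as in the quoted example
ids = ds.to_iterable_dataset(num_shards=512)
ids = ids.shuffle(seed=42, buffer_size=10_000)  # explicit seed: every rank shuffles the shard order identically
ids = split_dataset_by_node(ids, world_size=8, rank=0)
dataloader = torch.utils.data.DataLoader(ids, num_workers=4)
for example in dataloader:
    pass
```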
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7163/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7163/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6817
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6817/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6817/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6817/events
|
https://github.com/huggingface/datasets/pull/6817
| 2,246,578,480
|
PR_kwDODunzps5s1RAN
| 6,817
|
Support indexable objects in `Dataset.__getitem__`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6817). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005464 / 0.011353 (-0.005889) | 0.004174 / 0.011008 (-0.006834) | 0.064252 / 0.038508 (0.025744) | 0.033305 / 0.023109 (0.010196) | 0.245831 / 0.275898 (-0.030067) | 0.275575 / 0.323480 (-0.047905) | 0.003359 / 0.007986 (-0.004626) | 0.004196 / 0.004328 (-0.000132) | 0.049961 / 0.004250 (0.045710) | 0.048940 / 0.037052 (0.011888) | 0.261037 / 0.258489 (0.002548) | 0.295329 / 0.293841 (0.001488) | 0.028570 / 0.128546 (-0.099976) | 0.010747 / 0.075646 (-0.064900) | 0.216021 / 0.419271 (-0.203251) | 0.036885 / 0.043533 (-0.006648) | 0.251169 / 0.255139 (-0.003970) | 0.286233 / 0.283200 (0.003034) | 0.021253 / 0.141683 (-0.120429) | 1.150669 / 1.452155 (-0.301485) | 1.187577 / 1.492716 (-0.305140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094443 / 0.018006 (0.076436) | 0.304410 / 0.000490 (0.303920) | 0.000213 / 0.000200 (0.000013) | 0.000041 / 0.000054 (-0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019568 / 0.037411 (-0.017844) | 0.065734 / 0.014526 (0.051208) | 0.076042 / 0.176557 (-0.100515) | 0.123624 / 0.737135 (-0.613511) | 0.078047 / 0.296338 (-0.218291) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.295725 / 0.215209 (0.080515) | 2.752501 / 2.077655 (0.674846) | 1.461856 / 1.504120 (-0.042264) | 1.353692 / 1.541195 (-0.187503) | 1.391777 / 
1.468490 (-0.076713) | 0.563423 / 4.584777 (-4.021354) | 2.384620 / 3.745712 (-1.361092) | 2.876092 / 5.269862 (-2.393769) | 1.803913 / 4.565676 (-2.761763) | 0.062678 / 0.424275 (-0.361597) | 0.005428 / 0.007607 (-0.002179) | 0.333797 / 0.226044 (0.107753) | 3.304458 / 2.268929 (1.035530) | 1.801768 / 55.444624 (-53.642856) | 1.569406 / 6.876477 (-5.307070) | 1.614535 / 2.142072 (-0.527538) | 0.650178 / 4.805227 (-4.155049) | 0.119693 / 6.500664 (-6.380971) | 0.042832 / 0.075469 (-0.032637) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982035 / 1.841788 (-0.859753) | 12.390006 / 8.074308 (4.315698) | 10.127018 / 10.191392 (-0.064374) | 0.131963 / 0.680424 (-0.548461) | 0.013926 / 0.534201 (-0.520275) | 0.289587 / 0.579283 (-0.289696) | 0.270302 / 0.434364 (-0.164062) | 0.327231 / 0.540337 (-0.213107) | 0.422522 / 1.386936 (-0.964414) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005666 / 0.011353 (-0.005687) | 0.003914 / 0.011008 (-0.007094) | 0.050315 / 0.038508 (0.011807) | 0.032367 / 0.023109 (0.009257) | 0.271732 / 0.275898 (-0.004166) | 0.297248 / 0.323480 (-0.026231) | 0.005101 / 0.007986 (-0.002884) | 0.002882 / 0.004328 (-0.001447) | 0.049651 / 0.004250 (0.045401) | 0.043773 / 0.037052 (0.006721) | 0.288011 / 0.258489 (0.029522) | 0.311863 / 0.293841 (0.018023) | 0.029147 / 0.128546 (-0.099399) | 0.010722 / 0.075646 (-0.064925) | 0.058832 / 0.419271 (-0.360440) | 0.033092 / 0.043533 (-0.010441) | 0.274686 / 0.255139 (0.019547) | 0.294174 / 0.283200 (0.010975) | 0.019196 / 0.141683 (-0.122486) | 1.126615 / 1.452155 (-0.325540) | 1.193107 / 1.492716 (-0.299609) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097547 / 0.018006 (0.079541) | 0.316018 / 0.000490 (0.315529) | 0.000330 / 0.000200 (0.000130) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022336 / 0.037411 (-0.015076) | 0.077092 / 0.014526 (0.062566) | 0.088873 / 0.176557 (-0.087684) | 0.128517 / 0.737135 (-0.608619) | 0.094061 / 0.296338 (-0.202278) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300100 / 0.215209 (0.084891) | 2.893114 / 2.077655 (0.815460) | 1.570541 / 1.504120 (0.066421) | 1.453538 / 1.541195 (-0.087657) | 1.505325 / 1.468490 (0.036835) | 0.567955 / 4.584777 (-4.016822) | 2.458547 / 3.745712 (-1.287166) | 2.969181 / 5.269862 (-2.300680) | 1.850082 / 4.565676 (-2.715594) | 0.063811 / 0.424275 (-0.360464) | 0.005378 / 0.007607 (-0.002229) | 0.348219 / 0.226044 (0.122175) | 3.443986 / 2.268929 (1.175057) | 1.943005 / 55.444624 (-53.501620) | 1.686541 / 6.876477 (-5.189935) | 1.715552 / 2.142072 (-0.426520) | 0.641361 / 4.805227 (-4.163866) | 0.116652 / 6.500664 (-6.384012) | 0.042216 / 0.075469 (-0.033253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.020102 / 1.841788 (-0.821686) | 12.966127 / 8.074308 (4.891819) | 10.748397 / 10.191392 (0.557005) | 0.132601 / 0.680424 (-0.547823) | 0.016643 / 0.534201 (-0.517558) | 0.289422 / 0.579283 (-0.289861) | 0.275524 / 0.434364 (-0.158840) | 0.332835 / 0.540337 (-0.207503) | 0.427867 / 1.386936 (-0.959069) |\n\n</details>\n</details>\n\n\n"
] | 2024-04-16T17:41:27Z
| 2024-04-16T18:27:44Z
| 2024-04-16T18:17:29Z
|
COLLABORATOR
| null | null | null |
As discussed in https://github.com/huggingface/datasets/pull/6816, this is needed to support objects that implement `__index__` such as `np.int64` in `Dataset.__getitem__`.
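A usage sketch of what this PR enables (illustrative; `np.int64` is one example of an object implementing `__index__`):
```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"a": [10, 20, 30]})
i = np.int64(1)   # implements __index__
print(ds[i])      # behaves like ds[1] -> {'a': 20}
```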
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6817/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6817/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6817.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6817",
"merged_at": "2024-04-16T18:17:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6817.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6817"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6036
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6036/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6036/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6036/events
|
https://github.com/huggingface/datasets/pull/6036
| 1,805,138,898
|
PR_kwDODunzps5ViKc4
| 6,036
|
Deprecate search API
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005746 / 0.011353 (-0.005607) | 0.003461 / 0.011008 (-0.007548) | 0.078672 / 0.038508 (0.040164) | 0.056800 / 0.023109 (0.033691) | 0.312853 / 0.275898 (0.036955) | 0.346715 / 0.323480 (0.023235) | 0.004516 / 0.007986 (-0.003469) | 0.002872 / 0.004328 (-0.001457) | 0.061264 / 0.004250 (0.057013) | 0.046606 / 0.037052 (0.009553) | 0.320080 / 0.258489 (0.061591) | 0.350390 / 0.293841 (0.056550) | 0.026445 / 0.128546 (-0.102101) | 0.007710 / 0.075646 (-0.067936) | 0.259519 / 0.419271 (-0.159752) | 0.043935 / 0.043533 (0.000402) | 0.320015 / 0.255139 (0.064876) | 0.339799 / 0.283200 (0.056599) | 0.018638 / 0.141683 (-0.123044) | 1.463393 / 1.452155 (0.011239) | 1.496977 / 1.492716 (0.004261) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185175 / 0.018006 (0.167168) | 0.420734 / 0.000490 (0.420245) | 0.002569 / 0.000200 (0.002369) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022335 / 0.037411 (-0.015077) | 0.071686 / 0.014526 (0.057161) | 0.079906 / 0.176557 (-0.096650) | 0.140386 / 0.737135 (-0.596749) | 0.079712 / 0.296338 (-0.216627) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392643 / 0.215209 (0.177434) | 3.917934 / 2.077655 (1.840279) | 1.906808 / 1.504120 (0.402688) | 1.729564 / 1.541195 (0.188369) | 1.751533 / 1.468490 
(0.283043) | 0.496810 / 4.584777 (-4.087967) | 3.047405 / 3.745712 (-0.698307) | 4.361766 / 5.269862 (-0.908095) | 2.660845 / 4.565676 (-1.904832) | 0.056951 / 0.424275 (-0.367324) | 0.006277 / 0.007607 (-0.001330) | 0.466357 / 0.226044 (0.240312) | 4.660457 / 2.268929 (2.391529) | 2.328590 / 55.444624 (-53.116034) | 1.986140 / 6.876477 (-4.890337) | 2.096182 / 2.142072 (-0.045891) | 0.581685 / 4.805227 (-4.223542) | 0.123643 / 6.500664 (-6.377021) | 0.060286 / 0.075469 (-0.015183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237024 / 1.841788 (-0.604763) | 17.778533 / 8.074308 (9.704225) | 13.202205 / 10.191392 (3.010813) | 0.141301 / 0.680424 (-0.539123) | 0.016453 / 0.534201 (-0.517748) | 0.329173 / 0.579283 (-0.250110) | 0.349945 / 0.434364 (-0.084419) | 0.375319 / 0.540337 (-0.165018) | 0.530394 / 1.386936 (-0.856542) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005863 / 0.011353 (-0.005489) | 0.003578 / 0.011008 (-0.007430) | 0.062719 / 0.038508 (0.024211) | 0.056192 / 0.023109 (0.033082) | 0.422812 / 0.275898 (0.146914) | 0.454316 / 0.323480 (0.130836) | 0.004446 / 0.007986 (-0.003540) | 0.002808 / 0.004328 (-0.001521) | 0.062819 / 0.004250 (0.058569) | 0.046243 / 0.037052 (0.009190) | 0.445858 / 0.258489 (0.187369) | 0.463750 / 0.293841 (0.169909) | 0.027504 / 0.128546 (-0.101042) | 0.007897 / 0.075646 (-0.067749) | 0.068248 / 0.419271 (-0.351024) | 0.041921 / 0.043533 (-0.001612) | 0.413314 / 0.255139 (0.158175) | 0.441619 / 0.283200 (0.158419) | 0.019246 / 0.141683 (-0.122437) | 1.457069 / 1.452155 (0.004914) | 1.524168 / 1.492716 (0.031452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237785 / 0.018006 (0.219779) | 0.418455 / 0.000490 (0.417965) | 0.002301 / 0.000200 (0.002101) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025630 / 0.037411 (-0.011781) | 0.076673 / 0.014526 (0.062147) | 0.084877 / 0.176557 (-0.091680) | 0.137528 / 0.737135 (-0.599607) | 0.085261 / 0.296338 (-0.211077) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419040 / 0.215209 (0.203831) | 4.183022 / 2.077655 (2.105368) | 2.157852 / 1.504120 (0.653732) | 1.966177 / 1.541195 (0.424982) | 2.019612 / 1.468490 (0.551122) | 0.497415 / 4.584777 (-4.087362) | 3.102873 / 3.745712 (-0.642839) | 4.526336 / 5.269862 (-0.743525) | 2.991503 / 4.565676 (-1.574174) | 0.057235 / 0.424275 (-0.367040) | 0.006735 / 0.007607 (-0.000872) | 0.498255 / 0.226044 (0.272211) | 4.957364 / 2.268929 (2.688435) | 2.632643 / 55.444624 (-52.811981) | 2.249788 / 6.876477 (-4.626688) | 2.289134 / 2.142072 (0.147062) | 0.583581 / 4.805227 (-4.221646) | 0.126046 / 6.500664 (-6.374618) | 0.062966 / 0.075469 (-0.012504) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295215 / 1.841788 (-0.546573) | 18.554020 / 8.074308 (10.479711) | 13.683273 / 10.191392 (3.491881) | 0.132266 / 0.680424 (-0.548158) | 0.016376 / 0.534201 (-0.517825) | 0.334495 / 0.579283 (-0.244788) | 0.347106 / 0.434364 (-0.087258) | 0.387531 / 0.540337 (-0.152806) | 0.525745 / 1.386936 (-0.861191) |\n\n</details>\n</details>\n\n\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6036). All of your documentation changes will be reflected on that endpoint.",
"I don't think `transformers` should have any dataset indexing code. So before deprecating I'd be in favor of finding a suitable replacement. Not sure about the stats of the RAG model that uses `datasets` indexing though",
"The RAG downloads stats are decent (over 20k downloads last month).\r\n\r\nI think it's suboptimal to maintain an API that only a single model uses. One option is to put this code into a separate lib. However, `langchain` and `docarray` already provide a unified interface to vector stores, so I don't see this as an impactful project. Considering how specific this model is, I think we should go with the simplest solution and combine an index with a dataset in Transformers (this wouldn't require too much code).",
"What about migrating to the [datasets-server](https://github.com/huggingface/datasets-server) search feature instead? Would make more sense from a product perspective ",
"I don't think it's a good idea:\r\n- using datasets-server would require to upload the data and to not control the indexing, whereas the current feature is about using a local index that you control\r\n- faiss indexes are vector indexes that are not supported by datasets-server, and they are also very customised. For instance RAG uses DPR embeddings and cosine similarity\r\n- FTS is only done for the first 5GB of data for now in datasets-server\r\n\r\nI think a better option would be to integrate with open source search tools such as docarray.\r\nAnd if we want to make the datasets-server search available in python we can build an integration in docarray and/or in huggingface_hub.",
"`llama_index` is another popular tool in this space.\r\n\r\n@lhoestq \r\n> I think a better option would be to integrate with open source search tools such as docarray.\r\nAnd if we want to make the datasets-server search available in python we can build an integration in docarray and/or in huggingface_hub.\r\n\r\nI don't think these integrations would be popular unless we integrate them with the Hub \"UI-wise\" (e.g., through a widget), so they can wait IMO. Also, FAISS supports `fsspec` already with the callback reader/writer, so this doesn't require a specific integration. ",
"After discussing it a bit with @lhoestq, do we need to deprecate the search API? While I understand it's imperfect, it looks like this will result in significant work to update it everywhere, so I'd favor keeping it until there's an obviously better alternative; this way we can focus on different things in the meantime.",
"FAISS/ES are simple to use (probably the main reason why they are so popular), so creating \"better alternatives\" is not easy - they usually add more complexity (as is the case here, `langchain`, etc.)\r\n\r\nSo, instead of waiting for better alternatives, IMO it makes more sense to wait for the RAG model to be deprecated in Transformers (less than 1,000 cumulated downloads over all checkpoints in the past 30 days) before deprecating this API here.\r\n\r\nIn the meantime, we should make it clear that the vector search API is in maintenance mode (no new features, etc.).\r\n\r\nHow does that sound?"
] | 2023-07-14T16:22:09Z
| 2023-09-07T16:44:32Z
| null |
COLLABORATOR
| null | null | null |
The Search API only supports Faiss and Elasticsearch as vector stores, is somewhat difficult to maintain (e.g., it still doesn't support Elasticsearch 8.0, difficult testing, ...), does not have the best design (it adds a bunch of methods to the `Dataset` class that are only useful after creating an index), its usage doesn't seem to be significant, and it is not integrated with the Hub. Since we have no plans/bandwidth to improve it and better alternatives such as `langchain` and `docarray` exist, I think it should be deprecated (and eventually removed).
If we decide to deprecate/remove it, the following usage instances need to be addressed:
* [Course](https://github.com/huggingface/course/blob/0018bb434204d9750a03592cb0d4e846093218d8/chapters/en/chapter5/6.mdx#L342) and [Blog](https://github.com/huggingface/blog/blob/4897c6f73d4492a0955ade503281711d01840e09/image-search-datasets.md?plain=1#L252) - calling the FAISS API directly (as sketched below) should be OK in these instances as it's pretty simple to use for basic scenarios. Alternatively, we can use `langchain`, but this adds an extra dependency
* [Transformers](https://github.com/huggingface/transformers/blob/50726f9ea7afc6113da617f8f4ca1ab264a5e28a/src/transformers/models/rag/retrieval_rag.py#L183) - we can use the FAISS API directly and store the index as a separate attribute (and instead of building the `wiki_dpr` index each time the dataset is generated, we can generate it once, push it to the Hub repo, and then read it from there)
cc @huggingface/datasets @LysandreJik for your opinion
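A minimal sketch of such direct FAISS usage (illustrative only; assumes `embeddings` is an `(n, d)` float32 array, e.g. DPR embeddings):
```python
import faiss
import numpy as np

embeddings = np.random.rand(1000, 64).astype("float32")  # stand-in for real embeddings

index = faiss.IndexFlatIP(embeddings.shape[1])  # exact inner-product index
index.add(embeddings)

scores, ids = index.search(embeddings[:1], 5)  # top-5 neighbors of the first vector
```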
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6036/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6036/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6036.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6036",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6036.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6036"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5060
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5060/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5060/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5060/events
|
https://github.com/huggingface/datasets/issues/5060
| 1,395,382,940
|
I_kwDODunzps5TK9qc
| 5,060
|
Unable to Use Custom Dataset Locally
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zanussbaum",
"id": 33707069,
"login": "zanussbaum",
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zanussbaum",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! I opened a PR in your repo to fix this :)\r\nhttps://huggingface.co/datasets/zpn/pubchem_selfies/discussions/7\r\n\r\nbasically you need to use `open` for streaming to work properly",
"Thank you so much for this! Naive question, is this a feature of `open` or have you all overloaded it to be able to read from a URL? Any links to code/documentation would be greatly appreciated, I'd love to learn more",
"`datasets` extends `open` in dataset scripts to work with URLs. The builtin `open` from python only works with local files.\r\n\r\nYou can find the extension here: https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/download/streaming_download_manager.py#L435-L451\r\n\r\nI think we can create a docs section dedicated to streaming to explain how this works",
"Closing this one - feel free to reopen if you have more questions"
] | 2022-10-03T21:55:16Z
| 2022-10-06T14:29:18Z
| 2022-10-06T14:29:17Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Describe the bug
I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says
```
If the data files live in the same folder or repository of the dataset script,
you can just pass the relative paths to the files instead of URLs.
```
Accordingly, I put the [relative path](https://huggingface.co/datasets/zpn/pubchem_selfies/blob/main/pubchem_selfies.py#L76) to the data to be used. I was able to test the dataset and generate the metadata locally with `datasets-cli test path/to/<your-dataset-loading-script> --save_infos --all_configs`
However, if I try to load the data using `load_dataset`, I get the following error
```
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("zpn/pubchem_selfies", streaming=True)
>>> t = dataset["train"]
>>> for item in t:
...... print(item)
...... break
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 723, in __iter__
for key, example in self._iter():
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 713, in _iter
yield from ex_iterable
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 113, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/zachnussbaum/.cache/huggingface/modules/datasets_modules/datasets/zpn--pubchem_selfies/d2571f35996765aea70fd3f3f8e3882d59c401fb738615c79282e2eb1d9f7a25/pubchem_selfies.py", line 475, in _generate_examples
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
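For reference, the fix from the linked discussion boils down to letting the `open` patched by `datasets` handle the (possibly remote) file, and layering gzip on top of the returned file object. A minimal sketch, assuming a `_generate_examples` shaped like the one in the traceback (the `"selfies"` key is illustrative):
```python
import gzip

def _generate_examples(self, filepath):
    # In streaming mode, `datasets` patches `open` so it also works with URLs.
    with open(filepath, "rb") as f:
        with gzip.open(f, mode="rt") as g:
            for id_, line in enumerate(g):
                yield id_, {"selfies": line.strip()}
```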
## Expected results
The dataset streams correctly and yields examples.
## Actual results
A `FileNotFoundError` is raised for the resolved URL, as shown in the traceback above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5060/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5060/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6501
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6501/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6501/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6501/events
|
https://github.com/huggingface/datasets/issues/6501
| 2,043,377,240
|
I_kwDODunzps55y3ZY
| 6,501
|
OverflowError: value too large to convert to int32_t
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47747764?v=4",
"events_url": "https://api.github.com/users/zhangfan-algo/events{/privacy}",
"followers_url": "https://api.github.com/users/zhangfan-algo/followers",
"following_url": "https://api.github.com/users/zhangfan-algo/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangfan-algo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhangfan-algo",
"id": 47747764,
"login": "zhangfan-algo",
"node_id": "MDQ6VXNlcjQ3NzQ3NzY0",
"organizations_url": "https://api.github.com/users/zhangfan-algo/orgs",
"received_events_url": "https://api.github.com/users/zhangfan-algo/received_events",
"repos_url": "https://api.github.com/users/zhangfan-algo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhangfan-algo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangfan-algo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhangfan-algo",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2023-12-15T10:10:21Z
| 2023-12-15T10:10:21Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug

### Steps to reproduce the bug
Just loading the dataset.
### Expected behavior
The dataset should load without the error shown above. How can I fix it?
### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl
pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl
done
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6501/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6501/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6043
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6043/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6043/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6043/events
|
https://github.com/huggingface/datasets/issues/6043
| 1,807,771,750
|
I_kwDODunzps5rwGhm
| 6,043
|
Compression kwargs have no effect when saving datasets as csv
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hello @exs-avianello, I have reproduced the bug successfully and have understood the problem. But I am confused regarding this part of the statement, \"`pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`\".\r\n\r\nCan you please elaborate on it?\r\n\r\nThanks!",
"Hi @aryanxk02 ! Sure, what I actually meant is that when passing a path-like `path_or_buf` here\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/arrow_dataset.py#L4708-L4714 \r\n\r\nit gets converted to a file object behind the scenes here\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/io/csv.py#L92-L94\r\n\r\nand the eventual pandas `.to_csv()` calls that write to it always get `path_or_buf=None`, making pandas ignore the `compression` kwarg in the `to_csv_kwargs`\r\n\r\nhttps://github.com/huggingface/datasets/blob/14f6edd9222e577dccb962ed5338b79b73502fa5/src/datasets/io/csv.py#L107-L109",
"@exs-avianello When `path_or_buf` is set to None, the `to_csv()` method will return the CSV data as a string instead of saving it to a file. Hence the compression doesn't take place. I think setting `path_or_buf=self.path_or_buf` should work. What you say?"
] | 2023-07-17T13:19:21Z
| 2023-07-22T17:34:18Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When attempting to save a dataset as a compressed csv file, the compression kwargs provided to `.to_csv()` that get piped to pandas' `pandas.DataFrame.to_csv` do not have any effect - resulting in the dataset not getting compressed.
A warning is raised if explicitly providing a `compression` kwarg, but no warnings are raised if relying on the defaults. This can lead to datasets silently not getting compressed for users expecting the behaviour to match pandas' `.to_csv()`, where the compression format is automatically inferred from the destination path suffix.
### Steps to reproduce the bug
```python
# dataset is not compressed (but at least a warning is emitted)
import os

import datasets
dataset = datasets.load_dataset("rotten_tomatoes", split="train")
dataset.to_csv("uncompressed.csv")
print(os.path.getsize("uncompressed.csv")) # 1008607
dataset.to_csv("compressed.csv.gz", compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1})
print(os.path.getsize("compressed.csv.gz")) # 1008607
```
```shell
>>>
RuntimeWarning: compression has no effect when passing a non-binary object as input.
csv_str = batch.to_pandas().to_csv(
```
```python
# dataset is not compressed and no warnings are emitted
dataset.to_csv("compressed.csv.gz")
print(os.path.getsize("compressed.csv.gz")) # 1008607
# compare with
dataset.to_pandas().to_csv("pandas.csv.gz")
print(os.path.getsize("pandas.csv.gz")) # 418561
```
---
I think this is because, behind the scenes, `pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`; users who provide a path-like to `datasets.Dataset.to_csv` are unlikely to expect or know that, leading to a mismatch in their understanding of the expected behaviour of the `compression` kwarg.
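Until this is addressed, one possible workaround is to perform the compression outside of `.to_csv()` by handing it an already-gzipped binary buffer. A hedged sketch (assuming `to_csv` accepts a binary file object, which its `path_or_buf` parameter suggests):
```python
import gzip

import datasets

dataset = datasets.load_dataset("rotten_tomatoes", split="train")

# gzip handles the compression; `to_csv` just writes csv bytes into the buffer.
with gzip.open("compressed.csv.gz", "wb") as f:
    dataset.to_csv(f)
```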
### Expected behavior
The dataset to be saved as a compressed csv file when providing a `compression` kwarg, or when relying on the default `compression='infer'`
### Environment info
`datasets == 2.13.1`
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6043/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6043/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5359
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5359/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5359/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5359/events
|
https://github.com/huggingface/datasets/pull/5359
| 1,495,297,857
|
PR_kwDODunzps5FYHWm
| 5,359
|
Raise error if ClassLabel names is not python list
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1475568?v=4",
"events_url": "https://api.github.com/users/freddyheppell/events{/privacy}",
"followers_url": "https://api.github.com/users/freddyheppell/followers",
"following_url": "https://api.github.com/users/freddyheppell/following{/other_user}",
"gists_url": "https://api.github.com/users/freddyheppell/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/freddyheppell",
"id": 1475568,
"login": "freddyheppell",
"node_id": "MDQ6VXNlcjE0NzU1Njg=",
"organizations_url": "https://api.github.com/users/freddyheppell/orgs",
"received_events_url": "https://api.github.com/users/freddyheppell/received_events",
"repos_url": "https://api.github.com/users/freddyheppell/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/freddyheppell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freddyheppell/subscriptions",
"type": "User",
"url": "https://api.github.com/users/freddyheppell",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your proposed fix, @freddyheppell.\r\n\r\nCurrently the CI fails because in a test we pass a `tuple` instead of a `list`. I would say we should accept `tuple` as a valid input type as well...\r\n\r\nWhat about checking for `Sequence` instead?",
"Fixed that @albertvillanova, can you approve CI again please? Had some issues related to Pytorch .so files when running tests on my M1 mac, so wasn't able to test locally first. Have got them working on my desktop now though."
] | 2022-12-13T23:04:06Z
| 2022-12-22T16:35:49Z
| 2022-12-22T16:32:49Z
|
CONTRIBUTOR
| null | null | null |
Checks the type of `names` provided to `ClassLabel` to avoid easy-to-make and hard-to-debug errors (closes #5332 - see that issue for discussion)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5359/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5359/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5359.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5359",
"merged_at": "2022-12-22T16:32:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5359.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5359"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6653
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6653/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6653/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6653/events
|
https://github.com/huggingface/datasets/pull/6653
| 2,126,831,929
|
PR_kwDODunzps5mdv5S
| 6,653
|
Set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6653). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005076 / 0.011353 (-0.006277) | 0.003424 / 0.011008 (-0.007584) | 0.064195 / 0.038508 (0.025687) | 0.031742 / 0.023109 (0.008633) | 0.244774 / 0.275898 (-0.031124) | 0.268529 / 0.323480 (-0.054951) | 0.003970 / 0.007986 (-0.004016) | 0.002657 / 0.004328 (-0.001672) | 0.048847 / 0.004250 (0.044597) | 0.042196 / 0.037052 (0.005144) | 0.266044 / 0.258489 (0.007555) | 0.282400 / 0.293841 (-0.011441) | 0.027617 / 0.128546 (-0.100929) | 0.010400 / 0.075646 (-0.065246) | 0.205910 / 0.419271 (-0.213362) | 0.035820 / 0.043533 (-0.007713) | 0.247750 / 0.255139 (-0.007389) | 0.267318 / 0.283200 (-0.015882) | 0.017980 / 0.141683 (-0.123703) | 1.107263 / 1.452155 (-0.344892) | 1.173208 / 1.492716 (-0.319509) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095830 / 0.018006 (0.077824) | 0.293891 / 0.000490 (0.293401) | 0.000257 / 0.000200 (0.000057) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018138 / 0.037411 (-0.019273) | 0.061631 / 0.014526 (0.047105) | 0.073038 / 0.176557 (-0.103519) | 0.118317 / 0.737135 (-0.618818) | 0.074190 / 0.296338 (-0.222148) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287026 / 0.215209 (0.071817) | 2.786137 / 2.077655 (0.708482) | 1.472575 / 1.504120 (-0.031544) | 1.346919 / 1.541195 (-0.194276) | 1.388535 / 
1.468490 (-0.079955) | 0.565731 / 4.584777 (-4.019046) | 2.382573 / 3.745712 (-1.363139) | 2.736926 / 5.269862 (-2.532935) | 1.716517 / 4.565676 (-2.849159) | 0.062168 / 0.424275 (-0.362108) | 0.004924 / 0.007607 (-0.002683) | 0.341897 / 0.226044 (0.115853) | 3.355715 / 2.268929 (1.086787) | 1.837014 / 55.444624 (-53.607611) | 1.532063 / 6.876477 (-5.344414) | 1.548193 / 2.142072 (-0.593880) | 0.634995 / 4.805227 (-4.170232) | 0.115622 / 6.500664 (-6.385042) | 0.042252 / 0.075469 (-0.033217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.970713 / 1.841788 (-0.871075) | 11.727576 / 8.074308 (3.653268) | 9.806524 / 10.191392 (-0.384868) | 0.127622 / 0.680424 (-0.552802) | 0.014140 / 0.534201 (-0.520061) | 0.286832 / 0.579283 (-0.292451) | 0.266556 / 0.434364 (-0.167808) | 0.325940 / 0.540337 (-0.214398) | 0.421839 / 1.386936 (-0.965097) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005495 / 0.011353 (-0.005858) | 0.003676 / 0.011008 (-0.007332) | 0.054361 / 0.038508 (0.015853) | 0.030743 / 0.023109 (0.007633) | 0.277200 / 0.275898 (0.001302) | 0.313459 / 0.323480 (-0.010021) | 0.004316 / 0.007986 (-0.003670) | 0.002750 / 0.004328 (-0.001578) | 0.049491 / 0.004250 (0.045241) | 0.044268 / 0.037052 (0.007215) | 0.292529 / 0.258489 (0.034039) | 0.326524 / 0.293841 (0.032683) | 0.048040 / 0.128546 (-0.080507) | 0.010390 / 0.075646 (-0.065256) | 0.058459 / 0.419271 (-0.360813) | 0.033765 / 0.043533 (-0.009768) | 0.276003 / 0.255139 (0.020864) | 0.297299 / 0.283200 (0.014099) | 0.018532 / 0.141683 (-0.123151) | 1.157639 / 1.452155 (-0.294515) | 1.220492 / 1.492716 (-0.272225) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093903 / 0.018006 (0.075897) | 0.303005 / 0.000490 (0.302515) | 0.000224 / 0.000200 (0.000024) | 0.000051 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021580 / 0.037411 (-0.015831) | 0.076176 / 0.014526 (0.061650) | 0.086998 / 0.176557 (-0.089558) | 0.124148 / 0.737135 (-0.612987) | 0.088613 / 0.296338 (-0.207725) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300623 / 0.215209 (0.085414) | 2.911876 / 2.077655 (0.834221) | 1.588398 / 1.504120 (0.084278) | 1.471251 / 1.541195 (-0.069944) | 1.505528 / 1.468490 (0.037038) | 0.570635 / 4.584777 (-4.014142) | 2.485769 / 3.745712 (-1.259943) | 2.785355 / 5.269862 (-2.484507) | 1.752944 / 4.565676 (-2.812732) | 0.063146 / 0.424275 (-0.361129) | 0.004980 / 0.007607 (-0.002627) | 0.354577 / 0.226044 (0.128532) | 3.477181 / 2.268929 (1.208253) | 1.951906 / 55.444624 (-53.492718) | 1.677169 / 6.876477 (-5.199307) | 1.686338 / 2.142072 (-0.455735) | 0.637156 / 4.805227 (-4.168071) | 0.117732 / 6.500664 (-6.382932) | 0.041091 / 0.075469 (-0.034378) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.010071 / 1.841788 (-0.831717) | 12.172242 / 8.074308 (4.097934) | 10.422811 / 10.191392 (0.231419) | 0.137185 / 0.680424 (-0.543239) | 0.014643 / 0.534201 (-0.519558) | 0.287248 / 0.579283 (-0.292035) | 0.272779 / 0.434364 (-0.161585) | 0.331761 / 0.540337 (-0.208576) | 0.417266 / 1.386936 (-0.969670) |\n\n</details>\n</details>\n\n\n"
] | 2024-02-09T10:12:02Z
| 2024-02-09T10:18:20Z
| 2024-02-09T10:12:12Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6653/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6653/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6653.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6653",
"merged_at": "2024-02-09T10:12:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6653.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6653"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5354
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5354/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5354/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5354/events
|
https://github.com/huggingface/datasets/issues/5354
| 1,492,174,125
|
I_kwDODunzps5Y8MUt
| 5,354
|
Consider using "Sequence" instead of "List"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15568078?v=4",
"events_url": "https://api.github.com/users/tranhd95/events{/privacy}",
"followers_url": "https://api.github.com/users/tranhd95/followers",
"following_url": "https://api.github.com/users/tranhd95/following{/other_user}",
"gists_url": "https://api.github.com/users/tranhd95/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tranhd95",
"id": 15568078,
"login": "tranhd95",
"node_id": "MDQ6VXNlcjE1NTY4MDc4",
"organizations_url": "https://api.github.com/users/tranhd95/orgs",
"received_events_url": "https://api.github.com/users/tranhd95/received_events",
"repos_url": "https://api.github.com/users/tranhd95/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tranhd95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tranhd95/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tranhd95",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4",
"events_url": "https://api.github.com/users/avinashsai/events{/privacy}",
"followers_url": "https://api.github.com/users/avinashsai/followers",
"following_url": "https://api.github.com/users/avinashsai/following{/other_user}",
"gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avinashsai",
"id": 22453634,
"login": "avinashsai",
"node_id": "MDQ6VXNlcjIyNDUzNjM0",
"organizations_url": "https://api.github.com/users/avinashsai/orgs",
"received_events_url": "https://api.github.com/users/avinashsai/received_events",
"repos_url": "https://api.github.com/users/avinashsai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avinashsai",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4",
"events_url": "https://api.github.com/users/avinashsai/events{/privacy}",
"followers_url": "https://api.github.com/users/avinashsai/followers",
"following_url": "https://api.github.com/users/avinashsai/following{/other_user}",
"gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avinashsai",
"id": 22453634,
"login": "avinashsai",
"node_id": "MDQ6VXNlcjIyNDUzNjM0",
"organizations_url": "https://api.github.com/users/avinashsai/orgs",
"received_events_url": "https://api.github.com/users/avinashsai/received_events",
"repos_url": "https://api.github.com/users/avinashsai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avinashsai",
"user_view_type": "public"
}
] | null |
[
"Hi! Linking a comment to provide more info on the issue: https://stackoverflow.com/a/39458225. This means we should replace all (most of) the occurrences of `List` with `Sequence` in function signatures.\r\n\r\n@tranhd95 Would you be interested in submitting a PR?",
"Hi all! I tried to reproduce this issue and didn't work for me. Also in your example i noticed that the variables have different names: `list_of_filenames` and `list_of_files`, could this be related to that?\r\n```python\r\n#I found random data in parquet format:\r\n!wget \"https://github.com/Teradata/kylo/raw/master/samples/sample-data/parquet/userdata1.parquet\"\r\n!wget \"https://github.com/Teradata/kylo/raw/master/samples/sample-data/parquet/userdata2.parquet\"\r\n\r\n#Then i try reproduce\r\nlist_of_files = [\"userdata1.parquet\", \"userdata2.parquet\"]\r\nds = Dataset.from_parquet(list_of_files)\r\n```\r\n**My output:**\r\n```python\r\nWARNING:datasets.builder:Using custom data configuration default-e287d097dc54e046\r\nDownloading and preparing dataset parquet/default to /root/.cache/huggingface/datasets/parquet/default-e287d097dc54e046/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...\r\nDownloading data files: 100%\r\n1/1 [00:00<00:00, 40.38it/s]\r\nExtracting data files: 100%\r\n1/1 [00:00<00:00, 23.43it/s]\r\nDataset parquet downloaded and prepared to /root/.cache/huggingface/datasets/parquet/default-e287d097dc54e046/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec. Subsequent calls will reuse this data.\r\n```\r\nP.S. This is my first experience with open source. So do not judge strictly if I do not understand something)",
"@dantema There is indeed a typo in variable names. Nevertheless, I'm sorry if I was not clear but the output is from `mypy` type checker. You can run the code snippet without issues. The problem is with the type checking.",
"However, I found out that the type annotation is actually misleading. The [`from_parquet`](https://github.com/huggingface/datasets/blob/5ef1ab1cc06c2b7a574bf2df454cd9fcb071ccb2/src/datasets/arrow_dataset.py#L1039) method should also accept list of [`PathLike`](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/typing.py#L8) objects which includes [`os.PathLike`](https://docs.python.org/3/library/os.html#os.PathLike). But if I would ran the code snippet below, an exception is thrown.\r\n\r\n**Code**\r\n```py\r\nfrom pathlib import Path\r\n\r\nlist_of_filenames = [Path(\"foo.parquet\"), Path(\"bar.parquet\")]\r\nds = Dataset.from_parquet(list_of_filenames)\r\n```\r\n**Output**\r\n```py\r\n[/usr/local/lib/python3.8/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in from_parquet(path_or_paths, split, features, cache_dir, keep_in_memory, columns, **kwargs)\r\n 1071 from .io.parquet import ParquetDatasetReader\r\n 1072 \r\n-> 1073 return ParquetDatasetReader(\r\n 1074 path_or_paths,\r\n 1075 split=split,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/io/parquet.py](https://localhost:8080/#) in __init__(self, path_or_paths, split, features, cache_dir, keep_in_memory, streaming, **kwargs)\r\n 35 path_or_paths = path_or_paths if isinstance(path_or_paths, dict) else {self.split: path_or_paths}\r\n 36 hash = _PACKAGED_DATASETS_MODULES[\"parquet\"][1]\r\n---> 37 self.builder = Parquet(\r\n 38 cache_dir=cache_dir,\r\n 39 data_files=path_or_paths,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in __init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)\r\n 298 \r\n 299 if data_files is not None and not isinstance(data_files, DataFilesDict):\r\n--> 300 data_files = DataFilesDict.from_local_or_remote(\r\n 301 sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token\r\n 302 )\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/data_files.py](https://localhost:8080/#) in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)\r\n 794 for key, patterns_for_key in patterns.items():\r\n 795 out[key] = (\r\n--> 796 DataFilesList.from_local_or_remote(\r\n 797 patterns_for_key,\r\n 798 base_path=base_path,\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/data_files.py](https://localhost:8080/#) in from_local_or_remote(cls, patterns, base_path, allowed_extensions, use_auth_token)\r\n 762 ) -> \"DataFilesList\":\r\n 763 base_path = base_path if base_path is not None else str(Path().resolve())\r\n--> 764 data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n 765 origin_metadata = _get_origin_metadata_locally_or_by_urls(data_files, use_auth_token=use_auth_token)\r\n 766 return cls(data_files, origin_metadata)\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/data_files.py](https://localhost:8080/#) in resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n 357 data_files = []\r\n 358 for pattern in patterns:\r\n--> 359 if is_remote_url(pattern):\r\n 360 data_files.append(Url(pattern))\r\n 361 else:\r\n\r\n[/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in is_remote_url(url_or_filename)\r\n 62 \r\n 63 def is_remote_url(url_or_filename: str) -> bool:\r\n---> 64 parsed = urlparse(url_or_filename)\r\n 65 return parsed.scheme in (\"http\", \"https\", \"s3\", \"gs\", 
\"hdfs\", \"ftp\")\r\n 66 \r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in urlparse(url, scheme, allow_fragments)\r\n 373 Note that we don't break the components up in smaller bits\r\n 374 (e.g. netloc is a single string) and we don't expand % escapes.\"\"\"\r\n--> 375 url, scheme, _coerce_result = _coerce_args(url, scheme)\r\n 376 splitresult = urlsplit(url, scheme, allow_fragments)\r\n 377 scheme, netloc, url, query, fragment = splitresult\r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in _coerce_args(*args)\r\n 125 if str_input:\r\n 126 return args + (_noop,)\r\n--> 127 return _decode_args(args) + (_encode_result,)\r\n 128 \r\n 129 # Result objects are more helpful than simple tuples\r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in _decode_args(args, encoding, errors)\r\n 109 def _decode_args(args, encoding=_implicit_encoding,\r\n 110 errors=_implicit_errors):\r\n--> 111 return tuple(x.decode(encoding, errors) if x else '' for x in args)\r\n 112 \r\n 113 def _coerce_args(*args):\r\n\r\n[/usr/lib/python3.8/urllib/parse.py](https://localhost:8080/#) in <genexpr>(.0)\r\n 109 def _decode_args(args, encoding=_implicit_encoding,\r\n 110 errors=_implicit_errors):\r\n--> 111 return tuple(x.decode(encoding, errors) if x else '' for x in args)\r\n 112 \r\n 113 def _coerce_args(*args):\r\n\r\nAttributeError: 'PosixPath' object has no attribute 'decode'\r\n```\r\n\r\n@mariosasko Should I create a new issue? ",
"@mariosasko I would like to take this issue up. ",
"@avinashsai Hi, I've assigned you the issue.\r\n\r\n@tranhd95 Yes, feel free to report this in a new issue.",
"@avinashsai Are you still working on this? If not I would like to give it a try.",
"@mariosasko I would like to take this issue up!",
"Hi @tranhd95 @mariosasko ,I hope you all are doing well.\r\n\r\nI am interested in this issue, is this still open and unresolved ?\r\n\r\nThanks and Regards",
"@mariosasko I would like to take this issue up."
] | 2022-12-12T15:39:45Z
| 2024-10-05T14:38:44Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Hi, please consider using the `Sequence` type annotation instead of `List` in function arguments such as in [`Dataset.from_parquet()`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L1088). It leads to type-checking errors; see below.
**How to reproduce**
```py
list_of_filenames = ["foo.parquet", "bar.parquet"]
ds = Dataset.from_parquet(list_of_filenames)
```
**Expected mypy output:**
```
Success: no issues found
```
**Actual mypy output:**
```py
test.py:19: error: Argument 1 to "from_parquet" of "Dataset" has incompatible type "List[str]"; expected "Union[Union[str, bytes, PathLike[Any]], List[Union[str, bytes, PathLike[Any]]]]" [arg-type]
test.py:19: note: "List" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance
test.py:19: note: Consider using "Sequence" instead, which is covariant
```
**Env:** mypy 0.991, Python 3.10.0, datasets 2.7.1
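For illustration, a sketch of the covariant annotation the request is asking for (simplified names; not the library's actual signature):
```python
from os import PathLike
from typing import Sequence, Union

PathLikeT = Union[str, bytes, PathLike]

def from_parquet(path_or_paths: Union[PathLikeT, Sequence[PathLikeT]]) -> None:
    # `Sequence` is covariant, so List[str], Tuple[str, ...] and similar
    # sequence types all type-check, unlike the invariant `List`.
    ...
```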
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5354/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5354/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5759
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5759/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5759/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5759/events
|
https://github.com/huggingface/datasets/issues/5759
| 1,669,977,848
|
I_kwDODunzps5jidb4
| 5,759
|
Can I load in list of list of dict format?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/72137647?v=4",
"events_url": "https://api.github.com/users/LZY-the-boys/events{/privacy}",
"followers_url": "https://api.github.com/users/LZY-the-boys/followers",
"following_url": "https://api.github.com/users/LZY-the-boys/following{/other_user}",
"gists_url": "https://api.github.com/users/LZY-the-boys/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LZY-the-boys",
"id": 72137647,
"login": "LZY-the-boys",
"node_id": "MDQ6VXNlcjcyMTM3NjQ3",
"organizations_url": "https://api.github.com/users/LZY-the-boys/orgs",
"received_events_url": "https://api.github.com/users/LZY-the-boys/received_events",
"repos_url": "https://api.github.com/users/LZY-the-boys/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LZY-the-boys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LZY-the-boys/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LZY-the-boys",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @LZY-the-boys.\r\n\r\nCould you please give more details about what is your intended dataset structure? What are the names of the columns and the value of each row?\r\n\r\nCurrently, the JSON-Lines format is supported:\r\n- Each line correspond to one row of the dataset\r\n- Each line is composed of one JSON object, where the names are the names of the columns, and the values are the values for the row-column pair."
] | 2023-04-16T13:50:14Z
| 2023-04-19T12:04:36Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
My JSONL dataset has the following format:
```
[{'input': xxx, 'output': xxx}, {'input': xxx, 'output': xxx}, ...]
[{'input': xxx, 'output': xxx}, {'input': xxx, 'output': xxx}, ...]
```
When I try to use `datasets.load_dataset('json', data_files=path)` or `datasets.Dataset.from_json`, it raises
```
File "site-packages/datasets/arrow_dataset.py", line 1078, in from_json
).read()
File "site-packages/datasets/io/json.py", line 59, in read
self.builder.download_and_prepare(
File "site-packages/datasets/builder.py", line 872, in download_and_prepare
self._download_and_prepare(
File "site-packages/datasets/builder.py", line 967, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "site-packages/datasets/builder.py", line 1749, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "site-packages/datasets/builder.py", line 1892, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
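Until lists of lists are supported natively, one workaround is to flatten each line before building the dataset. A hedged sketch, assuming each line of `data.jsonl` (an illustrative path) is a JSON array of objects:
```python
import json

from datasets import Dataset

def gen(path="data.jsonl"):
    with open(path) as f:
        for line in f:
            # Each line is a list of dicts; flatten it into individual rows.
            yield from json.loads(line)

ds = Dataset.from_generator(gen)  # rows like {'input': ..., 'output': ...}
```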
### Motivation
I want to use features like `Dataset.map` or `Dataset.shuffle`, so I need the in-memory dataset to be in `arrow_dataset.Dataset` format
### Your contribution
PR
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5759/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5759/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7238
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7238/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7238/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7238/events
|
https://github.com/huggingface/datasets/issues/7238
| 2,598,409,993
|
I_kwDODunzps6a4JcJ
| 7,238
|
incompatibility issue when using load_dataset with datasets==3.0.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/74985234?v=4",
"events_url": "https://api.github.com/users/jupiterMJM/events{/privacy}",
"followers_url": "https://api.github.com/users/jupiterMJM/followers",
"following_url": "https://api.github.com/users/jupiterMJM/following{/other_user}",
"gists_url": "https://api.github.com/users/jupiterMJM/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jupiterMJM",
"id": 74985234,
"login": "jupiterMJM",
"node_id": "MDQ6VXNlcjc0OTg1MjM0",
"organizations_url": "https://api.github.com/users/jupiterMJM/orgs",
"received_events_url": "https://api.github.com/users/jupiterMJM/received_events",
"repos_url": "https://api.github.com/users/jupiterMJM/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jupiterMJM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jupiterMJM/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jupiterMJM",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi! I'm also getting the same issue - have you been able to find a solution to this? ",
"From what I remember, I stayed at the \"downgraded\" version of dataset (2.21.0)"
] | 2024-10-18T21:25:23Z
| 2024-12-09T09:49:32Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
There is a bug when using `load_dataset` with `datasets` version 3.0.1.
Please see "Steps to reproduce the bug" below.
To work around it, I had to downgrade to version 2.21.0.
OS: Ubuntu 24 (AWS instance)
Python: same bug under 3.12 and 3.10
The error I had was:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/load.py", line 2096, in load_dataset
builder_instance.download_and_prepare(
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare
self._download_and_prepare(
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 1647, in _download_and_prepare
super()._download_and_prepare(
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 977, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/ubuntu/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_6_0/cb17afd34f5799f97e8f48398748f83006335b702bd785f9880797838d541b81/common_voice_6_0.py", line 159, in _split_generators
archive_path = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template))
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/download/download_manager.py", line 150, in download
download_config = self.download_config.copy()
File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/download/download_config.py", line 73, in copy
return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
TypeError: DownloadConfig.__init__() got an unexpected keyword argument 'ignore_url_params'
```
### Steps to reproduce the bug
1. install datasets with ```pip install datasets --upgrade```
2. launch python; run `from datasets import load_dataset`
3. run `load_dataset("mozilla-foundation/common_voice_6_0")`
4. exit python
5. uninstall datasets; then ```pip install datasets==2.21.0```
6. launch python; run `from datasets import load_dataset`
7. run `load_dataset("mozilla-foundation/common_voice_6_0")`
8. Everything runs fine now
### Expected behavior
Be able to download a dataset without error
### Environment info
- `datasets` version: 3.0.1
- Platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.39
- Python version: 3.12.4
- `huggingface_hub` version: 0.26.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7238/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7238/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6764
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6764/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6764/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6764/events
|
https://github.com/huggingface/datasets/issues/6764
| 2,215,767,119
|
I_kwDODunzps6EEexP
| 6,764
|
load_dataset can't work with symbolic links
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13640533?v=4",
"events_url": "https://api.github.com/users/VladimirVincan/events{/privacy}",
"followers_url": "https://api.github.com/users/VladimirVincan/followers",
"following_url": "https://api.github.com/users/VladimirVincan/following{/other_user}",
"gists_url": "https://api.github.com/users/VladimirVincan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VladimirVincan",
"id": 13640533,
"login": "VladimirVincan",
"node_id": "MDQ6VXNlcjEzNjQwNTMz",
"organizations_url": "https://api.github.com/users/VladimirVincan/orgs",
"received_events_url": "https://api.github.com/users/VladimirVincan/received_events",
"repos_url": "https://api.github.com/users/VladimirVincan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VladimirVincan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VladimirVincan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VladimirVincan",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-03-29T17:49:28Z
| 2024-03-29T17:52:27Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Enable the `load_dataset` function to load local datasets with symbolic links.
E.g., this dataset can be loaded:
```
├── example_dataset/
│   ├── data/
│   │   ├── train/
│   │   │   ├── file0
│   │   │   └── file1
│   │   └── dev/
│   │       ├── file2
│   │       └── file3
│   └── metadata.csv
```
while this dataset can't:
```
├── example_dataset_symlink/
│   ├── data/
│   │   ├── train/
│   │   │   ├── sym0 -> file0
│   │   │   └── sym1 -> file1
│   │   └── dev/
│   │       ├── sym2 -> file2
│   │       └── sym3 -> file3
│   └── metadata.csv
```
I have created an example dataset in order to reproduce the problem:
1. Unzip `example_dataset.zip`.
2. Run `no_symlink.sh`. Training should start without issues.
3. Run `symlink.sh`. You will see that all four examples end up in the train split, instead of two in train and two in dev. The script won't load the correct audio files.
[example_dataset.zip](https://github.com/huggingface/datasets/files/14807053/example_dataset.zip)
### Motivation
I have a very large dataset locally. Instead of initiating training on the entire dataset, I need to start training on smaller subsets of the data. Due to the purpose of the experiments I am running, I will need to create many smaller datasets with overlapping data. Instead of copying all the files for each subset, I would prefer copying symbolic links to the data. This way, the disk usage would not significantly increase beyond the initial dataset size.
Advantages of this approach:
- It would leave a smaller storage footprint on the hard drive
- Creating smaller datasets would be much faster
### Your contribution
I would gladly contribute, if this is something useful to the community. It seems like a simple change of code, something like `file_path = os.path.realpath(file_path)` should be added before loading the files. If anyone has insights on how to incorporate this functionality, I would greatly appreciate your knowledge and input.
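For concreteness, a minimal sketch of the resolution step suggested above (illustrative only; not the library's actual code path):
```python
import os

def resolve_data_file(file_path: str) -> str:
    # Follow symbolic links so that split detection and pattern matching
    # operate on the real files rather than on the link entries.
    return os.path.realpath(file_path)
```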
| null |
{
"+1": 7,
"-1": 0,
"confused": 0,
"eyes": 3,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6764/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6764/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5191
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5191/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5191/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5191/events
|
https://github.com/huggingface/datasets/pull/5191
| 1,433,191,658
|
PR_kwDODunzps5CD0Qp
| 5,191
|
Make torch.Tensor and spaCy models cacheable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-02T13:56:18Z
| 2022-11-02T17:20:48Z
| 2022-11-02T17:18:42Z
|
COLLABORATOR
| null | null | null |
Override `Pickler.save` to implement deterministic, lazily registered reduction functions (inspired by https://github.com/uqfoundation/dill/blob/master/dill/_dill.py#L343) for `torch.Tensor` and spaCy models. A minimal sketch of the idea is shown below.
Fix https://github.com/huggingface/datasets/issues/5170, fix https://github.com/huggingface/datasets/issues/3178
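For illustration only, a minimal sketch of the general approach rather than the actual implementation in this PR (`DeterministicPickler` and `deterministic_dumps` are made-up names):

```python
import io

import dill
import numpy as np
import torch

def _rebuild_tensor(array: np.ndarray) -> torch.Tensor:
    # Inverse of the reduction below: rebuild the tensor from its values.
    return torch.from_numpy(array)

class DeterministicPickler(dill.Pickler):
    def save(self, obj, save_persistent_id=True):
        if isinstance(obj, torch.Tensor):
            # Replace the default reduction, which is not stable across
            # sessions, with one based only on the tensor's values.
            self.save_reduce(_rebuild_tensor, (obj.detach().cpu().numpy(),), obj=obj)
            return
        super().save(obj, save_persistent_id=save_persistent_id)

def deterministic_dumps(obj) -> bytes:
    # Hashing these bytes then yields a stable fingerprint for caching.
    buffer = io.BytesIO()
    DeterministicPickler(buffer).dump(obj)
    return buffer.getvalue()
```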
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5191/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5191/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5191.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5191",
"merged_at": "2022-11-02T17:18:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5191.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5191"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7180
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7180/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7180/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7180/events
|
https://github.com/huggingface/datasets/issues/7180
| 2,554,244,750
|
I_kwDODunzps6YPq6O
| 7,180
|
Memory leak when wrapping datasets into PyTorch Dataset without explicit deletion
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38123329?v=4",
"events_url": "https://api.github.com/users/iamwangyabin/events{/privacy}",
"followers_url": "https://api.github.com/users/iamwangyabin/followers",
"following_url": "https://api.github.com/users/iamwangyabin/following{/other_user}",
"gists_url": "https://api.github.com/users/iamwangyabin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iamwangyabin",
"id": 38123329,
"login": "iamwangyabin",
"node_id": "MDQ6VXNlcjM4MTIzMzI5",
"organizations_url": "https://api.github.com/users/iamwangyabin/orgs",
"received_events_url": "https://api.github.com/users/iamwangyabin/received_events",
"repos_url": "https://api.github.com/users/iamwangyabin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iamwangyabin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamwangyabin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iamwangyabin",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"> I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.\r\n\r\nDatasets are memory mapped so they work like SWAP memory. In particular as long as you have RAM available the data will stay in RAM, and get paged out once your system needs RAM for something else (no OOM).\r\n\r\nrelated: https://github.com/huggingface/datasets/issues/4883"
] | 2024-09-28T14:00:47Z
| 2024-09-30T12:07:56Z
| 2024-09-30T12:07:56Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.
### Steps to reproduce the bug
Create a PyTorch Dataset wrapper for `nebula/cc12m`:
```python
import io

from PIL import Image
from torch.utils.data import Dataset
from tqdm import tqdm
from datasets import load_dataset
from torchvision import transforms

Image.MAX_IMAGE_PIXELS = None
class CC12M(Dataset):
def __init__(self, path_or_name='nebula/cc12m', split='train', transform=None, single_caption=True):
self.raw_dataset = load_dataset(path_or_name)[split]
if transform is None:
self.transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(
mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711]
)
])
else:
self.transform = transforms.Compose(transform)
self.single_caption = single_caption
self.length = len(self.raw_dataset)
def __len__(self):
return self.length
def __getitem__(self, index):
item = self.raw_dataset[index]
caption = item['txt']
with io.BytesIO(item['webp']) as buffer:
image = Image.open(buffer).convert('RGB')
if self.transform:
image = self.transform(image)
# del item # Uncomment this line to prevent the memory leak
return image, caption
```
Iterate through the dataset without the `del item` line in `__getitem__` and observe RAM usage increasing constantly.
Then add `del item` at the end of `__getitem__`:
```python
def __getitem__(self, index):
item = self.raw_dataset[index]
caption = item['txt']
with io.BytesIO(item['webp']) as buffer:
image = Image.open(buffer).convert('RGB')
if self.transform:
image = self.transform(image)
del item # This line prevents the memory leak
return image, caption
```
Iterate through the dataset again and observe that RAM usage remains stable.
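For reference, RAM usage during iteration can be tracked with a small helper like the one below (a sketch that assumes `psutil` is installed; `iterate_and_report` is a made-up name):

```python
import os

import psutil
from tqdm import tqdm

def iterate_and_report(dataset, every: int = 1000) -> None:
    """Iterate over a dataset, printing the resident set size periodically."""
    process = psutil.Process(os.getpid())
    for i, _ in enumerate(tqdm(dataset)):
        if i % every == 0:
            rss_mib = process.memory_info().rss / 2**20
            print(f"step {i}: RSS = {rss_mib:.1f} MiB")
```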
### Expected behavior
RAM usage should remain stable during iteration without needing to explicitly delete items.
Actual behavior:
RAM usage constantly increases unless items are explicitly deleted after use.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-4.18.0-513.5.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.12.4
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/38123329?v=4",
"events_url": "https://api.github.com/users/iamwangyabin/events{/privacy}",
"followers_url": "https://api.github.com/users/iamwangyabin/followers",
"following_url": "https://api.github.com/users/iamwangyabin/following{/other_user}",
"gists_url": "https://api.github.com/users/iamwangyabin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iamwangyabin",
"id": 38123329,
"login": "iamwangyabin",
"node_id": "MDQ6VXNlcjM4MTIzMzI5",
"organizations_url": "https://api.github.com/users/iamwangyabin/orgs",
"received_events_url": "https://api.github.com/users/iamwangyabin/received_events",
"repos_url": "https://api.github.com/users/iamwangyabin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iamwangyabin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamwangyabin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iamwangyabin",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7180/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7180/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6439
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6439/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6439/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6439/events
|
https://github.com/huggingface/datasets/issues/6439
| 2,002,916,514
|
I_kwDODunzps53YhSi
| 6,439
|
Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loading
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AntreasAntoniou",
"id": 10792502,
"login": "AntreasAntoniou",
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AntreasAntoniou",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2023-11-20T20:07:23Z
| 2023-11-20T20:07:37Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am working with a dataset I am trying to publish.
The path is `Antreas/TALI`.
It's a fairly large dataset that contains images, video, audio and text.
I have been having multiple problems when downloading the dataset with the `load_dataset` function -- even with 64 workers, it takes more than 7 days to process.
With `snapshot_download` it takes 12 hours, and that includes the dataset preparation done with `load_dataset` on the downloaded parquet file paths.
Find the script I am using below:
```python
import multiprocessing as mp
import pathlib
from typing import Optional
import datasets
from rich import print
from tqdm import tqdm
def download_dataset_via_hub(
dataset_name: str,
dataset_download_path: pathlib.Path,
num_download_workers: int = mp.cpu_count(),
):
import huggingface_hub as hf_hub
download_folder = hf_hub.snapshot_download(
repo_id=dataset_name,
repo_type="dataset",
cache_dir=dataset_download_path,
resume_download=True,
max_workers=num_download_workers,
ignore_patterns=[],
)
return pathlib.Path(download_folder) / "data"
def load_dataset_via_hub(
dataset_download_path: pathlib.Path,
num_download_workers: int = mp.cpu_count(),
dataset_name: Optional[str] = None,
):
    from datasets import Features, Image, Sequence, Value
dataset_path = download_dataset_via_hub(
dataset_download_path=dataset_download_path,
num_download_workers=num_download_workers,
dataset_name=dataset_name,
)
    # Build lists of parquet file paths for the train/val/test splits
train_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "train" in file.as_posix()
]
val_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "val" in file.as_posix()
]
test_files = [
file.as_posix()
for file in pathlib.Path(dataset_path).glob("*.parquet")
if "test" in file.as_posix()
]
print(
f"Found {len(test_files)} files for testing set, {len(train_files)} for training set and {len(val_files)} for validation set"
)
data_files = {
"test": test_files,
"val": val_files,
"train": train_files,
}
features = Features(
{
"image": Image(
decode=True
), # Set `decode=True` if you want to decode the images, otherwise `decode=False`
"image_url": Value("string"),
"item_idx": Value("int64"),
"wit_features": Sequence(
{
"attribution_passes_lang_id": Value("bool"),
"caption_alt_text_description": Value("string"),
"caption_reference_description": Value("string"),
"caption_title_and_reference_description": Value("string"),
"context_page_description": Value("string"),
"context_section_description": Value("string"),
"hierarchical_section_title": Value("string"),
"is_main_image": Value("bool"),
"language": Value("string"),
"page_changed_recently": Value("bool"),
"page_title": Value("string"),
"page_url": Value("string"),
"section_title": Value("string"),
}
),
"wit_idx": Value("int64"),
"youtube_title_text": Value("string"),
"youtube_description_text": Value("string"),
"youtube_video_content": Value("binary"),
"youtube_video_starting_time": Value("string"),
"youtube_subtitle_text": Value("string"),
"youtube_video_size": Value("int64"),
"youtube_video_file_path": Value("string"),
}
)
dataset = datasets.load_dataset(
"parquet" if dataset_name is None else dataset_name,
data_files=data_files,
features=features,
num_proc=1,
cache_dir=dataset_download_path / "cache",
)
return dataset
if __name__ == "__main__":
dataset_cache = pathlib.Path("/disk/scratch_fast0/tali/")
dataset = load_dataset_via_hub(dataset_cache, dataset_name="Antreas/TALI")[
"test"
]
for sample in tqdm(dataset):
print(list(sample.keys()))
```
Also, streaming this dataset has been painfully slow: streaming the train set takes 15 minutes to start, and streaming the test and val sets takes 3 hours to start!
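For reference, that startup delay can be measured with a few lines like these (a sketch; the repo and split names follow the script above):

```python
import time

from datasets import load_dataset

# Time how long it takes before the first example is yielded when streaming.
t0 = time.perf_counter()
streamed = load_dataset("Antreas/TALI", split="test", streaming=True)
first_example = next(iter(streamed))
print(f"Time to first streamed example: {time.perf_counter() - t0:.1f}s")
```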
### Steps to reproduce the bug
1. Run the code I provided to get a sense of how fast snapshot + manual loading is.
2. Run `datasets.load_dataset("Antreas/TALI")` to get a sense of the speed of that operation.
3. You should now have an appreciation of how long these things take.
### Expected behavior
`load_dataset` should be at least as fast at downloading dataset files as the Hugging Face `snapshot_download` function, not 20 times slower.
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6439/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6439/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4744
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4744/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4744/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4744/events
|
https://github.com/huggingface/datasets/issues/4744
| 1,317,822,345
|
I_kwDODunzps5OjF-J
| 4,744
|
Remove instructions to generate dummy data from our docs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
] | null |
[
"Note that for me personally, conceptually all the dummy data (even for \"canonical\" datasets) should be superseded by `datasets-server`, which performs some kind of CI/CD of datasets (including the canonical ones)",
"I totally agree: next step should be rethinking if dummy data makes sense for canonical datasets (once we have datasets-server) and eventually remove it.\r\n\r\nBut for now, we could at least start by removing the indication to generate dummy data from our docs."
] | 2022-07-26T07:32:58Z
| 2022-08-02T23:50:30Z
| 2022-08-02T23:50:30Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
In our docs, we instruct users to generate dummy data: https://huggingface.co/docs/datasets/dataset_script#testing-data-and-checksum-metadata
However:
- dummy data makes sense only for datasets in our GitHub repo: so that we can test their loading with our CI
- for datasets on the Hub:
- they do not pass any CI test requiring dummy data
- there are no instructions on how they can test their dataset locally using the dummy data
- the generation of the dummy data assumes our GitHub directory structure:
- the dummy data will be generated under `./datasets/<dataset_name>/dummy` even if locally there is no `./datasets` directory (which is the usual case). See issue:
- #4742
CC: @stevhliu
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4744/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4744/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4586
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4586/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4586/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4586/events
|
https://github.com/huggingface/datasets/pull/4586
| 1,287,105,636
|
PR_kwDODunzps46e9xB
| 4,586
|
Host pn_summary data on the Hub instead of Google Drive
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-06-28T10:05:05Z
| 2022-06-28T14:52:56Z
| 2022-06-28T14:42:03Z
|
MEMBER
| null | null | null |
Fix #4581.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4586/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4586/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4586.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4586",
"merged_at": "2022-06-28T14:42:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4586.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4586"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5436
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5436/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5436/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5436/events
|
https://github.com/huggingface/datasets/pull/5436
| 1,536,633,173
|
PR_kwDODunzps5Hjh4v
| 5,436
|
Revert container image pin in CI benchmarks
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11387611?v=4",
"events_url": "https://api.github.com/users/0x2b3bfa0/events{/privacy}",
"followers_url": "https://api.github.com/users/0x2b3bfa0/followers",
"following_url": "https://api.github.com/users/0x2b3bfa0/following{/other_user}",
"gists_url": "https://api.github.com/users/0x2b3bfa0/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/0x2b3bfa0",
"id": 11387611,
"login": "0x2b3bfa0",
"node_id": "MDQ6VXNlcjExMzg3NjEx",
"organizations_url": "https://api.github.com/users/0x2b3bfa0/orgs",
"received_events_url": "https://api.github.com/users/0x2b3bfa0/received_events",
"repos_url": "https://api.github.com/users/0x2b3bfa0/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/0x2b3bfa0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0x2b3bfa0/subscriptions",
"type": "User",
"url": "https://api.github.com/users/0x2b3bfa0",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.013736 / 0.011353 (0.002383) | 0.006253 / 0.011008 (-0.004755) | 0.127076 / 0.038508 (0.088568) | 0.040997 / 0.023109 (0.017888) | 0.394744 / 0.275898 (0.118846) | 0.454285 / 0.323480 (0.130805) | 0.009864 / 0.007986 (0.001878) | 0.005093 / 0.004328 (0.000765) | 0.098714 / 0.004250 (0.094464) | 0.044308 / 0.037052 (0.007255) | 0.421951 / 0.258489 (0.163462) | 0.462280 / 0.293841 (0.168439) | 0.059979 / 0.128546 (-0.068567) | 0.020607 / 0.075646 (-0.055039) | 0.443593 / 0.419271 (0.024321) | 0.062332 / 0.043533 (0.018799) | 0.411335 / 0.255139 (0.156196) | 0.426524 / 0.283200 (0.143324) | 0.118233 / 0.141683 (-0.023450) | 1.877681 / 1.452155 (0.425527) | 1.865271 / 1.492716 (0.372555) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234791 / 0.018006 (0.216784) | 0.557322 / 0.000490 (0.556833) | 0.000528 / 0.000200 (0.000328) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030260 / 0.037411 (-0.007151) | 0.122594 / 0.014526 (0.108068) | 0.142142 / 0.176557 (-0.034414) | 0.197098 / 0.737135 (-0.540037) | 0.150978 / 0.296338 (-0.145360) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.622644 / 0.215209 (0.407435) | 6.320078 / 2.077655 (4.242423) | 2.552755 / 1.504120 (1.048635) | 2.188647 / 1.541195 (0.647453) | 2.226602 / 1.468490 
(0.758112) | 1.288083 / 4.584777 (-3.296694) | 5.624143 / 3.745712 (1.878431) | 3.208382 / 5.269862 (-2.061480) | 2.115222 / 4.565676 (-2.450455) | 0.146420 / 0.424275 (-0.277856) | 0.014464 / 0.007607 (0.006857) | 0.816470 / 0.226044 (0.590425) | 7.984049 / 2.268929 (5.715120) | 3.364942 / 55.444624 (-52.079682) | 2.552306 / 6.876477 (-4.324171) | 2.664575 / 2.142072 (0.522503) | 1.556177 / 4.805227 (-3.249050) | 0.263389 / 6.500664 (-6.237275) | 0.076861 / 0.075469 (0.001391) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.553734 / 1.841788 (-0.288054) | 18.365029 / 8.074308 (10.290721) | 20.993993 / 10.191392 (10.802601) | 0.235642 / 0.680424 (-0.444782) | 0.047084 / 0.534201 (-0.487117) | 0.555610 / 0.579283 (-0.023673) | 0.659413 / 0.434364 (0.225049) | 0.639284 / 0.540337 (0.098947) | 0.756317 / 1.386936 (-0.630620) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014709 / 0.011353 (0.003356) | 0.006673 / 0.011008 (-0.004335) | 0.133718 / 0.038508 (0.095210) | 0.035699 / 0.023109 (0.012590) | 0.459089 / 0.275898 (0.183191) | 0.538071 / 0.323480 (0.214591) | 0.007376 / 0.007986 (-0.000610) | 0.004688 / 0.004328 (0.000360) | 0.104909 / 0.004250 (0.100659) | 0.064942 / 0.037052 (0.027890) | 0.466158 / 0.258489 (0.207669) | 0.566100 / 0.293841 (0.272259) | 0.057368 / 0.128546 (-0.071178) | 0.021572 / 0.075646 (-0.054075) | 0.413826 / 0.419271 (-0.005446) | 0.079543 / 0.043533 (0.036010) | 0.493313 / 0.255139 (0.238174) | 0.517787 / 0.283200 (0.234587) | 0.119836 / 0.141683 (-0.021847) | 1.833956 / 1.452155 (0.381801) | 2.003288 / 1.492716 (0.510572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.276013 / 0.018006 (0.258007) | 0.549194 / 0.000490 (0.548704) | 0.010939 / 0.000200 (0.010739) | 0.000129 / 0.000054 (0.000075) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034983 / 0.037411 (-0.002428) | 0.131576 / 0.014526 (0.117050) | 0.140651 / 0.176557 (-0.035906) | 0.186455 / 0.737135 (-0.550681) | 0.146309 / 0.296338 (-0.150029) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.675973 / 0.215209 (0.460763) | 6.821862 / 2.077655 (4.744208) | 3.090307 / 1.504120 (1.586187) | 2.710679 / 1.541195 (1.169484) | 2.891577 / 1.468490 (1.423087) | 1.306160 / 4.584777 (-3.278617) | 5.629763 / 3.745712 (1.884051) | 4.662578 / 5.269862 (-0.607283) | 2.670195 / 4.565676 (-1.895482) | 0.153867 / 0.424275 (-0.270408) | 0.016028 / 0.007607 (0.008421) | 0.878702 / 0.226044 (0.652658) | 8.801612 / 2.268929 (6.532683) | 4.005520 / 55.444624 (-51.439104) | 3.124755 / 6.876477 (-3.751721) | 3.382132 / 2.142072 (1.240060) | 1.525951 / 4.805227 (-3.279277) | 0.263350 / 6.500664 (-6.237315) | 0.079285 / 0.075469 (0.003815) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.647591 / 1.841788 (-0.194197) | 18.281646 / 8.074308 (10.207338) | 21.072142 / 10.191392 (10.880750) | 0.232236 / 0.680424 (-0.448188) | 0.026126 / 0.534201 (-0.508075) | 0.546926 / 0.579283 (-0.032357) | 0.634496 / 0.434364 (0.200132) | 0.604345 / 0.540337 (0.064007) | 0.730159 / 1.386936 (-0.656777) |\n\n</details>\n</details>\n\n\n"
] | 2023-01-17T15:59:50Z
| 2023-01-18T09:05:49Z
| 2023-01-18T06:29:06Z
|
CONTRIBUTOR
| null | null | null |
Closes #5433, reverts #5432, and also:
* Uses [ghcr.io container images](https://cml.dev/doc/self-hosted-runners/#docker-images) for extra speed
* Updates `actions/checkout` to `v3` (note that `v2` is [deprecated](https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/))
* Follows the new naming convention for environment variables introduced with [iterative/cml#1272](https://github.com/iterative/cml/pull/1272)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5436/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5436/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5436.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5436",
"merged_at": "2023-01-18T06:29:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5436.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5436"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7537
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7537/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7537/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7537/events
|
https://github.com/huggingface/datasets/issues/7537
| 3,018,792,966
|
I_kwDODunzps6z7yAG
| 7,537
|
`datasets.map(..., num_proc=4)` multi-processing fails
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/faaany",
"id": 24477841,
"login": "faaany",
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"repos_url": "https://api.github.com/users/faaany/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"type": "User",
"url": "https://api.github.com/users/faaany",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-04-25T01:53:47Z
| 2025-04-25T05:53:29Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
The following code fails on Python 3.11+:
```python
tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
```
Error log:
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.12/dist-packages/multiprocess/pool.py", line 114, in worker
task = get()
^^^^^
File "/usr/local/lib/python3.12/dist-packages/multiprocess/queues.py", line 371, in get
return _ForkingPickler.loads(res)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 327, in loads
return load(file, ignore, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 313, in load
return Unpickler(file, ignore=ignore, **kwds).load()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 525, in load
obj = StockUnpickler.load(self)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 659, in _create_code
if len(args) == 16: return CodeType(*args)
^^^^^^^^^^^^^^^
TypeError: code() argument 13 must be str, not int
```
After upgrading dill to the latest 0.4.0 with `pip install --upgrade dill`, it passes. So there seems to be a compatibility issue between dill 0.3.4 and Python 3.11+, because Python 3.10 works fine.
Is the dill determinism issue mentioned in https://github.com/huggingface/datasets/blob/main/setup.py#L117 still valid? Any plan to unpin?
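For reference, a tiny guard one could run before calling `map` (hypothetical, not part of `datasets`; it merely encodes the observation above that dill 0.3.4 breaks on Python 3.11+ while 0.4.0 works):

```python
import sys

import dill
from packaging.version import Version

# Encodes the observation above: dill 0.3.4 fails to unpickle code objects
# on Python 3.11+, while dill 0.4.0 works.
if sys.version_info >= (3, 11) and Version(dill.__version__) < Version("0.4.0"):
    raise RuntimeError(
        f"dill {dill.__version__} may be incompatible with Python "
        f"{sys.version_info.major}.{sys.version_info.minor}; upgrade dill."
    )
```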
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7537/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7537/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7272
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7272/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7272/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7272/events
|
https://github.com/huggingface/datasets/pull/7272
| 2,627,223,390
|
PR_kwDODunzps6AirL2
| 7,272
|
fix conda release workflow
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7272). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-10-31T15:56:19Z
| 2024-10-31T15:58:35Z
| 2024-10-31T15:57:29Z
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7272/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7272/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7272.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7272",
"merged_at": "2024-10-31T15:57:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7272.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7272"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5693
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5693/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5693/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5693/events
|
https://github.com/huggingface/datasets/pull/5693
| 1,649,934,749
|
PR_kwDODunzps5NYdPS
| 5,693
|
[docs] Split pattern search order
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007841 / 0.011353 (-0.003512) | 0.005640 / 0.011008 (-0.005368) | 0.096465 / 0.038508 (0.057957) | 0.036476 / 0.023109 (0.013367) | 0.306431 / 0.275898 (0.030533) | 0.339545 / 0.323480 (0.016065) | 0.006064 / 0.007986 (-0.001922) | 0.004404 / 0.004328 (0.000076) | 0.073130 / 0.004250 (0.068879) | 0.052765 / 0.037052 (0.015713) | 0.309895 / 0.258489 (0.051406) | 0.354037 / 0.293841 (0.060196) | 0.037127 / 0.128546 (-0.091420) | 0.012387 / 0.075646 (-0.063260) | 0.333503 / 0.419271 (-0.085769) | 0.059799 / 0.043533 (0.016266) | 0.305496 / 0.255139 (0.050358) | 0.324122 / 0.283200 (0.040922) | 0.107007 / 0.141683 (-0.034676) | 1.416743 / 1.452155 (-0.035411) | 1.520772 / 1.492716 (0.028055) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261233 / 0.018006 (0.243227) | 0.573806 / 0.000490 (0.573316) | 0.000390 / 0.000200 (0.000190) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027672 / 0.037411 (-0.009740) | 0.112803 / 0.014526 (0.098278) | 0.121085 / 0.176557 (-0.055471) | 0.176056 / 0.737135 (-0.561080) | 0.127171 / 0.296338 (-0.169167) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414756 / 0.215209 (0.199547) | 4.148743 / 2.077655 (2.071088) | 1.883940 / 1.504120 (0.379820) | 1.698771 / 1.541195 (0.157576) | 1.811926 / 1.468490 
(0.343436) | 0.708293 / 4.584777 (-3.876484) | 3.780456 / 3.745712 (0.034744) | 2.098556 / 5.269862 (-3.171306) | 1.323512 / 4.565676 (-3.242164) | 0.086253 / 0.424275 (-0.338022) | 0.012587 / 0.007607 (0.004980) | 0.514824 / 0.226044 (0.288779) | 5.157415 / 2.268929 (2.888487) | 2.382519 / 55.444624 (-53.062105) | 2.014539 / 6.876477 (-4.861938) | 2.215239 / 2.142072 (0.073166) | 0.847178 / 4.805227 (-3.958049) | 0.170053 / 6.500664 (-6.330611) | 0.066461 / 0.075469 (-0.009008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.199056 / 1.841788 (-0.642732) | 15.244999 / 8.074308 (7.170691) | 14.661593 / 10.191392 (4.470201) | 0.168855 / 0.680424 (-0.511569) | 0.017889 / 0.534201 (-0.516312) | 0.424961 / 0.579283 (-0.154322) | 0.428632 / 0.434364 (-0.005732) | 0.502680 / 0.540337 (-0.037658) | 0.597827 / 1.386936 (-0.789109) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007749 / 0.011353 (-0.003604) | 0.005527 / 0.011008 (-0.005482) | 0.074774 / 0.038508 (0.036266) | 0.035367 / 0.023109 (0.012258) | 0.340594 / 0.275898 (0.064696) | 0.373970 / 0.323480 (0.050490) | 0.006094 / 0.007986 (-0.001892) | 0.004428 / 0.004328 (0.000100) | 0.074120 / 0.004250 (0.069869) | 0.054852 / 0.037052 (0.017800) | 0.357173 / 0.258489 (0.098684) | 0.388877 / 0.293841 (0.095036) | 0.037002 / 0.128546 (-0.091545) | 0.012337 / 0.075646 (-0.063309) | 0.086962 / 0.419271 (-0.332310) | 0.050370 / 0.043533 (0.006837) | 0.342989 / 0.255139 (0.087850) | 0.358065 / 0.283200 (0.074865) | 0.111063 / 0.141683 (-0.030620) | 1.516704 / 1.452155 (0.064549) | 1.634359 / 1.492716 (0.141643) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261493 / 0.018006 (0.243487) | 0.566288 / 0.000490 (0.565799) | 0.000439 / 0.000200 (0.000239) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030426 / 0.037411 (-0.006985) | 0.114606 / 0.014526 (0.100080) | 0.126134 / 0.176557 (-0.050423) | 0.175324 / 0.737135 (-0.561812) | 0.132766 / 0.296338 (-0.163573) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426785 / 0.215209 (0.211576) | 4.243555 / 2.077655 (2.165900) | 2.089631 / 1.504120 (0.585511) | 1.994562 / 1.541195 (0.453367) | 2.140284 / 1.468490 (0.671794) | 0.698645 / 4.584777 (-3.886132) | 3.807471 / 3.745712 (0.061759) | 3.275343 / 5.269862 (-1.994519) | 1.796756 / 4.565676 (-2.768921) | 0.085986 / 0.424275 (-0.338289) | 0.012213 / 0.007607 (0.004606) | 0.536815 / 0.226044 (0.310771) | 5.344611 / 2.268929 (3.075683) | 2.498578 / 55.444624 (-52.946047) | 2.153260 / 6.876477 (-4.723217) | 2.251310 / 2.142072 (0.109237) | 0.839104 / 4.805227 (-3.966123) | 0.169639 / 6.500664 (-6.331025) | 0.065880 / 0.075469 (-0.009589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268610 / 1.841788 (-0.573178) | 15.624915 / 8.074308 (7.550606) | 15.163684 / 10.191392 (4.972292) | 0.172992 / 0.680424 (-0.507432) | 0.018154 / 0.534201 (-0.516047) | 0.440485 / 0.579283 (-0.138798) | 0.431949 / 0.434364 (-0.002415) | 0.547935 / 0.540337 (0.007597) | 0.662442 / 1.386936 (-0.724494) |\n\n</details>\n</details>\n\n\n"
] | 2023-03-31T19:51:38Z
| 2023-04-03T18:43:30Z
| 2023-04-03T18:29:58Z
|
MEMBER
| null | null | null |
This PR addresses #5681 about the order of split patterns 🤗 Datasets searches for when generating dataset splits.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5693/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5693/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5693.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5693",
"merged_at": "2023-04-03T18:29:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5693.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5693"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4968
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4968/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4968/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4968/events
|
https://github.com/huggingface/datasets/pull/4968
| 1,369,312,877
|
PR_kwDODunzps4-wKkw
| 4,968
|
Support streaming compguesswhat dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-09-12T05:42:24Z
| 2022-09-12T08:00:06Z
| 2022-09-12T07:58:06Z
|
MEMBER
| null | null | null |
Support streaming `compguesswhat` dataset.
Fix #3191.
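For illustration, a minimal sketch of what streaming support enables; the config name `compguesswhat-original` is assumed from the dataset script and not re-verified here:
```python
from datasets import load_dataset

# Stream records without downloading the full archives first (config name
# assumed; adjust if the script exposes different configs).
ds = load_dataset("compguesswhat", "compguesswhat-original", split="train", streaming=True)
print(next(iter(ds)))
```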
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4968/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4968/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4968.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4968",
"merged_at": "2022-09-12T07:58:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4968.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4968"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6921
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6921/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6921/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6921/events
|
https://github.com/huggingface/datasets/pull/6921
| 2,318,394,398
|
PR_kwDODunzps5wn4Dz
| 6,921
|
Support fsspec 2024.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6921). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005252 / 0.011353 (-0.006100) | 0.003752 / 0.011008 (-0.007257) | 0.064034 / 0.038508 (0.025526) | 0.031205 / 0.023109 (0.008096) | 0.248903 / 0.275898 (-0.026995) | 0.275808 / 0.323480 (-0.047671) | 0.003135 / 0.007986 (-0.004851) | 0.002635 / 0.004328 (-0.001693) | 0.049869 / 0.004250 (0.045619) | 0.047602 / 0.037052 (0.010549) | 0.259738 / 0.258489 (0.001249) | 0.296131 / 0.293841 (0.002290) | 0.027467 / 0.128546 (-0.101080) | 0.010449 / 0.075646 (-0.065197) | 0.201369 / 0.419271 (-0.217903) | 0.036317 / 0.043533 (-0.007216) | 0.244347 / 0.255139 (-0.010792) | 0.267597 / 0.283200 (-0.015602) | 0.019930 / 0.141683 (-0.121753) | 1.149012 / 1.452155 (-0.303143) | 1.188083 / 1.492716 (-0.304633) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095190 / 0.018006 (0.077184) | 0.300705 / 0.000490 (0.300215) | 0.000222 / 0.000200 (0.000022) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019297 / 0.037411 (-0.018115) | 0.063183 / 0.014526 (0.048657) | 0.075094 / 0.176557 (-0.101463) | 0.123556 / 0.737135 (-0.613579) | 0.076721 / 0.296338 (-0.219618) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284136 / 0.215209 (0.068927) | 2.814041 / 2.077655 (0.736387) | 1.471038 / 1.504120 (-0.033082) | 1.344002 / 1.541195 (-0.197193) | 1.353875 / 
1.468490 (-0.114615) | 0.599495 / 4.584777 (-3.985282) | 2.394491 / 3.745712 (-1.351221) | 2.781734 / 5.269862 (-2.488128) | 1.729829 / 4.565676 (-2.835848) | 0.064194 / 0.424275 (-0.360081) | 0.005022 / 0.007607 (-0.002585) | 0.343384 / 0.226044 (0.117340) | 3.357067 / 2.268929 (1.088139) | 1.816323 / 55.444624 (-53.628301) | 1.549405 / 6.876477 (-5.327072) | 1.594394 / 2.142072 (-0.547679) | 0.660650 / 4.805227 (-4.144578) | 0.120271 / 6.500664 (-6.380393) | 0.042422 / 0.075469 (-0.033047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975776 / 1.841788 (-0.866011) | 11.828093 / 8.074308 (3.753784) | 9.384164 / 10.191392 (-0.807228) | 0.140761 / 0.680424 (-0.539663) | 0.014038 / 0.534201 (-0.520163) | 0.284904 / 0.579283 (-0.294379) | 0.263430 / 0.434364 (-0.170934) | 0.320856 / 0.540337 (-0.219482) | 0.419199 / 1.386936 (-0.967737) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005672 / 0.011353 (-0.005681) | 0.003667 / 0.011008 (-0.007341) | 0.049989 / 0.038508 (0.011481) | 0.033115 / 0.023109 (0.010006) | 0.269808 / 0.275898 (-0.006090) | 0.293286 / 0.323480 (-0.030193) | 0.004238 / 0.007986 (-0.003748) | 0.002722 / 0.004328 (-0.001606) | 0.049516 / 0.004250 (0.045265) | 0.042076 / 0.037052 (0.005024) | 0.282182 / 0.258489 (0.023693) | 0.310817 / 0.293841 (0.016976) | 0.029824 / 0.128546 (-0.098722) | 0.010516 / 0.075646 (-0.065130) | 0.058223 / 0.419271 (-0.361049) | 0.033263 / 0.043533 (-0.010270) | 0.268769 / 0.255139 (0.013630) | 0.288308 / 0.283200 (0.005108) | 0.018531 / 0.141683 (-0.123151) | 1.136806 / 1.452155 (-0.315349) | 1.192636 / 1.492716 (-0.300080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096583 / 0.018006 (0.078577) | 0.303678 / 0.000490 (0.303188) | 0.000211 / 0.000200 (0.000011) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022741 / 0.037411 (-0.014670) | 0.075799 / 0.014526 (0.061273) | 0.089930 / 0.176557 (-0.086626) | 0.129093 / 0.737135 (-0.608042) | 0.089672 / 0.296338 (-0.206666) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.292789 / 0.215209 (0.077580) | 2.860137 / 2.077655 (0.782483) | 1.566678 / 1.504120 (0.062558) | 1.437756 / 1.541195 (-0.103439) | 1.472347 / 1.468490 (0.003857) | 0.566814 / 4.584777 (-4.017963) | 0.963918 / 3.745712 (-2.781794) | 2.717199 / 5.269862 (-2.552663) | 1.763612 / 4.565676 (-2.802064) | 0.063601 / 0.424275 (-0.360674) | 0.005308 / 0.007607 (-0.002299) | 0.363111 / 0.226044 (0.137066) | 3.458222 / 2.268929 (1.189293) | 1.939185 / 55.444624 (-53.505440) | 1.659552 / 6.876477 (-5.216925) | 1.801006 / 2.142072 (-0.341067) | 0.648884 / 4.805227 (-4.156343) | 0.116259 / 6.500664 (-6.384405) | 0.041384 / 0.075469 (-0.034085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.001594 / 1.841788 (-0.840194) | 12.371125 / 8.074308 (4.296817) | 10.489763 / 10.191392 (0.298371) | 0.132500 / 0.680424 (-0.547924) | 0.014742 / 0.534201 (-0.519459) | 0.282258 / 0.579283 (-0.297026) | 0.122755 / 0.434364 (-0.311608) | 0.346068 / 0.540337 (-0.194269) | 0.424943 / 1.386936 (-0.961994) |\n\n</details>\n</details>\n\n\n"
] | 2024-05-27T07:00:59Z
| 2024-05-27T08:07:16Z
| 2024-05-27T08:01:08Z
|
MEMBER
| null | null | null |
Support fsspec 2024.5.0.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6921/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6921/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6921.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6921",
"merged_at": "2024-05-27T08:01:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6921.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6921"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6177
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6177/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6177/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6177/events
|
https://github.com/huggingface/datasets/pull/6177
| 1,865,490,962
|
PR_kwDODunzps5Ytky-
| 6,177
|
Use object detection images from `huggingface/documentation-images`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005847 / 0.011353 (-0.005506) | 0.003488 / 0.011008 (-0.007521) | 0.079545 / 0.038508 (0.041037) | 0.055114 / 0.023109 (0.032005) | 0.312694 / 0.275898 (0.036796) | 0.338808 / 0.323480 (0.015329) | 0.004573 / 0.007986 (-0.003413) | 0.002818 / 0.004328 (-0.001510) | 0.062102 / 0.004250 (0.057852) | 0.044072 / 0.037052 (0.007019) | 0.317682 / 0.258489 (0.059192) | 0.354139 / 0.293841 (0.060298) | 0.026905 / 0.128546 (-0.101641) | 0.007990 / 0.075646 (-0.067656) | 0.260071 / 0.419271 (-0.159201) | 0.043658 / 0.043533 (0.000125) | 0.313828 / 0.255139 (0.058689) | 0.339678 / 0.283200 (0.056478) | 0.020076 / 0.141683 (-0.121607) | 1.446321 / 1.452155 (-0.005834) | 1.527046 / 1.492716 (0.034330) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197801 / 0.018006 (0.179795) | 0.432874 / 0.000490 (0.432385) | 0.004093 / 0.000200 (0.003893) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023505 / 0.037411 (-0.013906) | 0.072377 / 0.014526 (0.057852) | 0.081058 / 0.176557 (-0.095498) | 0.141628 / 0.737135 (-0.595507) | 0.081622 / 0.296338 (-0.214716) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395005 / 0.215209 (0.179795) | 3.949006 / 2.077655 (1.871352) | 1.934028 / 1.504120 (0.429908) | 1.756065 / 1.541195 (0.214871) | 1.778719 / 1.468490 
(0.310229) | 0.501279 / 4.584777 (-4.083498) | 3.032120 / 3.745712 (-0.713592) | 2.859751 / 5.269862 (-2.410110) | 1.885924 / 4.565676 (-2.679753) | 0.057236 / 0.424275 (-0.367039) | 0.006704 / 0.007607 (-0.000903) | 0.465794 / 0.226044 (0.239750) | 4.648622 / 2.268929 (2.379694) | 2.345649 / 55.444624 (-53.098975) | 1.981122 / 6.876477 (-4.895355) | 2.148235 / 2.142072 (0.006163) | 0.591466 / 4.805227 (-4.213761) | 0.125262 / 6.500664 (-6.375402) | 0.061305 / 0.075469 (-0.014164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243932 / 1.841788 (-0.597856) | 17.912110 / 8.074308 (9.837802) | 13.662097 / 10.191392 (3.470705) | 0.148051 / 0.680424 (-0.532373) | 0.016778 / 0.534201 (-0.517423) | 0.340342 / 0.579283 (-0.238941) | 0.351720 / 0.434364 (-0.082644) | 0.377837 / 0.540337 (-0.162501) | 0.521163 / 1.386936 (-0.865774) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006011 / 0.011353 (-0.005342) | 0.003549 / 0.011008 (-0.007459) | 0.063579 / 0.038508 (0.025071) | 0.056196 / 0.023109 (0.033087) | 0.448879 / 0.275898 (0.172981) | 0.491542 / 0.323480 (0.168062) | 0.004597 / 0.007986 (-0.003389) | 0.002790 / 0.004328 (-0.001539) | 0.063257 / 0.004250 (0.059006) | 0.045653 / 0.037052 (0.008600) | 0.459714 / 0.258489 (0.201225) | 0.491371 / 0.293841 (0.197530) | 0.028124 / 0.128546 (-0.100422) | 0.008016 / 0.075646 (-0.067630) | 0.069418 / 0.419271 (-0.349853) | 0.040393 / 0.043533 (-0.003140) | 0.450978 / 0.255139 (0.195839) | 0.472075 / 0.283200 (0.188875) | 0.020006 / 0.141683 (-0.121677) | 1.451946 / 1.452155 (-0.000209) | 1.513557 / 1.492716 (0.020840) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225416 / 0.018006 (0.207410) | 0.412287 / 0.000490 (0.411797) | 0.004075 / 0.000200 (0.003875) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025949 / 0.037411 (-0.011463) | 0.080633 / 0.014526 (0.066108) | 0.089960 / 0.176557 (-0.086597) | 0.144530 / 0.737135 (-0.592606) | 0.091427 / 0.296338 (-0.204911) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462311 / 0.215209 (0.247102) | 4.605063 / 2.077655 (2.527408) | 2.541083 / 1.504120 (1.036963) | 2.356341 / 1.541195 (0.815147) | 2.389824 / 1.468490 (0.921334) | 0.507397 / 4.584777 (-4.077380) | 3.079023 / 3.745712 (-0.666689) | 2.792025 / 5.269862 (-2.477837) | 1.846931 / 4.565676 (-2.718746) | 0.058422 / 0.424275 (-0.365853) | 0.006409 / 0.007607 (-0.001199) | 0.530648 / 0.226044 (0.304604) | 5.321030 / 2.268929 (3.052101) | 2.978335 / 55.444624 (-52.466289) | 2.641188 / 6.876477 (-4.235288) | 2.780450 / 2.142072 (0.638378) | 0.593864 / 4.805227 (-4.211363) | 0.125394 / 6.500664 (-6.375270) | 0.061432 / 0.075469 (-0.014037) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337142 / 1.841788 (-0.504646) | 18.841575 / 8.074308 (10.767267) | 14.678622 / 10.191392 (4.487230) | 0.144491 / 0.680424 (-0.535933) | 0.018145 / 0.534201 (-0.516056) | 0.339376 / 0.579283 (-0.239907) | 0.339129 / 0.434364 (-0.095235) | 0.394842 / 0.540337 (-0.145495) | 0.547924 / 1.386936 (-0.839012) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006478 / 0.011353 (-0.004875) | 0.003845 / 0.011008 (-0.007163) | 0.084179 / 0.038508 (0.045671) | 0.071327 / 0.023109 (0.048217) | 0.315206 / 0.275898 (0.039308) | 0.353477 / 0.323480 (0.029997) | 0.005267 / 0.007986 (-0.002719) | 0.003282 / 0.004328 (-0.001046) | 0.064062 / 0.004250 (0.059811) | 0.051940 / 0.037052 (0.014888) | 0.332004 / 0.258489 (0.073515) | 0.363199 / 0.293841 (0.069358) | 0.030546 / 0.128546 (-0.098000) | 0.008453 / 0.075646 (-0.067193) | 0.287636 / 0.419271 (-0.131636) | 0.051999 / 0.043533 (0.008466) | 0.325220 / 0.255139 (0.070081) | 0.355324 / 0.283200 (0.072125) | 0.023417 / 0.141683 (-0.118266) | 1.473370 / 1.452155 (0.021215) | 1.596903 / 1.492716 (0.104186) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212645 / 0.018006 (0.194638) | 0.463766 / 0.000490 (0.463276) | 0.002834 / 0.000200 (0.002634) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028424 / 0.037411 (-0.008987) | 0.082188 / 0.014526 (0.067662) | 0.777186 / 0.176557 (0.600629) | 0.218290 / 0.737135 (-0.518845) | 0.099098 / 0.296338 (-0.197240) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.387138 / 0.215209 (0.171929) | 3.845655 / 2.077655 (1.768000) | 1.929812 / 1.504120 (0.425692) | 1.718263 / 1.541195 (0.177069) | 1.760933 / 1.468490 
(0.292443) | 0.475171 / 4.584777 (-4.109606) | 3.523366 / 3.745712 (-0.222346) | 3.167322 / 5.269862 (-2.102540) | 1.975164 / 4.565676 (-2.590513) | 0.056106 / 0.424275 (-0.368169) | 0.007448 / 0.007607 (-0.000159) | 0.459824 / 0.226044 (0.233779) | 4.590566 / 2.268929 (2.321638) | 2.377968 / 55.444624 (-53.066656) | 2.034052 / 6.876477 (-4.842425) | 2.224976 / 2.142072 (0.082904) | 0.575901 / 4.805227 (-4.229326) | 0.131546 / 6.500664 (-6.369118) | 0.059266 / 0.075469 (-0.016203) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.254783 / 1.841788 (-0.587005) | 19.497795 / 8.074308 (11.423487) | 13.937672 / 10.191392 (3.746280) | 0.164092 / 0.680424 (-0.516332) | 0.017915 / 0.534201 (-0.516286) | 0.391430 / 0.579283 (-0.187853) | 0.403681 / 0.434364 (-0.030683) | 0.457711 / 0.540337 (-0.082626) | 0.620395 / 1.386936 (-0.766541) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006793 / 0.011353 (-0.004560) | 0.004101 / 0.011008 (-0.006907) | 0.064780 / 0.038508 (0.026272) | 0.071087 / 0.023109 (0.047977) | 0.401963 / 0.275898 (0.126065) | 0.433085 / 0.323480 (0.109605) | 0.005348 / 0.007986 (-0.002638) | 0.003289 / 0.004328 (-0.001039) | 0.065209 / 0.004250 (0.060958) | 0.054202 / 0.037052 (0.017150) | 0.405629 / 0.258489 (0.147140) | 0.440326 / 0.293841 (0.146485) | 0.032283 / 0.128546 (-0.096263) | 0.008510 / 0.075646 (-0.067137) | 0.071144 / 0.419271 (-0.348127) | 0.047414 / 0.043533 (0.003881) | 0.402065 / 0.255139 (0.146926) | 0.421217 / 0.283200 (0.138017) | 0.021924 / 0.141683 (-0.119759) | 1.490067 / 1.452155 (0.037913) | 1.539134 / 1.492716 (0.046417) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280072 / 0.018006 (0.262066) | 0.456130 / 0.000490 (0.455641) | 0.020926 / 0.000200 (0.020726) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032040 / 0.037411 (-0.005371) | 0.092343 / 0.014526 (0.077817) | 0.104866 / 0.176557 (-0.071690) | 0.156631 / 0.737135 (-0.580505) | 0.107203 / 0.296338 (-0.189136) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426268 / 0.215209 (0.211059) | 4.255539 / 2.077655 (2.177884) | 2.285077 / 1.504120 (0.780957) | 2.114277 / 1.541195 (0.573083) | 2.159242 / 1.468490 (0.690752) | 0.489421 / 4.584777 (-4.095356) | 3.630797 / 3.745712 (-0.114915) | 3.205238 / 5.269862 (-2.064624) | 1.985846 / 4.565676 (-2.579830) | 0.057436 / 0.424275 (-0.366839) | 0.007154 / 0.007607 (-0.000454) | 0.507294 / 0.226044 (0.281250) | 5.050105 / 2.268929 (2.781176) | 2.750474 / 55.444624 (-52.694151) | 2.404116 / 6.876477 (-4.472360) | 2.576483 / 2.142072 (0.434411) | 0.584909 / 4.805227 (-4.220318) | 0.130695 / 6.500664 (-6.369969) | 0.059743 / 0.075469 (-0.015726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352702 / 1.841788 (-0.489086) | 19.687944 / 8.074308 (11.613636) | 14.991847 / 10.191392 (4.800455) | 0.185164 / 0.680424 (-0.495260) | 0.020314 / 0.534201 (-0.513887) | 0.395162 / 0.579283 (-0.184121) | 0.408917 / 0.434364 (-0.025447) | 0.467049 / 0.540337 (-0.073288) | 0.649209 / 1.386936 (-0.737727) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006142 / 0.011353 (-0.005211) | 0.003621 / 0.011008 (-0.007387) | 0.079880 / 0.038508 (0.041372) | 0.059283 / 0.023109 (0.036173) | 0.310971 / 0.275898 (0.035072) | 0.351620 / 0.323480 (0.028140) | 0.003453 / 0.007986 (-0.004532) | 0.003785 / 0.004328 (-0.000543) | 0.062395 / 0.004250 (0.058145) | 0.047614 / 0.037052 (0.010562) | 0.312688 / 0.258489 (0.054199) | 0.363762 / 0.293841 (0.069921) | 0.027051 / 0.128546 (-0.101495) | 0.007920 / 0.075646 (-0.067726) | 0.261080 / 0.419271 (-0.158192) | 0.044476 / 0.043533 (0.000943) | 0.312615 / 0.255139 (0.057476) | 0.343672 / 0.283200 (0.060472) | 0.022723 / 0.141683 (-0.118960) | 1.441449 / 1.452155 (-0.010706) | 1.509253 / 1.492716 (0.016536) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193171 / 0.018006 (0.175165) | 0.434771 / 0.000490 (0.434281) | 0.003114 / 0.000200 (0.002914) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024209 / 0.037411 (-0.013203) | 0.073891 / 0.014526 (0.059365) | 0.083497 / 0.176557 (-0.093060) | 0.144962 / 0.737135 (-0.592173) | 0.084594 / 0.296338 (-0.211745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392512 / 0.215209 (0.177303) | 3.912692 / 2.077655 (1.835037) | 1.914010 / 1.504120 (0.409890) | 1.743827 / 1.541195 (0.202632) | 1.829244 / 1.468490 
(0.360753) | 0.497740 / 4.584777 (-4.087037) | 2.979222 / 3.745712 (-0.766490) | 2.849786 / 5.269862 (-2.420076) | 1.874411 / 4.565676 (-2.691265) | 0.057270 / 0.424275 (-0.367005) | 0.006673 / 0.007607 (-0.000934) | 0.460724 / 0.226044 (0.234679) | 4.600617 / 2.268929 (2.331689) | 2.333178 / 55.444624 (-53.111446) | 1.999902 / 6.876477 (-4.876575) | 2.170600 / 2.142072 (0.028528) | 0.587716 / 4.805227 (-4.217511) | 0.126374 / 6.500664 (-6.374290) | 0.061926 / 0.075469 (-0.013543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.229767 / 1.841788 (-0.612021) | 18.494462 / 8.074308 (10.420154) | 13.799801 / 10.191392 (3.608409) | 0.137952 / 0.680424 (-0.542472) | 0.017037 / 0.534201 (-0.517164) | 0.333252 / 0.579283 (-0.246031) | 0.357276 / 0.434364 (-0.077088) | 0.380069 / 0.540337 (-0.160268) | 0.526968 / 1.386936 (-0.859968) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006185 / 0.011353 (-0.005168) | 0.003595 / 0.011008 (-0.007413) | 0.063371 / 0.038508 (0.024863) | 0.060461 / 0.023109 (0.037351) | 0.455016 / 0.275898 (0.179118) | 0.490505 / 0.323480 (0.167026) | 0.004738 / 0.007986 (-0.003247) | 0.002852 / 0.004328 (-0.001477) | 0.064161 / 0.004250 (0.059910) | 0.047411 / 0.037052 (0.010359) | 0.453815 / 0.258489 (0.195326) | 0.485354 / 0.293841 (0.191513) | 0.028358 / 0.128546 (-0.100188) | 0.008101 / 0.075646 (-0.067545) | 0.068399 / 0.419271 (-0.350873) | 0.040928 / 0.043533 (-0.002605) | 0.462263 / 0.255139 (0.207124) | 0.478773 / 0.283200 (0.195574) | 0.019787 / 0.141683 (-0.121896) | 1.475798 / 1.452155 (0.023643) | 1.563890 / 1.492716 (0.071174) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239701 / 0.018006 (0.221695) | 0.417442 / 0.000490 (0.416953) | 0.005895 / 0.000200 (0.005695) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026155 / 0.037411 (-0.011256) | 0.081264 / 0.014526 (0.066738) | 0.089734 / 0.176557 (-0.086822) | 0.143965 / 0.737135 (-0.593171) | 0.092156 / 0.296338 (-0.204182) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456420 / 0.215209 (0.241211) | 4.545675 / 2.077655 (2.468020) | 2.477141 / 1.504120 (0.973022) | 2.295142 / 1.541195 (0.753947) | 2.349525 / 1.468490 (0.881035) | 0.502485 / 4.584777 (-4.082292) | 3.072347 / 3.745712 (-0.673365) | 2.798565 / 5.269862 (-2.471296) | 1.849030 / 4.565676 (-2.716647) | 0.057789 / 0.424275 (-0.366487) | 0.006436 / 0.007607 (-0.001172) | 0.529648 / 0.226044 (0.303604) | 5.285670 / 2.268929 (3.016741) | 2.954964 / 55.444624 (-52.489660) | 2.593161 / 6.876477 (-4.283316) | 2.735254 / 2.142072 (0.593181) | 0.587635 / 4.805227 (-4.217592) | 0.124732 / 6.500664 (-6.375932) | 0.060999 / 0.075469 (-0.014470) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354957 / 1.841788 (-0.486831) | 18.803998 / 8.074308 (10.729690) | 14.902712 / 10.191392 (4.711320) | 0.146729 / 0.680424 (-0.533695) | 0.017989 / 0.534201 (-0.516212) | 0.333633 / 0.579283 (-0.245650) | 0.347685 / 0.434364 (-0.086679) | 0.386497 / 0.540337 (-0.153840) | 0.590885 / 1.386936 (-0.796051) |\n\n</details>\n</details>\n\n\n"
] | 2023-08-24T16:16:09Z
| 2023-08-25T16:30:00Z
| 2023-08-25T16:21:17Z
|
COLLABORATOR
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6177/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6177/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6177.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6177",
"merged_at": "2023-08-25T16:21:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6177.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6177"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6071
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6071/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6071/events
|
https://github.com/huggingface/datasets/issues/6071
| 1,821,990,749
|
I_kwDODunzps5smV9d
| 6,071
|
storage_options provided to load_dataset not fully piping through since datasets 2.14.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Thanks for reporting, I opened a PR to fix this\r\n\r\nWhat filesystem are you using ?",
"Hi @lhoestq ! Thank you so much π \r\n\r\nIt's a bit of a custom setup, but in practice I am using a [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html) (wrapped in a `fsspec.implementations.arrow.ArrowFSWrapper` [to make it](https://arrow.apache.org/docs/python/filesystems.html#using-arrow-filesystems-with-fsspec) `fsspec` compatible). I also register it as an entrypoint with `fsspec` so that it's the one that gets automatically resolved when looking for filesystems for the `s3` protocol\r\n\r\nIn my case the `storage_option` that seemed not getting piped through was the filesystem's `endpoint_override` that I use in some tests to point at a mock S3 bucket"
] | 2023-07-26T09:37:20Z
| 2023-07-27T12:42:58Z
| 2023-07-27T12:42:58Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate all the way through, leading to problems when loading data files that need those options to be set.
I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig` - but which might not be taking into account the extra `storage_options` explicitly provided e.g. through `load_dataset()`
### Steps to reproduce the bug
```python
import fsspec
import pandas as pd
import datasets
# Generate mock parquet file
data_files = "demo.parquet"
pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files)
_storage_options = {"x": 1, "y": 2}
fs = fsspec.filesystem("file", **_storage_options)
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options
)
```
Looking at the `storage_options` resolved here:
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331
they end up being `{}` instead of the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). Since these are then used for the filesystem operation a few lines below
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339
the call will fail if the user-provided `storage_options` were needed.
---
A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly:
```python
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options,
download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}),
)
```
### Expected behavior
`storage_options` provided to `load_dataset` take effect in all backend filesystem operations.
### Environment info
datasets==2.14.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6071/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6919
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6919/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6919/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6919/events
|
https://github.com/huggingface/datasets/issues/6919
| 2,315,618,993
|
I_kwDODunzps6KBYqx
| 6,919
|
Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple>
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/67964?v=4",
"events_url": "https://api.github.com/users/juanqui/events{/privacy}",
"followers_url": "https://api.github.com/users/juanqui/followers",
"following_url": "https://api.github.com/users/juanqui/following{/other_user}",
"gists_url": "https://api.github.com/users/juanqui/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/juanqui",
"id": 67964,
"login": "juanqui",
"node_id": "MDQ6VXNlcjY3OTY0",
"organizations_url": "https://api.github.com/users/juanqui/orgs",
"received_events_url": "https://api.github.com/users/juanqui/received_events",
"repos_url": "https://api.github.com/users/juanqui/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/juanqui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juanqui/subscriptions",
"type": "User",
"url": "https://api.github.com/users/juanqui",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-05-24T14:59:45Z
| 2024-05-24T14:59:45Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I wrote a notebook to load an existing dataset, process it, and upload it as a private dataset using `dataset.push_to_hub(...)` at the end. The push to the Hub is failing with:
```
ValueError: Invalid metadata in README.md.
- Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> (50:11)
47 | - 4
48 | - 4
49 | - 8
50 | - !!binary |
----------------^
51 | TwAAAA==
52 | '1': !!python/object/apply:nump ...
```
My dataset has a `train` and `validation` dataset. These are the features:
```
{'c1': Value(dtype='string', id=None),
'c2': Value(dtype='string', id=None),
'c3': [{'value': Value(dtype='string', id=None),
'start': Value(dtype='int64', id=None),
'end': Value(dtype='int64', id=None),
'label': Value(dtype='string', id=None)}],
'c4': Value(dtype='string', id=None),
'c5': Value(dtype='string', id=None),
'c6': Value(dtype='string', id=None),
'c7': Value(dtype='string', id=None),
'c8': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None),
'c9': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
'c10': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None),
'labels': Sequence(feature=ClassLabel(names=['O', 'B-ABC', 'I-ABC', ...], id=None), length=-1, id=None),
'c12': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
```
This used to work until I decided to cast the `labels` feature to a `Sequence(ClassLabel(...))` type with:
```
ds['train'] = ds['train'].cast_column("labels", Sequence(ClassLabel(names=list(labels))))
ds['validation'] = ds['validation'].cast_column("labels", Sequence(ClassLabel(names=list(labels))))
```
### Steps to reproduce the bug
1. Start with any token classification dataset.
2. Add a `labels` column with data such as `[0,0,0,12,13,13,13,0,0]`.
3. Cast the label column from `Sequence` to `Sequence(ClassLabel))` with:
```
labels = ['O', 'B-TEST', 'I-TEST']
ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels)))
```
4. Push to hub with `ds.push_to_hub("me/awesome-stuff-dataset")`
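A possible workaround sketch, assuming the `!!binary`/`!!python/object/apply` tags come from numpy scalars leaking into the `ClassLabel` names: coerce the names to plain Python strings before casting.
```python
from datasets import ClassLabel, Sequence

# Assumption: `labels` may contain numpy scalars; plain str values serialize
# to clean YAML in the README metadata.
labels = [str(label) for label in ['O', 'B-TEST', 'I-TEST']]
ds = ds.cast_column("labels", Sequence(ClassLabel(names=labels)))
```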
### Expected behavior
I expected `push_to_hub` to successfully push my dataset to the hub without error.
### Environment info
Python 3.11.9
datasets==2.19.1
transformers==4.41.1
PyYAML==6.0.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6919/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6919/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6897
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6897/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6897/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6897/events
|
https://github.com/huggingface/datasets/issues/6897
| 2,293,428,243
|
I_kwDODunzps6IsvAT
| 6,897
|
datasets template guide :: issue in documentation YAML
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/59658056?v=4",
"events_url": "https://api.github.com/users/bghira/events{/privacy}",
"followers_url": "https://api.github.com/users/bghira/followers",
"following_url": "https://api.github.com/users/bghira/following{/other_user}",
"gists_url": "https://api.github.com/users/bghira/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bghira",
"id": 59658056,
"login": "bghira",
"node_id": "MDQ6VXNlcjU5NjU4MDU2",
"organizations_url": "https://api.github.com/users/bghira/orgs",
"received_events_url": "https://api.github.com/users/bghira/received_events",
"repos_url": "https://api.github.com/users/bghira/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bghira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bghira/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bghira",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Hello, @bghira.\r\n\r\nThanks for reporting. Please note that the text originating the error is not supposed to be valid YAML: it contains the instructions to generate the actual YAML content, that should replace the instructions comment.\r\n\r\nOn the other hand, I agree that it is not nice to have that YAML error message at the top of the page: \r\n\r\n\r\nI am proposing a change to make the YAML error disappear.",
"thanks albert! i looked at it for a while to figure it out. i think the `raw` view option is the correct way to look at it?"
] | 2024-05-13T17:33:59Z
| 2024-05-16T14:28:17Z
| 2024-05-16T14:28:17Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
There is a YAML error at the top of the page, and I don't think it's supposed to be there.
### Steps to reproduce the bug
1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md)
2. Observe a big red error at the top
3. The rest of the document remains functional
### Expected behavior
I think the YAML block should be displayed or ignored.
### Environment info
N/A
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6897/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6897/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6782
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6782/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6782/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6782/events
|
https://github.com/huggingface/datasets/issues/6782
| 2,228,081,955
|
I_kwDODunzps6EzdUj
| 6,782
|
Image cast_storage very slow for arrays (e.g. numpy, tensors)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"This may be a solution that only changes `cast_storage` of `Image`.\r\nHowever, I'm not totally sure that the assumptions hold that are made about the `ListArray`.\r\n\r\n```python\r\nelif pa.types.is_list(storage.type):\r\n from .features import Array3DExtensionType\r\n\r\n def get_shapes(arr):\r\n shape = ()\r\n while isinstance(arr, pa.ListArray):\r\n len_curr = len(arr)\r\n arr = arr.flatten()\r\n len_new = len(arr)\r\n shape = shape + (len_new // len_curr,)\r\n return shape\r\n\r\n def get_dtypes(arr):\r\n dtype = storage.type\r\n while hasattr(dtype, \"value_type\"):\r\n dtype = dtype.value_type\r\n return dtype\r\n\r\n arrays = []\r\n for i, is_null in enumerate(storage.is_null()):\r\n if not is_null.as_py():\r\n storage_part = storage.take([i])\r\n shape = get_shapes(storage_part)\r\n dtype = get_dtypes(storage_part)\r\n\r\n extension_type = Array3DExtensionType(shape=shape, dtype=str(dtype))\r\n array = pa.ExtensionArray.from_storage(extension_type, storage_part)\r\n arrays.append(array.to_numpy().squeeze(0))\r\n else:\r\n arrays.append(None)\r\n\r\n bytes_array = pa.array(\r\n [encode_np_array(arr)[\"bytes\"] if arr is not None else None for arr in arrays],\r\n type=pa.binary(),\r\n )\r\n path_array = pa.array([None] * len(storage), type=pa.string())\r\n storage = pa.StructArray.from_arrays(\r\n [bytes_array, path_array], [\"bytes\", \"path\"], mask=bytes_array.is_null()\r\n )\r\n```\r\n(Edited): to handle nulls\r\n\r\nNotably this doesn't change anything about the passing through of data or other things, just in the `Image` class.\r\nSeems quite fast:\r\n```bash\r\nFri Apr 5 17:55:51 2024 restats\r\n\r\n 63818 function calls (61995 primitive calls) in 0.812 seconds\r\n\r\n Ordered by: cumulative time\r\n List reduced from 1051 to 20 due to restriction <20>\r\n\r\n ncalls tottime percall cumtime percall filename:lineno(function)\r\n 47/1 0.000 0.000 0.810 0.810 {built-in method builtins.exec}\r\n 2/1 0.000 0.000 0.810 0.810 <string>:1(<module>)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:594(wrapper)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:551(wrapper)\r\n 2/1 0.000 0.000 0.809 0.809 arrow_dataset.py:2916(map)\r\n 3 0.000 0.000 0.807 0.269 arrow_dataset.py:3277(_map_single)\r\n 1 0.000 0.000 0.760 0.760 arrow_writer.py:589(finalize)\r\n 1 0.000 0.000 0.760 0.760 arrow_writer.py:423(write_examples_on_file)\r\n 1 0.000 0.000 0.759 0.759 arrow_writer.py:527(write_batch)\r\n 1 0.001 0.001 0.754 0.754 arrow_writer.py:161(__arrow_array__)\r\n 2/1 0.000 0.000 0.719 0.719 table.py:1800(wrapper)\r\n 1 0.000 0.000 0.719 0.719 table.py:1950(cast_array_to_feature)\r\n 1 0.006 0.006 0.718 0.718 image.py:209(cast_storage)\r\n 1 0.000 0.000 0.451 0.451 image.py:361(encode_np_array)\r\n 1 0.000 0.000 0.444 0.444 image.py:343(image_to_bytes)\r\n 1 0.000 0.000 0.413 0.413 Image.py:2376(save)\r\n 1 0.000 0.000 0.413 0.413 PngImagePlugin.py:1233(_save)\r\n 1 0.000 0.000 0.413 0.413 ImageFile.py:517(_save)\r\n 1 0.000 0.000 0.413 0.413 ImageFile.py:545(_encode_tile)\r\n 397 0.409 0.001 0.409 0.001 {method 'encode' of 'ImagingEncoder' objects}\r\n```",
"Also encounter this problem. Has been strugging with it for a long time...",
"This actually applies to all arrays (numpy or tensors like in torch), not only from external files.\r\n```python\r\nimport numpy as np\r\nimport datasets\r\n\r\nds = datasets.Dataset.from_dict(\r\n {\"image\": [np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8)]},\r\n features=datasets.Features({\"image\": datasets.Image(decode=True)}),\r\n)\r\nds.set_format(\"numpy\")\r\n\r\nds = ds.map(load_from_cache_file=False)\r\n```"
] | 2024-04-05T13:46:54Z
| 2024-04-10T14:36:13Z
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Update: see comments below
### Describe the bug
Operations that save an image from a path are very slow.
I believe the reason for this is that the image data (`numpy`) is converted into `pyarrow` format but then back to Python using `.pylist()` before being converted to a numpy array again.
`pylist` is already slow, but used on a multi-dimensional numpy array such as an image it takes a very long time.
From the trace below we can see that `__arrow_array__` takes a long time.
It is currently also called in `get_inferred_type`; this should be removable (#6781) but doesn't change the underlying issue.
The conversion to `pyarrow` and back also leads to the `numpy` array having type `int64`, which causes a warning message because the image type expects `uint8`.
However, originally the `numpy` image array was in `uint8`.
### Steps to reproduce the bug
```python
from PIL import Image
import numpy as np
import datasets
import cProfile
image = Image.fromarray(np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8))
image.save("test_image.jpg")
ds = datasets.Dataset.from_dict(
{"image": ["test_image.jpg"]},
features=datasets.Features({"image": datasets.Image(decode=True)}),
)
# load as numpy array, e.g. for further processing with map
# same result as map returning numpy arrays
ds.set_format("numpy")
cProfile.run("ds.map(writer_batch_size=1, load_from_cache_file=False)", "restats")
```
```bash
Fri Apr 5 14:56:17 2024 restats
66817 function calls (64992 primitive calls) in 33.382 seconds
Ordered by: cumulative time
List reduced from 1073 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
46/1 0.000 0.000 33.382 33.382 {built-in method builtins.exec}
1 0.000 0.000 33.382 33.382 <string>:1(<module>)
1 0.000 0.000 33.382 33.382 arrow_dataset.py:594(wrapper)
1 0.000 0.000 33.382 33.382 arrow_dataset.py:551(wrapper)
1 0.000 0.000 33.379 33.379 arrow_dataset.py:2916(map)
4 0.000 0.000 33.327 8.332 arrow_dataset.py:3277(_map_single)
1 0.000 0.000 33.311 33.311 arrow_writer.py:465(write)
2 0.000 0.000 33.311 16.656 arrow_writer.py:423(write_examples_on_file)
1 0.000 0.000 33.311 33.311 arrow_writer.py:527(write_batch)
2 14.484 7.242 33.260 16.630 arrow_writer.py:161(__arrow_array__)
1 0.001 0.001 16.438 16.438 arrow_writer.py:121(get_inferred_type)
1 0.000 0.000 14.398 14.398 threading.py:637(wait)
1 0.000 0.000 14.398 14.398 threading.py:323(wait)
8 14.398 1.800 14.398 1.800 {method 'acquire' of '_thread.lock' objects}
4/2 0.000 0.000 4.337 2.169 table.py:1800(wrapper)
2 0.000 0.000 4.337 2.169 table.py:1950(cast_array_to_feature)
2 0.475 0.238 4.337 2.169 image.py:209(cast_storage)
9 2.583 0.287 2.583 0.287 {built-in method numpy.array}
2 0.000 0.000 1.284 0.642 image.py:319(encode_np_array)
2 0.000 0.000 1.246 0.623 image.py:301(image_to_bytes)
```
### Expected behavior
The `numpy` image data should be passed through as it will be directly consumed by `pillow` to convert it to bytes.
As an example, one can replace `list_of_np_array_to_pyarrow_listarray(data)` in `__arrow_array__` with just `out = data` as a test.
We would then have to change `cast_storage` of the `Image` feature so it handles the passed-through data (and decide whether the type needs to be checked beforehand):
```python
bytes_array = pa.array(
[encode_np_array(arr)["bytes"] if arr is not None else None for arr in storage],
type=pa.binary(),
)
```
Leading to the following:
```bash
Fri Apr 5 15:44:27 2024 restats
66419 function calls (64595 primitive calls) in 0.937 seconds
Ordered by: cumulative time
List reduced from 1023 to 20 due to restriction <20>
ncalls tottime percall cumtime percall filename:lineno(function)
47/1 0.000 0.000 0.935 0.935 {built-in method builtins.exec}
2/1 0.000 0.000 0.935 0.935 <string>:1(<module>)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:594(wrapper)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:551(wrapper)
2/1 0.000 0.000 0.934 0.934 arrow_dataset.py:2916(map)
4 0.000 0.000 0.933 0.233 arrow_dataset.py:3277(_map_single)
1 0.000 0.000 0.883 0.883 arrow_writer.py:466(write)
2 0.000 0.000 0.883 0.441 arrow_writer.py:424(write_examples_on_file)
1 0.000 0.000 0.882 0.882 arrow_writer.py:528(write_batch)
2 0.000 0.000 0.877 0.439 arrow_writer.py:161(__arrow_array__)
4/2 0.000 0.000 0.877 0.439 table.py:1800(wrapper)
2 0.000 0.000 0.877 0.439 table.py:1950(cast_array_to_feature)
2 0.009 0.005 0.877 0.439 image.py:209(cast_storage)
2 0.000 0.000 0.868 0.434 image.py:335(encode_np_array)
2 0.000 0.000 0.856 0.428 image.py:317(image_to_bytes)
2 0.000 0.000 0.822 0.411 Image.py:2376(save)
2 0.000 0.000 0.822 0.411 PngImagePlugin.py:1233(_save)
2 0.000 0.000 0.822 0.411 ImageFile.py:517(_save)
2 0.000 0.000 0.821 0.411 ImageFile.py:545(_encode_tile)
589 0.803 0.001 0.803 0.001 {method 'encode' of 'ImagingEncoder' objects}
```
This is of course only a test, as it passes through all `numpy` arrays irrespective of whether they should be an image.
Also I guess `cast_storage` is meant for casting `pyarrow` storage exclusively.
Converting to a `pyarrow` array seems like a good solution as it also handles `pytorch` tensors etc.; maybe there is a more efficient way to create a PIL image from a `pyarrow` array?
I am not sure how this should be handled, but I would be happy to help if there is a good solution.
### Environment info
- `datasets` version: 2.18.1.dev0
- Platform: Linux-6.7.11-200.fc39.x86_64-x86_64-with-glibc2.38
- Python version: 3.12.2
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.3.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6782/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6782/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6375
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6375/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6375/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6375/events
|
https://github.com/huggingface/datasets/pull/6375
| 1,973,877,879
|
PR_kwDODunzps5eacao
| 6,375
|
Temporarily pin pyarrow < 14.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008947 / 0.011353 (-0.002406) | 0.005602 / 0.011008 (-0.005406) | 0.111208 / 0.038508 (0.072700) | 0.082750 / 0.023109 (0.059641) | 0.453277 / 0.275898 (0.177379) | 0.480072 / 0.323480 (0.156592) | 0.005254 / 0.007986 (-0.002731) | 0.005421 / 0.004328 (0.001092) | 0.082899 / 0.004250 (0.078648) | 0.062859 / 0.037052 (0.025807) | 0.466703 / 0.258489 (0.208214) | 0.478241 / 0.293841 (0.184400) | 0.050754 / 0.128546 (-0.077792) | 0.017726 / 0.075646 (-0.057920) | 0.374830 / 0.419271 (-0.044442) | 0.068577 / 0.043533 (0.025044) | 0.453643 / 0.255139 (0.198504) | 0.453736 / 0.283200 (0.170537) | 0.037313 / 0.141683 (-0.104369) | 1.741215 / 1.452155 (0.289060) | 1.862247 / 1.492716 (0.369531) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.314174 / 0.018006 (0.296168) | 0.644439 / 0.000490 (0.643949) | 0.013914 / 0.000200 (0.013715) | 0.000478 / 0.000054 (0.000424) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030462 / 0.037411 (-0.006949) | 0.096789 / 0.014526 (0.082263) | 0.109999 / 0.176557 (-0.066557) | 0.184610 / 0.737135 (-0.552525) | 0.113846 / 0.296338 (-0.182493) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.586508 / 0.215209 (0.371299) | 5.785138 / 2.077655 (3.707484) | 2.578512 / 1.504120 (1.074392) | 2.266981 / 1.541195 (0.725786) | 2.442463 / 1.468490 
(0.973973) | 0.880973 / 4.584777 (-3.703804) | 5.410327 / 3.745712 (1.664615) | 4.976842 / 5.269862 (-0.293020) | 3.020535 / 4.565676 (-1.545142) | 0.089640 / 0.424275 (-0.334635) | 0.009126 / 0.007607 (0.001519) | 0.682364 / 0.226044 (0.456319) | 6.840507 / 2.268929 (4.571579) | 3.313314 / 55.444624 (-52.131310) | 2.815313 / 6.876477 (-4.061164) | 2.851787 / 2.142072 (0.709715) | 1.044916 / 4.805227 (-3.760312) | 0.218346 / 6.500664 (-6.282318) | 0.075655 / 0.075469 (0.000186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.641767 / 1.841788 (-0.200020) | 24.618096 / 8.074308 (16.543788) | 21.557652 / 10.191392 (11.366260) | 0.211622 / 0.680424 (-0.468801) | 0.028775 / 0.534201 (-0.505426) | 0.480469 / 0.579283 (-0.098814) | 0.593311 / 0.434364 (0.158948) | 0.560620 / 0.540337 (0.020283) | 0.827026 / 1.386936 (-0.559910) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009347 / 0.011353 (-0.002006) | 0.005184 / 0.011008 (-0.005824) | 0.078878 / 0.038508 (0.040370) | 0.083067 / 0.023109 (0.059957) | 0.446591 / 0.275898 (0.170693) | 0.512934 / 0.323480 (0.189454) | 0.006614 / 0.007986 (-0.001372) | 0.004477 / 0.004328 (0.000148) | 0.087403 / 0.004250 (0.083153) | 0.060710 / 0.037052 (0.023658) | 0.451811 / 0.258489 (0.193322) | 0.482031 / 0.293841 (0.188190) | 0.051685 / 0.128546 (-0.076862) | 0.013436 / 0.075646 (-0.062210) | 0.109012 / 0.419271 (-0.310259) | 0.059654 / 0.043533 (0.016121) | 0.439041 / 0.255139 (0.183902) | 0.481708 / 0.283200 (0.198508) | 0.037393 / 0.141683 (-0.104290) | 1.761704 / 1.452155 (0.309549) | 1.946711 / 1.492716 (0.453995) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287981 / 0.018006 (0.269975) | 0.610219 / 0.000490 (0.609729) | 0.006733 / 0.000200 (0.006533) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038999 / 0.037411 (0.001588) | 0.100613 / 0.014526 (0.086087) | 0.126445 / 0.176557 (-0.050111) | 0.187596 / 0.737135 (-0.549540) | 0.122130 / 0.296338 (-0.174208) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.647686 / 0.215209 (0.432477) | 6.176079 / 2.077655 (4.098424) | 2.800232 / 1.504120 (1.296112) | 2.434625 / 1.541195 (0.893430) | 2.460646 / 1.468490 (0.992155) | 0.923736 / 4.584777 (-3.661041) | 5.480197 / 3.745712 (1.734485) | 4.849250 / 5.269862 (-0.420612) | 3.031576 / 4.565676 (-1.534101) | 0.102525 / 0.424275 (-0.321750) | 0.008688 / 0.007607 (0.001081) | 0.766097 / 0.226044 (0.540052) | 7.626822 / 2.268929 (5.357893) | 3.719155 / 55.444624 (-51.725469) | 2.967121 / 6.876477 (-3.909356) | 3.182464 / 2.142072 (1.040392) | 1.018315 / 4.805227 (-3.786912) | 0.211300 / 6.500664 (-6.289364) | 0.083055 / 0.075469 (0.007586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.731619 / 1.841788 (-0.110168) | 25.315978 / 8.074308 (17.241669) | 22.736306 / 10.191392 (12.544914) | 0.270330 / 0.680424 (-0.410094) | 0.034790 / 0.534201 (-0.499411) | 0.488675 / 0.579283 (-0.090608) | 0.603426 / 0.434364 (0.169062) | 0.572547 / 0.540337 (0.032210) | 0.825719 / 1.386936 (-0.561217) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008992 / 0.011353 (-0.002360) | 0.005086 / 0.011008 (-0.005923) | 0.107400 / 0.038508 (0.068892) | 0.091894 / 0.023109 (0.068785) | 0.382347 / 0.275898 (0.106449) | 0.446581 / 0.323480 (0.123101) | 0.005179 / 0.007986 (-0.002807) | 0.006356 / 0.004328 (0.002028) | 0.084979 / 0.004250 (0.080729) | 0.060647 / 0.037052 (0.023594) | 0.385940 / 0.258489 (0.127451) | 0.444817 / 0.293841 (0.150976) | 0.049484 / 0.128546 (-0.079062) | 0.014173 / 0.075646 (-0.061473) | 0.345704 / 0.419271 (-0.073567) | 0.068082 / 0.043533 (0.024550) | 0.377170 / 0.255139 (0.122031) | 0.411816 / 0.283200 (0.128616) | 0.043049 / 0.141683 (-0.098633) | 1.681499 / 1.452155 (0.229344) | 1.805428 / 1.492716 (0.312712) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.323170 / 0.018006 (0.305164) | 0.693845 / 0.000490 (0.693355) | 0.015499 / 0.000200 (0.015299) | 0.000603 / 0.000054 (0.000548) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031629 / 0.037411 (-0.005783) | 0.093511 / 0.014526 (0.078985) | 0.112400 / 0.176557 (-0.064157) | 0.173731 / 0.737135 (-0.563405) | 0.116013 / 0.296338 (-0.180325) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.576724 / 0.215209 (0.361515) | 5.775055 / 2.077655 (3.697400) | 2.755869 / 1.504120 (1.251749) | 2.430253 / 1.541195 (0.889058) | 2.479629 / 1.468490 
(1.011139) | 0.841472 / 4.584777 (-3.743305) | 5.120536 / 3.745712 (1.374824) | 4.813281 / 5.269862 (-0.456581) | 3.054617 / 4.565676 (-1.511059) | 0.091459 / 0.424275 (-0.332816) | 0.009072 / 0.007607 (0.001465) | 0.742674 / 0.226044 (0.516629) | 7.137861 / 2.268929 (4.868933) | 3.497568 / 55.444624 (-51.947056) | 2.814658 / 6.876477 (-4.061819) | 2.934415 / 2.142072 (0.792343) | 0.970855 / 4.805227 (-3.834372) | 0.213366 / 6.500664 (-6.287299) | 0.078763 / 0.075469 (0.003293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.584716 / 1.841788 (-0.257072) | 24.098173 / 8.074308 (16.023865) | 20.746352 / 10.191392 (10.554960) | 0.215313 / 0.680424 (-0.465111) | 0.029538 / 0.534201 (-0.504663) | 0.448672 / 0.579283 (-0.130611) | 0.580023 / 0.434364 (0.145659) | 0.537867 / 0.540337 (-0.002471) | 0.804622 / 1.386936 (-0.582314) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008965 / 0.011353 (-0.002388) | 0.005544 / 0.011008 (-0.005464) | 0.076806 / 0.038508 (0.038298) | 0.085333 / 0.023109 (0.062224) | 0.509974 / 0.275898 (0.234076) | 0.511548 / 0.323480 (0.188068) | 0.007136 / 0.007986 (-0.000849) | 0.004491 / 0.004328 (0.000163) | 0.086687 / 0.004250 (0.082437) | 0.066539 / 0.037052 (0.029486) | 0.483663 / 0.258489 (0.225174) | 0.529480 / 0.293841 (0.235639) | 0.046296 / 0.128546 (-0.082250) | 0.014736 / 0.075646 (-0.060910) | 0.088261 / 0.419271 (-0.331010) | 0.056753 / 0.043533 (0.013220) | 0.511698 / 0.255139 (0.256559) | 0.497956 / 0.283200 (0.214756) | 0.034753 / 0.141683 (-0.106930) | 1.828354 / 1.452155 (0.376199) | 1.799211 / 1.492716 (0.306494) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.389652 / 0.018006 (0.371645) | 0.602522 / 0.000490 (0.602033) | 0.068363 / 0.000200 (0.068163) | 0.000493 / 0.000054 (0.000439) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036431 / 0.037411 (-0.000980) | 0.102162 / 0.014526 (0.087636) | 0.122466 / 0.176557 (-0.054091) | 0.181001 / 0.737135 (-0.556134) | 0.125743 / 0.296338 (-0.170596) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.583847 / 0.215209 (0.368638) | 5.913008 / 2.077655 (3.835354) | 2.716088 / 1.504120 (1.211968) | 2.328631 / 1.541195 (0.787437) | 2.459953 / 1.468490 (0.991463) | 0.792829 / 4.584777 (-3.791948) | 5.183965 / 3.745712 (1.438253) | 4.508264 / 5.269862 (-0.761598) | 2.855444 / 4.565676 (-1.710232) | 0.090704 / 0.424275 (-0.333571) | 0.009303 / 0.007607 (0.001696) | 0.694303 / 0.226044 (0.468258) | 6.951876 / 2.268929 (4.682947) | 3.418244 / 55.444624 (-52.026381) | 2.799743 / 6.876477 (-4.076734) | 3.043657 / 2.142072 (0.901584) | 0.921537 / 4.805227 (-3.883691) | 0.191774 / 6.500664 (-6.308890) | 0.068602 / 0.075469 (-0.006867) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.624842 / 1.841788 (-0.216946) | 24.570622 / 8.074308 (16.496314) | 21.207566 / 10.191392 (11.016174) | 0.217734 / 0.680424 (-0.462689) | 0.033109 / 0.534201 (-0.501091) | 0.451651 / 0.579283 (-0.127632) | 0.590890 / 0.434364 (0.156526) | 0.546195 / 0.540337 (0.005858) | 0.730298 / 1.386936 (-0.656638) |\n\n</details>\n</details>\n\n\n"
] | 2023-11-02T09:48:58Z
| 2023-11-02T10:22:33Z
| 2023-11-02T10:11:19Z
|
MEMBER
| null | null | null |
Temporarily pin `pyarrow` < 14.0.0 until a permanent solution is found.
Hot fix #6374.
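For reference, a minimal sketch of what such a pin looks like in a `setup.py` requirements list (the lower bound shown is illustrative, not necessarily the one in this PR):
```python
# Illustrative requirement spec: keep the existing lower bound and add an
# upper bound excluding the broken 14.x releases.
REQUIRED_PKGS = [
    "pyarrow>=8.0.0,<14.0.0",
]
```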
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6375/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6375/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6375.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6375",
"merged_at": "2023-11-02T10:11:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6375.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6375"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4538
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4538/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4538/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4538/events
|
https://github.com/huggingface/datasets/issues/4538
| 1,279,409,786
|
I_kwDODunzps5MQj56
| 4,538
|
Dataset Viewer issue for Pile of Law
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4",
"events_url": "https://api.github.com/users/Breakend/events{/privacy}",
"followers_url": "https://api.github.com/users/Breakend/followers",
"following_url": "https://api.github.com/users/Breakend/following{/other_user}",
"gists_url": "https://api.github.com/users/Breakend/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Breakend",
"id": 1609857,
"login": "Breakend",
"node_id": "MDQ6VXNlcjE2MDk4NTc=",
"organizations_url": "https://api.github.com/users/Breakend/orgs",
"received_events_url": "https://api.github.com/users/Breakend/received_events",
"repos_url": "https://api.github.com/users/Breakend/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Breakend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Breakend/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Breakend",
"user_view_type": "public"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
] | null |
[
"Hi @Breakend, yes β we'll propose a solution today",
"Thanks so much, I appreciate it!",
"Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks again!",
"Awesome! Thanks for confirming. cc @severo ",
"Just for the record:\r\n\r\n- the doc\r\n \r\n<img width=\"1430\" alt=\"Capture dβeΜcran 2022-06-27 aΜ 09 29 27\" src=\"https://user-images.githubusercontent.com/1676121/175884089-bca6c0d5-6387-473e-98ca-86a910ede4bd.png\">\r\n\r\n- the dataset main page\r\n\r\n<img width=\"1134\" alt=\"Capture dβeΜcran 2022-06-27 aΜ 09 29 05\" src=\"https://user-images.githubusercontent.com/1676121/175884152-5f285bf0-3471-45de-927a-e141b00ebb33.png\">\r\n\r\n- the dataset viewer page\r\n\r\n<img width=\"567\" alt=\"Capture dβeΜcran 2022-06-27 aΜ 09 29 16\" src=\"https://user-images.githubusercontent.com/1676121/175884191-ab6a297b-1c11-417e-bbde-0b7623278a79.png\">\r\n"
] | 2022-06-22T02:48:40Z
| 2022-06-27T07:30:23Z
| 2022-06-26T22:26:22Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creators' requests/licenses, we would like to make sure that the data is not indexed by search engines, so we would like to turn off dataset previews. But we do not want to collect user emails, because doing so would violate single-blind review by allowing us to deduce potential reviewers' identities. Is there a way to turn off the dataset viewer without collecting identity information?
Thanks so much!
### Owner
Yes
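As later confirmed in the comments, the viewer can be disabled via the `viewer: false` flag in the dataset's README.md metadata. A sketch of setting it programmatically, assuming `huggingface_hub`'s `metadata_update` helper and a write token for the repo:
```python
from huggingface_hub import metadata_update

# Merges `viewer: false` into the YAML metadata block of the dataset's README.md.
metadata_update("pile-of-law/pile-of-law", {"viewer": False}, repo_type="dataset")
```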
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4",
"events_url": "https://api.github.com/users/Breakend/events{/privacy}",
"followers_url": "https://api.github.com/users/Breakend/followers",
"following_url": "https://api.github.com/users/Breakend/following{/other_user}",
"gists_url": "https://api.github.com/users/Breakend/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Breakend",
"id": 1609857,
"login": "Breakend",
"node_id": "MDQ6VXNlcjE2MDk4NTc=",
"organizations_url": "https://api.github.com/users/Breakend/orgs",
"received_events_url": "https://api.github.com/users/Breakend/received_events",
"repos_url": "https://api.github.com/users/Breakend/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Breakend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Breakend/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Breakend",
"user_view_type": "public"
}
|
{
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4538/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4538/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4572
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4572/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4572/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4572/events
|
https://github.com/huggingface/datasets/issues/4572
| 1,285,022,499
|
I_kwDODunzps5Ml-Mj
| 4,572
|
Dataset Viewer issue for mlsum
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun",
"user_view_type": "public"
}
|
[
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https://gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..."
] | 2022-06-26T20:24:17Z
| 2022-07-21T12:40:01Z
| 2022-07-21T12:40:01Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Link
https://huggingface.co/datasets/mlsum/viewer/de/train
### Description
There seems to be a problem with the download / streaming of this dataset:
```
Server error
Status code: 400
Exception: BadZipFile
Message: File is not a zip file
```
### Owner
No
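As noted in the comments, the root cause turned out to be that the host does not allow HTTP Range requests, which streaming a zip file requires. A quick probe to check Range support (the URL below is a placeholder, not the real archive):
```python
import requests

url = "https://gitlab.lip6.fr/path/to/archive.zip"  # placeholder URL
resp = requests.get(url, headers={"Range": "bytes=0-0"}, stream=True)
# 206 Partial Content => ranges honored; 200 => the server ignored the Range header
print(resp.status_code, resp.headers.get("Accept-Ranges"))
```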
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4572/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4572/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6622
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6622/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6622/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6622/events
|
https://github.com/huggingface/datasets/issues/6622
| 2,103,780,697
|
I_kwDODunzps59ZSVZ
| 6,622
|
multi-GPU map does not work
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4",
"events_url": "https://api.github.com/users/kopyl/events{/privacy}",
"followers_url": "https://api.github.com/users/kopyl/followers",
"following_url": "https://api.github.com/users/kopyl/following{/other_user}",
"gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kopyl",
"id": 17604849,
"login": "kopyl",
"node_id": "MDQ6VXNlcjE3NjA0ODQ5",
"organizations_url": "https://api.github.com/users/kopyl/orgs",
"received_events_url": "https://api.github.com/users/kopyl/received_events",
"repos_url": "https://api.github.com/users/kopyl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kopyl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kopyl",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"This should now be fixed by https://github.com/huggingface/datasets/pull/6550 and updated with https://github.com/huggingface/datasets/pull/6646\r\n\r\nFeel free to re-open if you're still having issues :)"
] | 2024-01-27T20:06:08Z
| 2024-02-08T11:18:21Z
| 2024-02-08T11:18:21Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Here is the code for single-GPU processing: https://pastebin.com/bfmEeK2y
Here is the code for multi-GPU processing: https://pastebin.com/gQ7i5AQy
Here is a video showing that multi-GPU mapping does not work as expected (there are so many things wrong that it's better to watch the 3-minute video than to explain them here):
https://youtu.be/RNbdPkSppc4
### Steps to reproduce the bug
-
### Expected behavior
-
### Environment info
x2 RTX A4000
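Since the pastebin links may go stale, below is a minimal sketch of the documented multi-GPU `map` pattern the reporter is attempting; the dataset name is a placeholder and model loading is omitted:
```python
from datasets import load_dataset

# A minimal sketch, not the reporter's code: with_rank=True passes each
# worker its rank, which can be used to pin work to a device.
def gpu_process(batch, rank):
    device = f"cuda:{rank % 2}"  # assumes 2 GPUs, e.g. the two RTX A4000s
    # ... move a model to `device` and run inference on `batch` here ...
    return batch

ds = load_dataset("imdb", split="train")  # placeholder dataset
ds = ds.map(gpu_process, batched=True, with_rank=True, num_proc=2)
```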
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6622/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6622/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/4593
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4593/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4593/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4593/events
|
https://github.com/huggingface/datasets/pull/4593
| 1,288,067,699
|
PR_kwDODunzps46iIkn
| 4,593
|
Fix error message when using load_from_disk to load DatasetDict
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4",
"events_url": "https://api.github.com/users/dvsth/events{/privacy}",
"followers_url": "https://api.github.com/users/dvsth/followers",
"following_url": "https://api.github.com/users/dvsth/following{/other_user}",
"gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dvsth",
"id": 11157811,
"login": "dvsth",
"node_id": "MDQ6VXNlcjExMTU3ODEx",
"organizations_url": "https://api.github.com/users/dvsth/orgs",
"received_events_url": "https://api.github.com/users/dvsth/received_events",
"repos_url": "https://api.github.com/users/dvsth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvsth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dvsth",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-06-29T01:34:27Z
| 2022-06-29T04:01:59Z
| 2022-06-29T04:01:39Z
|
NONE
| null | null | null |
Issue #4594
Issue: When `datasets.load_from_disk` is wrongly used to load a `DatasetDict`, the error message suggests using `datasets.load_from_disk`, which is the same function that generated the error.
Fix: The appropriate function to suggest instead is `datasets.dataset_dict.load_from_disk`.
Changes: Change the suggestion to say "Please use `datasets.dataset_dict.load_from_disk` instead."
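For illustration, a minimal sketch of the two loaders the corrected message distinguishes (the paths are hypothetical):
```python
from datasets import Dataset, DatasetDict

# Hypothetical paths: a saved DatasetDict must be loaded with the
# DatasetDict loader, not the single-Dataset one.
ds_dict = DatasetDict.load_from_disk("path/to/saved_dataset_dict")
ds = Dataset.load_from_disk("path/to/saved_dataset")
```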
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4",
"events_url": "https://api.github.com/users/dvsth/events{/privacy}",
"followers_url": "https://api.github.com/users/dvsth/followers",
"following_url": "https://api.github.com/users/dvsth/following{/other_user}",
"gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dvsth",
"id": 11157811,
"login": "dvsth",
"node_id": "MDQ6VXNlcjExMTU3ODEx",
"organizations_url": "https://api.github.com/users/dvsth/orgs",
"received_events_url": "https://api.github.com/users/dvsth/received_events",
"repos_url": "https://api.github.com/users/dvsth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dvsth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dvsth",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4593/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4593/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4593.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4593",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4593.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4593"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5080
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5080/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5080/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5080/events
|
https://github.com/huggingface/datasets/issues/5080
| 1,398,849,565
|
I_kwDODunzps5TYMAd
| 5,080
|
Use hfh for caching
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"There is some discussion in https://github.com/huggingface/huggingface_hub/pull/1088 if it can help :)"
] | 2022-10-06T05:51:58Z
| 2022-10-06T14:26:05Z
| null |
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Is your feature request related to a problem?
As previously discussed in our meeting with @Wauplin and agreed at our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching.
## Describe the solution you'd like
Due to the peculiarities of the `datasets` cache, I would propose adopting the `hfh` caching system in stages.
First, we could easily start using `hfh` caching for:
- dataset Python scripts
- dataset READMEs
- dataset infos JSON files (now deprecated)
Second, we could also use `hfh` caching for data files downloaded from the Hub.
Further investigation is needed for:
- files downloaded from non-Hub hosts
- extracted files from downloaded archive/compressed files
- generated Arrow files
## Additional context
Docs about the `hfh` caching system:
- [Manage huggingface_hub cache-system](https://huggingface.co/docs/huggingface_hub/main/en/how-to-cache)
- [Cache-system reference](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/cache)
The `transformers` library has already adopted `hfh` for caching. See:
- huggingface/transformers#18438
- huggingface/transformers#18857
- huggingface/transformers#18966
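For context, a minimal sketch of the `hfh` cache being discussed (repo and file names are placeholders): downloads land in the shared `~/.cache/huggingface/hub` layout and are reused across calls.
```python
from huggingface_hub import hf_hub_download

# Placeholder repo/file: the returned path points into the shared
# huggingface_hub cache, so a second identical call reuses the file.
path = hf_hub_download(repo_id="squad", filename="README.md", repo_type="dataset")
print(path)
```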
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5080/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5080/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6545
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6545/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6545/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6545/events
|
https://github.com/huggingface/datasets/issues/6545
| 2,060,789,507
|
I_kwDODunzps561ScD
| 6,545
|
`image` column not automatically inferred if image dataset only contains 1 image
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/788417?v=4",
"events_url": "https://api.github.com/users/apolinario/events{/privacy}",
"followers_url": "https://api.github.com/users/apolinario/followers",
"following_url": "https://api.github.com/users/apolinario/following{/other_user}",
"gists_url": "https://api.github.com/users/apolinario/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/apolinario",
"id": 788417,
"login": "apolinario",
"node_id": "MDQ6VXNlcjc4ODQxNw==",
"organizations_url": "https://api.github.com/users/apolinario/orgs",
"received_events_url": "https://api.github.com/users/apolinario/received_events",
"repos_url": "https://api.github.com/users/apolinario/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/apolinario/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apolinario/subscriptions",
"type": "User",
"url": "https://api.github.com/users/apolinario",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2023-12-30T16:17:29Z
| 2024-01-09T13:06:31Z
| 2024-01-09T13:06:31Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
By default, the standard image dataset loader maps `file_name` to `image` when loading an image dataset.
However, if the dataset contains only 1 image, this mapping does not take place.
### Steps to reproduce the bug
Input
(dataset with one image `multimodalart/repro_1_image`)
```py
from datasets import load_dataset
dataset = load_dataset("multimodalart/repro_1_image")
dataset
```
Output:
```py
DatasetDict({
train: Dataset({
features: ['file_name', 'prompt'],
num_rows: 1
})
})
```
Input
(dataset with 2+ images `multimodalart/repro_2_image`)
```py
from datasets import load_dataset
dataset = load_dataset("multimodalart/repro_2_image")
dataset
```
Output:
```py
DatasetDict({
train: Dataset({
features: ['image', 'prompt'],
num_rows: 2
})
})
```
### Expected behavior
Expected to map `file_name` → `image` for all dataset sizes, including 1.
### Environment info
Both latest main and 2.16.0
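As a stopgap (an assumption on my part, not the fix that closed the issue), one can force the cast manually; this only works if the `file_name` values resolve to readable image files:
```python
from datasets import load_dataset, Image

# Assumed workaround: apply the Image feature when inference skips it.
ds = load_dataset("multimodalart/repro_1_image")
ds = ds.cast_column("file_name", Image()).rename_column("file_name", "image")
```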
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6545/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6545/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5892
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5892/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5892/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5892/events
|
https://github.com/huggingface/datasets/issues/5892
| 1,722,503,824
|
I_kwDODunzps5mq1KQ
| 5,892
|
User access requests with manual review do not notify the dataset owner
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leondz",
"id": 121934,
"login": "leondz",
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"repos_url": "https://api.github.com/users/leondz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leondz",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"cc @SBrandeis",
"I think this has been addressed.\r\n\r\nPlease open a new issue if you are still not getting notified."
] | 2023-05-23T17:27:46Z
| 2023-07-21T13:55:37Z
| 2023-07-21T13:55:36Z
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When user access requests are enabled and new requests are set to Manual Review, the dataset owner should be notified of pending requests. Currently, however, nothing happens, so a dataset access request can go unanswered for quite some time until the owner happens to check that particular dataset's Settings pane.
### Steps to reproduce the bug
1. Enable a dataset's user access requests
2. Set to Manual Review
3. Ask another HF user to request access to the dataset
4. Dataset owner is not notified
### Expected behavior
The dataset owner should receive some kind of notification, perhaps in their HF site inbox, or by email, when a dataset access request is made and manual review is enabled.
### Environment info
n/a
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5892/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5892/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5706
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5706/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5706/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5706/events
|
https://github.com/huggingface/datasets/issues/5706
| 1,653,545,835
|
I_kwDODunzps5ijxtr
| 5,706
|
Support categorical data types for Parquet
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4",
"events_url": "https://api.github.com/users/kklemon/events{/privacy}",
"followers_url": "https://api.github.com/users/kklemon/followers",
"following_url": "https://api.github.com/users/kklemon/following{/other_user}",
"gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kklemon",
"id": 1430243,
"login": "kklemon",
"node_id": "MDQ6VXNlcjE0MzAyNDM=",
"organizations_url": "https://api.github.com/users/kklemon/orgs",
"received_events_url": "https://api.github.com/users/kklemon/received_events",
"repos_url": "https://api.github.com/users/kklemon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kklemon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kklemon",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4",
"events_url": "https://api.github.com/users/mhattingpete/events{/privacy}",
"followers_url": "https://api.github.com/users/mhattingpete/followers",
"following_url": "https://api.github.com/users/mhattingpete/following{/other_user}",
"gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mhattingpete",
"id": 22622299,
"login": "mhattingpete",
"node_id": "MDQ6VXNlcjIyNjIyMjk5",
"organizations_url": "https://api.github.com/users/mhattingpete/orgs",
"received_events_url": "https://api.github.com/users/mhattingpete/received_events",
"repos_url": "https://api.github.com/users/mhattingpete/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mhattingpete",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4",
"events_url": "https://api.github.com/users/mhattingpete/events{/privacy}",
"followers_url": "https://api.github.com/users/mhattingpete/followers",
"following_url": "https://api.github.com/users/mhattingpete/following{/other_user}",
"gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mhattingpete",
"id": 22622299,
"login": "mhattingpete",
"node_id": "MDQ6VXNlcjIyNjIyMjk5",
"organizations_url": "https://api.github.com/users/mhattingpete/orgs",
"received_events_url": "https://api.github.com/users/mhattingpete/received_events",
"repos_url": "https://api.github.com/users/mhattingpete/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mhattingpete",
"user_view_type": "public"
}
] | null |
[
"Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:\r\n- the conversion from HF type to PyArrow type is done in `get_nested_type`\r\n- the conversion from Pyarrow type to HF type is done in `generate_from_arrow_type`\r\n- `encode_nested_example` and `decode_nested_example` are used to do user's value (what users see) <-> storage value (what is in the pyarrow.array) if there's any conversion to do",
"@kklemon did you implement this? Otherwise I would like to give it a try",
"@mhattingpete no, I hadn't time for this so far. Feel free to work on this.",
"#self-assign",
"This would be super useful, so +1. \r\n\r\nAlso, these prior issues/PRs seem relevant: \r\nhttps://github.com/huggingface/datasets/issues/1906\r\nhttps://github.com/huggingface/datasets/pull/1936",
"Hi, this is a really useful feature, has this been implemented yet? ",
"Hey folks -- I'm thinking about trying a PR for this. As far as I can tell the only sticky point is that auto-generation of features from a pyarrow schema will fail under the current `generate_from_arrow_type` function because there is no encoding of the categorical string label -> int map in the pa.dictionary type itself; that is stored with the full array. \r\n\r\nI see two ways to solve this. Option 1 is to require datasets with categorical types to use pyarrow schema metadata to encode the entire HF feature dictionary, that way categorical types don't ever need to be inferred from the pa type alone. The downside to this is that it means that these datasets will be a bit brittle, as if the feature encoding API ever changes, they will suddenly be unloadable. \r\n\r\nThe other option is to modify `generate_from_arrow_type` to take per-field metadata, and include just that metadata (the category labels) in the schema metadata. \r\n\r\nDoes anyone at HF have any preference on these two (or alternate) approaches?",
"Maybe we don't need to store the string label -> int map in the categorical for the corresponding `datasets` feature ?",
"I think that does need to be stored in the Feature object. Similar to how\r\n`ClassLabel` needs the class names for some of the provided default\r\nfunctionality (e.g., encoding or decoding values) here, a categorical\r\nfeature needs the same. Without storing that information, would you suggest\r\nthat categorical features just be stored internally as integer arrays?\r\n\r\nOn Fri, Sep 8, 2023, 5:37β―AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> Maybe we don't need to store the string label -> int map in the\r\n> categorical for the corresponding datasets feature ?\r\n>\r\n> β\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1711375652>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5XZV3RA4GBRVBLJN72LXZLROZANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Well IIRC you can concatenate two Arrow arrays with different dictionaries together. But for `datasets` would mean updating the `datasets` features when concatenating two arrays of the same type, which is not supported right now. That's why if there is a way to have it without storing the mapping in the feature object it would be nice.\r\n\r\nFor decoding we do have the string<->integer mapping from the array `dictionary` attribute so we're fine. For encoding I think it can work if we only encode when converting python objects to pyarrow in `TypedSequence.__arrow_array__` in `arow_writer.py`. It can work by converting the python objects to a pyarrow array and then use the `dictionary_encode` method.\r\n\r\nAnother concern about concatenation: I noticed **pyarrow creates the new dictionary and indices in memory** when concatenating two dictionary encoded arrays. This can be a problem for big datastets, and we should probably use ChunkedArray objects instead. This can surely be taken care of in `array_concat` in `table.py`\r\n\r\ncc @mariosasko in case you have other ideas\r\n\r\n",
"Hmm, that is a good point. What if we implemented this feature first in a\r\nmanner that didn't allow concatenation of arrays with different index to\r\ncategory maps? Then concatenation would be very straightforward, and I\r\nthink this is reasonable if the index to category map is stored in the\r\nschema as well. Obviously, this is limited in how folks could use the\r\nfeature, but they can always fall back to raw strings if needed, and as\r\nusage increases we'll have more data to see what the right solution here\r\nis.\r\n\r\nOn Fri, Sep 8, 2023, 6:49β―AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> Well IIRC you can concatenate two Arrow arrays with different dictionaries\r\n> together. But for datasets would mean updating the datasets features when\r\n> concatenating two arrays of the same type, which is not supported right\r\n> now. That's why if there is a way to have it without storing the mapping in\r\n> the feature object it would be nice.\r\n>\r\n> For decoding we do have the string<->integer mapping from the array\r\n> dictionary attribute so we're fine. For encoding I think it can work if\r\n> we only encode when converting python objects to pyarrow in\r\n> TypedSequence.__arrow_array__ in arow_writer.py. It can work by\r\n> converting the python objects to a pyarrow array and then use the\r\n> dictionary_encode method.\r\n>\r\n> Another concern about concatenation: I noticed *pyarrow creates the new\r\n> dictionary and indices in memory* when concatenating two dictionary\r\n> encoded arrays. This can be a problem for big datastets, and we should\r\n> probably use ChunkedArray objects instead. This can surely be taken care of\r\n> in array_concat in table.py\r\n>\r\n> cc @mariosasko <https://github.com/mariosasko> in case you have other\r\n> ideas\r\n>\r\n> β\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1711468806>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5X4E2KC2IXLDPYR3XZLXZLZ2FANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"@lhoestq @mariosasko just re-pinging on this so I can push forward further here. What are your thoughts on disallowing concatenation of categorical arrays for now such that the index to category map can be stored in the schema metadata? And/or other approaches that should be taken here?\r\n",
"I think the easiest for now would be to add a `dictionary_decode` argument to the parquet loaders that would convert the dictionary type back to strings when set to `True`, and make `dictionary_decode=False` raise `NotImplementedError` for now if there are dictionary type columns. Would that be ok as a first step ?",
"I mean, that would certainly be easiest but I don't think it really solves this issue in a meaningful way. This just changes the burden from string conversion from the user to HF Datasets, but doesn't actually enable HF Datasets to take advantage of the (very significant) storage and associated speed/memory savings offered by using categorical types. Given that those savings are what is of real interest here, I think keeping it explicit that it is not supported (and forcing the user to do the conversion) might actually be better that way this problem stays top of mind.\r\n\r\nIs there an objection with supporting categorical types explicitly through the medium I outlined above, where the error is raised if you try to concat two differently typed categorical columns?",
"> This just changes the burden from string conversion from the user to HF Datasets, but doesn't actually enable HF Datasets to take advantage of the (very significant) storage and associated speed/memory savings offered by using categorical types.\r\n\r\nThere's already a ClassLabel type that does pretty much the same thing (store as integer instead of string) if it can help\r\n\r\n> Is there an objection with supporting categorical types explicitly through the medium I outlined above, where the error is raised if you try to concat two differently typed categorical columns?\r\n\r\nYea we do concatenation quite often (e.g. in `map`) so I don't think this is a viable option",
"But how often in the cases where concatenation is done now would the\r\ncategorical label vocabulary actually change? I think it would be in\r\nbasically none of them. And in such cases, concatenation remains very easy,\r\nno?\r\n\r\nOn Fri, Sep 22, 2023, 12:02β―PM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> This just changes the burden from string conversion from the user to HF\r\n> Datasets, but doesn't actually enable HF Datasets to take advantage of the\r\n> (very significant) storage and associated speed/memory savings offered by\r\n> using categorical types.\r\n>\r\n> There's already a ClassLabel type that does pretty much the same thing\r\n> (store as integer instead of string) if it can help\r\n>\r\n> Is there an objection with supporting categorical types explicitly through\r\n> the medium I outlined above, where the error is raised if you try to concat\r\n> two differently typed categorical columns?\r\n>\r\n> Yea we do concatenation quite often (e.g. in map) so I don't think this\r\n> is a viable option\r\n>\r\n> β\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/5706#issuecomment-1731667012>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AADS5X5CGWFXDCML6UKCWYLX3WZBXANCNFSM6AAAAAAWSOUTJ4>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"Arrow IPC seems to require unified dictionaries anyway so actually we could surely focus only on this use case indeed @mmcdermott \r\n\r\nSo defining a new Feature type in `datasets` that contains the dictionary mapping should be fine (and concatenation would work out of the box), and it should also take care of checking that the data it encodes/decodes has the right dictionary. Do you think it can be done without impacting iterating speed for the other types @mariosasko ?\r\n\r\nRight now we have little bandwidth to work in this kind of things though"
] | 2023-04-04T09:45:35Z
| 2024-06-07T12:20:43Z
| 2024-06-07T12:20:43Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parquet file with categorical columns:
```python
import pandas as pd
import pyarrow.parquet as pq
from datasets import load_dataset
# Create categorical sample DataFrame
df = pd.DataFrame({'type': ['foo', 'bar']}).astype('category')
df.to_parquet('data.parquet')
# Read back as pyarrow table
table = pq.read_table('data.parquet')
print(table.schema)
# type: dictionary<values=string, indices=int32, ordered=0>
# Load with huggingface datasets
load_dataset('parquet', data_files='data.parquet')
```
Error:
```
Traceback (most recent call last):
File ".venv/lib/python3.10/site-packages/datasets/builder.py", line 1875, in _prepare_split_single
writer.write_table(table)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 566, in write_table
self._build_writer(inferred_schema=pa_table.schema)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 379, in _build_writer
inferred_features = Features.from_arrow_schema(inferred_schema)
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in from_arrow_schema
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in <dictcomp>
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1361, in generate_from_arrow_type
raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
NotImplementedError
```
### Motivation
Categorical data types, as offered by Pandas and implemented with the `DictionaryType` dtype in `pyarrow`, can significantly reduce dataset size and are a handy way to turn textual features into numerical representations and back. Lack of support in Huggingface datasets greatly reduces compatibility with a common Pandas / Parquet feature.
### Your contribution
I could provide a PR. However, it would be nice to have an initial complexity estimate from one of the core developers first.
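Until then, a minimal workaround sketch: decode the categorical column back to plain strings before handing the file to `datasets` (file names here are arbitrary):
```python
import pandas as pd
from datasets import load_dataset

# Workaround sketch: drop the categorical dtype so the Parquet file
# contains a plain string column that datasets can already load.
df = pd.DataFrame({"type": ["foo", "bar"]}).astype("category")
df["type"] = df["type"].astype(str)
df.to_parquet("data_plain.parquet")
load_dataset("parquet", data_files="data_plain.parquet")
```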
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5706/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5706/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6305
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6305/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6305/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6305/events
|
https://github.com/huggingface/datasets/issues/6305
| 1,946,010,912
|
I_kwDODunzps5z_cUg
| 6,305
|
Cannot load dataset with `2.14.5`: `FileNotFound` error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4",
"events_url": "https://api.github.com/users/finiteautomata/events{/privacy}",
"followers_url": "https://api.github.com/users/finiteautomata/followers",
"following_url": "https://api.github.com/users/finiteautomata/following{/other_user}",
"gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/finiteautomata",
"id": 167943,
"login": "finiteautomata",
"node_id": "MDQ6VXNlcjE2Nzk0Mw==",
"organizations_url": "https://api.github.com/users/finiteautomata/orgs",
"received_events_url": "https://api.github.com/users/finiteautomata/received_events",
"repos_url": "https://api.github.com/users/finiteautomata/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions",
"type": "User",
"url": "https://api.github.com/users/finiteautomata",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @finiteautomata.\r\n\r\nWe are investigating it. ",
"There is a bug in `datasets`. You can see our proposed fix:\r\n- #6309 "
] | 2023-10-16T20:11:27Z
| 2023-10-18T13:50:36Z
| 2023-10-18T13:50:36Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm trying to load [piuba-bigdata/articles_and_comments] and I'm hitting this error on `2.14.5`. However, it works on `2.10.0`.
### Steps to reproduce the bug
[Colab link](https://colab.research.google.com/drive/1SAftFMQnFE708ikRnJJHIXZV7R5IBOCE#scrollTo=r2R2ipCCDmsg)
```python
Downloading readme: 100%
1.19k/1.19k [00:00<00:00, 30.9kB/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-2-807c3583d297>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 load_dataset("piuba-bigdata/articles_and_comments", split="train")
2 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2127
2128 # Create a dataset builder
-> 2129 builder_instance = load_dataset_builder(
2130 path=path,
2131 name=name,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs)
1813 download_config = download_config.copy() if download_config else DownloadConfig()
1814 download_config.storage_options.update(storage_options)
-> 1815 dataset_module = dataset_module_factory(
1816 path,
1817 revision=revision,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1506 raise e1 from None
1507 if isinstance(e1, FileNotFoundError):
-> 1508 raise FileNotFoundError(
1509 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1510 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
FileNotFoundError: Couldn't find a dataset script at /content/piuba-bigdata/articles_and_comments/articles_and_comments.py or any data file in the same directory. Couldn't find 'piuba-bigdata/articles_and_comments' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in piuba-bigdata/articles_and_comments.
```
### Expected behavior
It should load normally.
### Environment info
```
- `datasets` version: 2.14.5
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.18.0
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6305/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6305/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6759
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6759/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6759/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6759/events
|
https://github.com/huggingface/datasets/issues/6759
| 2,208,892,891
|
I_kwDODunzps6DqQfb
| 6,759
|
Persistent multi-process Pool
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4",
"events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}",
"followers_url": "https://api.github.com/users/fostiropoulos/followers",
"following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}",
"gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fostiropoulos",
"id": 4337024,
"login": "fostiropoulos",
"node_id": "MDQ6VXNlcjQzMzcwMjQ=",
"organizations_url": "https://api.github.com/users/fostiropoulos/orgs",
"received_events_url": "https://api.github.com/users/fostiropoulos/received_events",
"repos_url": "https://api.github.com/users/fostiropoulos/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fostiropoulos",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2024-03-26T17:35:25Z
| 2024-03-26T17:35:25Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Running `.map` and `.filter` with `num_proc` consecutively instantiates a new multiprocessing pool for each call.
As instantiating a Pool is very resource intensive, this can become a bottleneck when filtering iteratively.
My ideas:
1. There should be an option to declare `persistent_workers`, similar to the PyTorch DataLoader. The downside is that it would be complex to determine the correct resource allocation and deallocation for the pool, i.e. the dataset can outlive the utility of the pool.
2. Provide a pool as an argument (see the sketch after this section). The downside is the expertise required by the user; the upside is better resource management.
### Motivation
It is really slow to perform map and filter operations on a dataset iteratively.
### Your contribution
If approved I could integrate it. I would need to know what method would be most suitable to implement from the two options above.
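For reference, a minimal sketch of option 2 in plain `multiprocessing` (not the `datasets` API, which does not accept a pool today): one pool is created once and reused across a map-style and a filter-style pass.
```python
from multiprocessing import Pool

def square(x):
    return x * x

def is_even(x):
    return x % 2 == 0

if __name__ == "__main__":
    data = list(range(10))
    # One pool reused for both passes, avoiding a second round of
    # worker start-up costs.
    with Pool(processes=4) as pool:
        squared = pool.map(square, data)
        mask = pool.map(is_even, squared)
    print([x for x, keep in zip(squared, mask) if keep])
```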
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6759/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6759/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/7388
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7388/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7388/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7388/events
|
https://github.com/huggingface/datasets/issues/7388
| 2,843,188,499
|
I_kwDODunzps6pd50T
| 7,388
|
OSError: [Errno 22] Invalid argument forbidden character
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/124634542?v=4",
"events_url": "https://api.github.com/users/langflogit/events{/privacy}",
"followers_url": "https://api.github.com/users/langflogit/followers",
"following_url": "https://api.github.com/users/langflogit/following{/other_user}",
"gists_url": "https://api.github.com/users/langflogit/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/langflogit",
"id": 124634542,
"login": "langflogit",
"node_id": "U_kgDOB23Frg",
"organizations_url": "https://api.github.com/users/langflogit/orgs",
"received_events_url": "https://api.github.com/users/langflogit/received_events",
"repos_url": "https://api.github.com/users/langflogit/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/langflogit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langflogit/subscriptions",
"type": "User",
"url": "https://api.github.com/users/langflogit",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"You can probably copy the dataset in your HF account and rename the files (without having to download them to your disk). Or alternatively feel free to open a Pull Request to this dataset with the renamed file",
"Thank you, that will help me work around this problem"
] | 2025-02-10T17:46:31Z
| 2025-02-11T13:42:32Z
| 2025-02-11T13:42:30Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I'm on Windows and I'm trying to load a dataset, but I'm getting the error in the title because files in the repository have names containing characters like `<` and `>`, which are forbidden in Windows file names. Would it be possible to load this dataset while removing those characters?
### Steps to reproduce the bug
load_dataset("CATMuS/medieval") on Windows
### Expected behavior
The loading function should remove the forbidden characters so that datasets containing such file names can be loaded.
### Environment info
- `datasets` version: 3.2.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.12.2
- `huggingface_hub` version: 0.28.1
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
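Before copying the dataset into a fork and renaming, a minimal sketch to find the offending file names (the forbidden-character set used here is the standard Windows one, an assumption on my part):
```python
from huggingface_hub import HfApi

WINDOWS_FORBIDDEN = set('<>:"|?*')  # characters Windows rejects in file names

api = HfApi()
files = api.list_repo_files("CATMuS/medieval", repo_type="dataset")
bad = [f for f in files if any(c in WINDOWS_FORBIDDEN for c in f)]
print(bad)  # these names cannot be written to a Windows filesystem
```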
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/124634542?v=4",
"events_url": "https://api.github.com/users/langflogit/events{/privacy}",
"followers_url": "https://api.github.com/users/langflogit/followers",
"following_url": "https://api.github.com/users/langflogit/following{/other_user}",
"gists_url": "https://api.github.com/users/langflogit/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/langflogit",
"id": 124634542,
"login": "langflogit",
"node_id": "U_kgDOB23Frg",
"organizations_url": "https://api.github.com/users/langflogit/orgs",
"received_events_url": "https://api.github.com/users/langflogit/received_events",
"repos_url": "https://api.github.com/users/langflogit/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/langflogit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/langflogit/subscriptions",
"type": "User",
"url": "https://api.github.com/users/langflogit",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7388/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7388/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5627
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5627/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5627/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5627/events
|
https://github.com/huggingface/datasets/issues/5627
| 1,619,336,609
|
I_kwDODunzps5ghR2h
| 5,627
|
Unable to load AutoTrain-generated dataset from the hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8560151?v=4",
"events_url": "https://api.github.com/users/ijmiller2/events{/privacy}",
"followers_url": "https://api.github.com/users/ijmiller2/followers",
"following_url": "https://api.github.com/users/ijmiller2/following{/other_user}",
"gists_url": "https://api.github.com/users/ijmiller2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ijmiller2",
"id": 8560151,
"login": "ijmiller2",
"node_id": "MDQ6VXNlcjg1NjAxNTE=",
"organizations_url": "https://api.github.com/users/ijmiller2/orgs",
"received_events_url": "https://api.github.com/users/ijmiller2/received_events",
"repos_url": "https://api.github.com/users/ijmiller2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ijmiller2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ijmiller2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ijmiller2",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The AutoTrain format is not supported right now. I think it would require a dedicated dataset builder",
"Okay, good to know. Thanks for the reply. For now I will just have to\nmanage the split manually before training, because I canβt find any way of\npulling out file indices or file names from the autogenerated split. The\nfile names field of the image dataset (loaded directly from arrow file) is\nmissing, just fyi (for anyone else this might be relevant too).\n\nOn Fri, Mar 10, 2023 at 7:02 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The AutoTrain format is not supported right now. I think it would require\n> a dedicated dataset builder\n>\n> β\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/5627#issuecomment-1464734308>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ACBJ4F5A353MCZ76OGRJ6CTW3PFI7ANCNFSM6AAAAAAVWXNUTE>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n"
] | 2023-03-10T17:25:58Z
| 2023-03-11T15:44:42Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
_fingerprint: string
_format_columns: list<item: string>
child 0, item: string
_format_kwargs: struct<>
_format_type: null
_indexes: struct<>
_output_all_columns: bool
_split: null
to
{'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}}
because column names don't match
```
### Steps to reproduce the bug
Steps to reproduce:
1. `pip install datasets==2.10.1`
2. Attempt to load (private dataset). Note that I'm authenticated via `huggingface-cli login`
```
from datasets import load_dataset
# load dataset
dataset = "ijmiller2/autotrain-data-betterbin-vision-10000"
dataset = load_dataset(dataset)
```
Here's the full traceback:
```Downloading and preparing dataset json/ijmiller2--autotrain-data-betterbin-vision-10000 to /Users/ian/.cache/huggingface/datasets/ijmiller2___json/ijmiller2--autotrain-data-betterbin-vision-10000-2eae034a9ff8a1a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2383.80it/s]
Extracting data files: 100%|█████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 505.95it/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1874, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1868 writer = writer_class(
1869 features=writer._features,
1870 path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1871 storage_options=self._fs.storage_options,
1872 embed_local_files=embed_local_files,
1873 )
-> 1874 writer.write_table(table)
1875 num_examples_progress_update += len(table)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/arrow_writer.py:568, in ArrowWriter.write_table(self, pa_table, writer_batch_size)
567 pa_table = pa_table.combine_chunks()
--> 568 pa_table = table_cast(pa_table, self._schema)
569 if self.embed_local_files:
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2312, in table_cast(table, schema)
2311 if table.schema != schema:
-> 2312 return cast_table_to_schema(table, schema)
2313 elif table.schema.metadata != schema.metadata:
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2270, in cast_table_to_schema(table, schema)
2269 if sorted(table.column_names) != sorted(features):
-> 2270 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2271 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
_fingerprint: string
_format_columns: list<item: string>
child 0, item: string
_format_kwargs: struct<>
_format_type: null
_indexes: struct<>
_output_all_columns: bool
_split: null
to
{'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}}
because column names don't match
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Input In [8], in <cell line: 6>()
4 # load dataset
5 dataset = "ijmiller2/autotrain-data-betterbin-vision-10000"
----> 6 dataset = load_dataset(dataset)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/load.py:1782, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1779 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1781 # Download and prepare data
-> 1782 builder_instance.download_and_prepare(
1783 download_config=download_config,
1784 download_mode=download_mode,
1785 verification_mode=verification_mode,
1786 try_from_hf_gcs=try_from_hf_gcs,
1787 num_proc=num_proc,
1788 )
1790 # Build dataset for splits
1791 keep_in_memory = (
1792 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1793 )
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:872, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
870 if num_proc is not None:
871 prepare_split_kwargs["num_proc"] = num_proc
--> 872 self._download_and_prepare(
873 dl_manager=dl_manager,
874 verification_mode=verification_mode,
875 **prepare_split_kwargs,
876 **download_and_prepare_kwargs,
877 )
878 # Sync info
879 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:967, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
963 split_dict.add(split_generator.split_info)
965 try:
966 # Prepare split will record examples associated to the split
--> 967 self._prepare_split(split_generator, **prepare_split_kwargs)
968 except OSError as e:
969 raise OSError(
970 "Cannot find data file. "
971 + (self.manual_download_instructions or "")
972 + "\nOriginal error:\n"
973 + str(e)
974 ) from None
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1749, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1747 job_id = 0
1748 with pbar:
-> 1749 for job_id, done, content in self._prepare_split_single(
1750 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1751 ):
1752 if done:
1753 result = content
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1892, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1891 e = e.__context__
-> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
I'm ultimately trying to generate my own performance metrics on validation data (before putting an endpoint into production) and so was hoping to load all or at least the validation subset from the hub.
I'm expecting the `load_dataset()` function to work as shown in the documentation [here](https://huggingface.co/docs/datasets/loading#hugging-face-hub):
```python
dataset = load_dataset(
"lhoestq/custom_squad",
revision="main" # tag name, or branch name, or commit hash
)
```
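As a first debugging step, here's a minimal sketch (using `huggingface_hub` directly, not the AutoTrain tooling) to see which files are in the repo; the idea is that stray metadata files (e.g. a saved Arrow state/config JSON) may be getting picked up by the generic `json` builder alongside the real data files:
```python
from huggingface_hub import HfApi

# Hedged diagnostic sketch: list all files in the (private) dataset repo.
# The logged-in token from `huggingface-cli login` is used automatically.
api = HfApi()
files = api.list_repo_files(
    "ijmiller2/autotrain-data-betterbin-vision-10000", repo_type="dataset"
)
for f in files:
    print(f)
```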
### Environment info
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5627/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5627/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5926
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5926/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5926/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5926/events
|
https://github.com/huggingface/datasets/issues/5926
| 1,743,922,028
|
I_kwDODunzps5n8iNs
| 5,926
|
Uncaught exception when generating the splits from a dataset with missing data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
}
|
[] |
open
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[
"Thanks for reporting, @severo.\r\n\r\nThis is a known issue with `fsspec`:\r\n- #5862\r\n- https://github.com/fsspec/filesystem_spec/issues/1265"
] | 2023-06-06T13:51:01Z
| 2023-06-07T07:53:16Z
| null |
COLLABORATOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Dataset https://huggingface.co/datasets/blog_authorship_corpus has an issue with its hosting platform, since https://drive.google.com/u/0/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download returns a 404 error.
But when trying to generate the split names, we get an exception which is not correctly caught.
Seen originally in https://github.com/huggingface/datasets-server/blob/adbdcd6710ffed4e2eb2e4cd905b5e0dff530a15/services/worker/src/worker/job_runners/config/parquet_and_info.py#L435
### Steps to reproduce the bug
```python
>>> from datasets import StreamingDownloadManager, load_dataset_builder
>>> builder = load_dataset_builder(path="blog_authorship_corpus")
Downloading builder script: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 5.60k/5.60k [00:00<00:00, 23.1MB/s]
Downloading metadata: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.81k/2.81k [00:00<00:00, 14.7MB/s]
Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.30k/7.30k [00:00<00:00, 30.8MB/s]
>>> dl_manager = StreamingDownloadManager(base_path=builder.base_path)
>>> builder._split_generators(dl_manager)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/blog_authorship_corpus/6f5d78241afd8313111956f877a57db7a0e9fc6718255dc85df0928197feb683/blog_authorship_corpus.py", line 79, in _split_generators
data = dl_manager.download_and_extract(_DATA_URL)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1087, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1039, in extract
urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 435, in map_nested
return function(data_struct)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 1044, in _extract
protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 433, in _get_extraction_protocol
with fsspec.open(urlpath, **kwargs) as f:
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 439, in open
return open_files(
File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 194, in __getitem__
out = super().__getitem__(item)
IndexError: list index out of range
```
### Expected behavior
We should have an Exception raised by the datasets library.
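For illustration, a minimal sketch (assuming a plain HTTP URL, and not the `datasets`-internal code) of the kind of guard that would turn the opaque `IndexError` into an explicit exception:
```python
import fsspec

# Minimal sketch: probe the URL up front so a dead link surfaces as a clear
# FileNotFoundError instead of the IndexError raised deep inside fsspec.
url = "https://drive.google.com/u/0/uc?id=1cGy4RNDV87ZHEXbiozABr9gsSrZpPaPz&export=download"
try:
    with fsspec.open(url) as f:
        f.read(1)
except IndexError as e:
    raise FileNotFoundError(f"Couldn't open {url}") from e
```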
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.19.0-1026-aws-x86_64-with-glibc2.35
- Python version: 3.9.15
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5926/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5926/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5122
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5122/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5122/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5122/events
|
https://github.com/huggingface/datasets/pull/5122
| 1,410,732,403
|
PR_kwDODunzps5A4rWn
| 5,122
|
Add warning
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/34204311?v=4",
"events_url": "https://api.github.com/users/Salehbigdeli/events{/privacy}",
"followers_url": "https://api.github.com/users/Salehbigdeli/followers",
"following_url": "https://api.github.com/users/Salehbigdeli/following{/other_user}",
"gists_url": "https://api.github.com/users/Salehbigdeli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Salehbigdeli",
"id": 34204311,
"login": "Salehbigdeli",
"node_id": "MDQ6VXNlcjM0MjA0MzEx",
"organizations_url": "https://api.github.com/users/Salehbigdeli/orgs",
"received_events_url": "https://api.github.com/users/Salehbigdeli/received_events",
"repos_url": "https://api.github.com/users/Salehbigdeli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Salehbigdeli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Salehbigdeli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Salehbigdeli",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"As mentioned in https://github.com/huggingface/datasets/issues/5105 I think we just need to keep the existing files instead of deleting them.\r\nThe `dataset_info.json` file contains the split names anyway, so we know which files belong to the dataset, and which ones don't."
] | 2022-10-17T01:30:37Z
| 2022-11-05T12:23:53Z
| 2022-11-05T12:23:53Z
|
NONE
| null | null | null |
Fixes: #5105
I think removing the directory with a warning is a better solution for this issue, because if we decide to keep the existing files in the directory, then we would have to handle the case where the same directory is provided for several datasets, which we know is not possible since a `dataset_info.json` already exists in that directory.
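For illustration, a minimal sketch of the proposed behaviour (the helper name is made up, not the actual `datasets` API):
```python
import os
import shutil
import warnings

# Hypothetical helper sketching the proposal: warn before removing a directory
# that already holds a previously saved dataset, detected via dataset_info.json.
def clear_save_dir(path: str) -> None:
    if os.path.isfile(os.path.join(path, "dataset_info.json")):
        warnings.warn(f"'{path}' already contains a saved dataset; its files will be removed.")
        shutil.rmtree(path)
    os.makedirs(path, exist_ok=True)
```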
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/34204311?v=4",
"events_url": "https://api.github.com/users/Salehbigdeli/events{/privacy}",
"followers_url": "https://api.github.com/users/Salehbigdeli/followers",
"following_url": "https://api.github.com/users/Salehbigdeli/following{/other_user}",
"gists_url": "https://api.github.com/users/Salehbigdeli/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Salehbigdeli",
"id": 34204311,
"login": "Salehbigdeli",
"node_id": "MDQ6VXNlcjM0MjA0MzEx",
"organizations_url": "https://api.github.com/users/Salehbigdeli/orgs",
"received_events_url": "https://api.github.com/users/Salehbigdeli/received_events",
"repos_url": "https://api.github.com/users/Salehbigdeli/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Salehbigdeli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Salehbigdeli/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Salehbigdeli",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5122/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5122/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5122.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5122",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5122.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5122"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6869
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6869/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6869/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6869/events
|
https://github.com/huggingface/datasets/issues/6869
| 2,280,048,297
|
I_kwDODunzps6H5sap
| 6,869
|
Download is broken for dict of dicts: FileNotFoundError
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2024-05-06T05:13:36Z
| 2024-05-06T09:25:53Z
| 2024-05-06T09:25:53Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
It seems there is a bug when downloading a dict of dicts of URLs introduced by:
- #6794
## Steps to reproduce the bug:
```python
from datasets import DownloadManager
dl_manager = DownloadManager()
paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}})
```
Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-7-0e0d76d25b09> in <module>
----> 1 paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}})
.../huggingface/datasets/src/datasets/download/download_manager.py in download(self, url_or_urls)
255 start_time = datetime.now()
256 with stack_multiprocessing_download_progress_bars():
--> 257 downloaded_path_or_paths = map_nested(
258 download_func,
259 url_or_urls,
.../huggingface/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc)
506 batch_size = max(len(iterable) // num_proc + int(len(iterable) % num_proc > 0), 1)
507 iterable = list(iter_batched(iterable, batch_size))
--> 508 mapped = [
509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
.../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0)
507 iterable = list(iter_batched(iterable, batch_size))
508 mapped = [
--> 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
511 ]
.../huggingface/datasets/src/datasets/utils/py_utils.py in _single_map_nested(args)
375 and all(not isinstance(v, types) for v in data_struct)
376 ):
--> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
378
379 # Reduce logging to keep things readable in multiprocessing with tqdm
.../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0)
375 and all(not isinstance(v, types) for v in data_struct)
376 ):
--> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
378
379 # Reduce logging to keep things readable in multiprocessing with tqdm
.../huggingface/datasets/src/datasets/download/download_manager.py in _download_batched(self, url_or_filenames, download_config)
311 )
312 else:
--> 313 return [
314 self._download_single(url_or_filename, download_config=download_config)
315 for url_or_filename in url_or_filenames
.../huggingface/datasets/src/datasets/download/download_manager.py in <listcomp>(.0)
312 else:
313 return [
--> 314 self._download_single(url_or_filename, download_config=download_config)
315 for url_or_filename in url_or_filenames
316 ]
.../huggingface/datasets/src/datasets/download/download_manager.py in _download_single(self, url_or_filename, download_config)
321 # append the relative path to the base_path
322 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 323 out = cached_path(url_or_filename, download_config=download_config)
324 out = tracked_str(out)
325 out.set_origin(url_or_filename)
.../huggingface/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
220 elif is_local_path(url_or_filename):
221 # File, but it doesn't exist.
--> 222 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
223 else:
224 # Something unknown
FileNotFoundError: Local file .../huggingface/datasets/{'frr': 'hf:/datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet'} doesn't exist
```
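Until this is fixed, a hedged workaround sketch is to flatten the nested dict before calling `download` and rebuild the nesting afterwards, since a flat list of URLs goes through the batched code path without issue:
```python
from datasets import DownloadManager

# Workaround sketch: flatten {split: {lang: url}} into a list, download it,
# then rebuild the nested structure from the returned local paths.
dl_manager = DownloadManager()
nested = {"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}}
keys = [(split, lang) for split, inner in nested.items() for lang in inner]
local_paths = dl_manager.download([nested[s][l] for s, l in keys])
paths = {}
for (split, lang), local_path in zip(keys, local_paths):
    paths.setdefault(split, {})[lang] = local_path
```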
Related to:
- #6850
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6869/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6869/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/7032
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7032/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7032/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7032/events
|
https://github.com/huggingface/datasets/pull/7032
| 2,395,531,699
|
PR_kwDODunzps50sJTq
| 7,032
|
Register `.zstd` extension for zstd-compressed files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7032). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@albertvillanova hm I don't know tbh, it's just that \"mlfoundations/dclm-baseline-1.0\" dataset contains [files](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0/tree/main/global-shard_01_of_10/local-shard_0_of_10) with this extension and these files seem to be valid ",
"not sure why CI is failing but seems to be unrelated to this pr? can I merge @lhoestq @albertvillanova ?",
"yes you can merge, the CI failure is unrelated (surely an issue with hub-ci)",
"ah why not, you could try opening a PR\r\n\r\nbtw there is a channel with them at (internal) https://app.slack.com/client/T1RCG4490/C079AKTV11P if you want to let them know",
"@lhoestq, your previous comment was addressed to me or Polina?\r\n\r\n@polinaeterna let me know if it is OK for you.",
"I opened https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0/discussions/7",
"Should we close this PR then?"
] | 2024-07-08T12:39:50Z
| 2024-07-12T15:07:03Z
| 2024-07-12T15:07:03Z
|
CONTRIBUTOR
| null | null | null |
For example, the https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0 dataset files have a `.zstd` extension, which is currently ignored (only `.zst` is registered).
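As a quick illustration of the underlying mechanism (assuming the `zstandard` package is installed, which registers the codec in `fsspec`; note that `datasets` also keeps its own extension table, which is what this PR extends), only the `.zst` suffix is inferred as zstd-compressed out of the box:
```python
import fsspec.utils

# Suffix-based compression inference in fsspec: ".zst" maps to the "zstd"
# codec, while ".zstd" is unknown until the extension is registered.
print(fsspec.utils.infer_compression("data.jsonl.zst"))   # "zstd"
print(fsspec.utils.infer_compression("data.jsonl.zstd"))  # None
```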
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7032/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7032/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7032.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7032",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7032.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7032"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6721
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6721/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6721/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6721/events
|
https://github.com/huggingface/datasets/issues/6721
| 2,173,931,714
|
I_kwDODunzps6Bk5DC
| 6,721
|
Hi, do you know how to load the dataset from a local file now?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50232044?v=4",
"events_url": "https://api.github.com/users/Gera001/events{/privacy}",
"followers_url": "https://api.github.com/users/Gera001/followers",
"following_url": "https://api.github.com/users/Gera001/following{/other_user}",
"gists_url": "https://api.github.com/users/Gera001/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Gera001",
"id": 50232044,
"login": "Gera001",
"node_id": "MDQ6VXNlcjUwMjMyMDQ0",
"organizations_url": "https://api.github.com/users/Gera001/orgs",
"received_events_url": "https://api.github.com/users/Gera001/received_events",
"repos_url": "https://api.github.com/users/Gera001/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Gera001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gera001/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Gera001",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"\r\n@Gera001\r\n# Loading Dataset from Local Files Using π€Hugging Face.\r\n\r\nTo load a dataset from local files using the Hugging Face datasets library, you can use the `load_dataset` function.\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files={'train': 'path/to/train.csv',\r\n 'test': 'path/to/test.csv'})\r\n```\r\n\r\nReference to [HF Datasets docs for loading from local](https://huggingface.co/docs/datasets/en/loading#csv). \r\n\r\n@albertvillanova\r\nthis issue can be closed here.",
"like this: from datasets import load_from_disk\r\ndataset = load_from_disk(data_path)\r\n",
"@ge00009 \r\n> like this: from datasets import load_from_disk dataset = load_from_disk(data_path)\r\n\r\nLoads a dataset that was previously saved using `save_to_disk()`.\r\n\r\nReference link:\r\nhttps://huggingface.co/docs/datasets/en/package_reference/loading_methods#datasets.load_from_disk.example"
] | 2024-03-07T13:58:40Z
| 2024-03-31T08:09:25Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
Hi, if I want to load the dataset from a local file, how do I specify the configuration name?
_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
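For what it's worth, a minimal sketch (the path and config name below are placeholders, not real files): with a local loading script, the configuration name is passed as the second positional argument to `load_dataset`.
```python
from datasets import load_dataset

# Placeholder path and config name, for illustration only: the second
# positional argument selects the configuration of a local loading script.
dataset = load_dataset("path/to/my_dataset.py", "my_config_name", split="train")
```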
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6721/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6721/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5741
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5741/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5741/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5741/events
|
https://github.com/huggingface/datasets/pull/5741
| 1,665,860,919
|
PR_kwDODunzps5OM9nZ
| 5,741
|
Fix CI warnings
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007448 / 0.011353 (-0.003905) | 0.005182 / 0.011008 (-0.005826) | 0.098718 / 0.038508 (0.060210) | 0.034594 / 0.023109 (0.011485) | 0.317301 / 0.275898 (0.041403) | 0.357800 / 0.323480 (0.034320) | 0.005860 / 0.007986 (-0.002126) | 0.004267 / 0.004328 (-0.000061) | 0.074876 / 0.004250 (0.070626) | 0.048002 / 0.037052 (0.010950) | 0.333360 / 0.258489 (0.074871) | 0.362080 / 0.293841 (0.068239) | 0.035957 / 0.128546 (-0.092589) | 0.012245 / 0.075646 (-0.063401) | 0.332970 / 0.419271 (-0.086301) | 0.050825 / 0.043533 (0.007293) | 0.313936 / 0.255139 (0.058797) | 0.340684 / 0.283200 (0.057485) | 0.106630 / 0.141683 (-0.035053) | 1.427898 / 1.452155 (-0.024257) | 1.547518 / 1.492716 (0.054801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296952 / 0.018006 (0.278945) | 0.515708 / 0.000490 (0.515218) | 0.004225 / 0.000200 (0.004025) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029365 / 0.037411 (-0.008046) | 0.111142 / 0.014526 (0.096616) | 0.124414 / 0.176557 (-0.052142) | 0.185227 / 0.737135 (-0.551908) | 0.129545 / 0.296338 (-0.166793) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403303 / 0.215209 (0.188094) | 4.044138 / 2.077655 (1.966483) | 1.803622 / 1.504120 (0.299502) | 1.615436 / 1.541195 (0.074242) | 1.703576 / 1.468490 
(0.235086) | 0.706398 / 4.584777 (-3.878379) | 3.912995 / 3.745712 (0.167283) | 4.004575 / 5.269862 (-1.265287) | 2.101592 / 4.565676 (-2.464085) | 0.087280 / 0.424275 (-0.336995) | 0.012564 / 0.007607 (0.004957) | 0.508484 / 0.226044 (0.282440) | 5.089351 / 2.268929 (2.820422) | 2.269022 / 55.444624 (-53.175602) | 1.933375 / 6.876477 (-4.943102) | 2.136783 / 2.142072 (-0.005289) | 0.862624 / 4.805227 (-3.942603) | 0.172107 / 6.500664 (-6.328557) | 0.066694 / 0.075469 (-0.008775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172513 / 1.841788 (-0.669275) | 15.877519 / 8.074308 (7.803211) | 14.687476 / 10.191392 (4.496084) | 0.189392 / 0.680424 (-0.491032) | 0.017334 / 0.534201 (-0.516866) | 0.420201 / 0.579283 (-0.159082) | 0.418502 / 0.434364 (-0.015862) | 0.489130 / 0.540337 (-0.051207) | 0.580678 / 1.386936 (-0.806258) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007942 / 0.011353 (-0.003411) | 0.005312 / 0.011008 (-0.005696) | 0.074684 / 0.038508 (0.036176) | 0.035952 / 0.023109 (0.012843) | 0.349672 / 0.275898 (0.073774) | 0.377157 / 0.323480 (0.053678) | 0.006399 / 0.007986 (-0.001586) | 0.005769 / 0.004328 (0.001441) | 0.074283 / 0.004250 (0.070032) | 0.053217 / 0.037052 (0.016165) | 0.342545 / 0.258489 (0.084056) | 0.383663 / 0.293841 (0.089822) | 0.037234 / 0.128546 (-0.091312) | 0.012349 / 0.075646 (-0.063298) | 0.086522 / 0.419271 (-0.332749) | 0.049888 / 0.043533 (0.006355) | 0.337686 / 0.255139 (0.082547) | 0.361564 / 0.283200 (0.078365) | 0.104902 / 0.141683 (-0.036781) | 1.478259 / 1.452155 (0.026104) | 1.576376 / 1.492716 (0.083660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.339760 / 0.018006 (0.321753) | 0.530946 / 0.000490 (0.530456) | 0.000474 / 0.000200 (0.000274) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029685 / 0.037411 (-0.007726) | 0.109409 / 0.014526 (0.094883) | 0.125579 / 0.176557 (-0.050978) | 0.175378 / 0.737135 (-0.561757) | 0.130672 / 0.296338 (-0.165667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428456 / 0.215209 (0.213247) | 4.238731 / 2.077655 (2.161077) | 2.046703 / 1.504120 (0.542583) | 1.850701 / 1.541195 (0.309506) | 1.909290 / 1.468490 (0.440800) | 0.714314 / 4.584777 (-3.870463) | 3.816056 / 3.745712 (0.070344) | 2.118567 / 5.269862 (-3.151295) | 1.348017 / 4.565676 (-3.217659) | 0.087140 / 0.424275 (-0.337135) | 0.012546 / 0.007607 (0.004938) | 0.538041 / 0.226044 (0.311997) | 5.381822 / 2.268929 (3.112893) | 2.525685 / 55.444624 (-52.918939) | 2.178659 / 6.876477 (-4.697817) | 2.381054 / 2.142072 (0.238981) | 0.844404 / 4.805227 (-3.960823) | 0.171802 / 6.500664 (-6.328862) | 0.065630 / 0.075469 (-0.009839) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262187 / 1.841788 (-0.579600) | 16.197668 / 8.074308 (8.123360) | 15.148636 / 10.191392 (4.957244) | 0.152601 / 0.680424 (-0.527823) | 0.020238 / 0.534201 (-0.513963) | 0.420141 / 0.579283 (-0.159142) | 0.416295 / 0.434364 (-0.018068) | 0.487051 / 0.540337 (-0.053286) | 0.581942 / 1.386936 (-0.804994) |\n\n</details>\n</details>\n\n\n"
] | 2023-04-13T07:17:02Z
| 2023-04-13T09:48:10Z
| 2023-04-13T09:40:50Z
|
MEMBER
| null | null | null |
Fix warnings in our CI tests.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5741/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5741/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5741.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5741",
"merged_at": "2023-04-13T09:40:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5741.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5741"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6272
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6272/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6272/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6272/events
|
https://github.com/huggingface/datasets/issues/6272
| 1,920,831,487
|
I_kwDODunzps5yfY__
| 6,272
|
Duplicate `data_files` when named `<split>/<split>.parquet`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
| null |
[] | null |
[
"Also reported in https://github.com/huggingface/datasets/issues/6259",
"I think it's best to drop duplicates with a `set` (as a temporary fix) and improve the patterns when/if https://github.com/fsspec/filesystem_spec/pull/1382 gets merged. @lhoestq Do you have some other ideas?",
"Alternatively we could just use this no ?\r\n\r\n```python\r\nif config.FSSPEC_VERSION < version.parse(\"2023.9.0\"):\r\n KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = [\r\n \"{keyword}[{sep}/]**\",\r\n \"**[{sep}]{keyword}[{sep}/]**\",\r\n \"**/{keyword}[{sep}/]**\",\r\n ]\r\nelse:\r\n KEYWORDS_IN_PATH_NAME_BASE_PATTERNS = [\r\n \"{keyword}[{sep}/]**\",\r\n \"**/*[{sep}]{keyword}[{sep}/]**\",\r\n \"**/*/{keyword}[{sep}/]**\",\r\n ]\r\n```\r\n\r\nThis way no need to implement sets, which would require a bit of work since we've always considered a list of pattern to be resolved as the concatenated list of resolved files for each pattern (including duplicates)\r\n",
"Arf `\"**/*/{keyword}[{sep}/]**\"` does return `data/keyword.txt` in latest `fsspec` but not in `glob.glob`\r\n\r\nEDIT: actually forgot to set `recursive=True`",
"Actually `glob.glob` does return it with `recursive=True` ! my bad",
"Pff just tested and my idea sucks, pattern 1 and 3 obviously give duplicates ",
"> I think it's best to drop duplicates with a set (as a temporary fix)\r\n\r\nI started https://github.com/huggingface/datasets/pull/6278 to use DataFilesSet objects instead of DataFilesList"
] | 2023-10-01T15:43:56Z
| 2024-03-15T15:22:05Z
| 2024-03-15T15:22:05Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
e.g. with `u23429/stock_1_minute_ticker`
```ipython
In [1]: from datasets import *
In [2]: b = load_dataset_builder("u23429/stock_1_minute_ticker")
Downloading readme: 100%|██████████████████████████| 627/627 [00:00<00:00, 246kB/s]
In [3]: b.config.data_files
Out[3]:
{NamedSplit('train'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet'],
NamedSplit('validation'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet'],
NamedSplit('test'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet']}
```
This bug is present in the current `datasets` 2.14.5 and also on `main`, even after https://github.com/huggingface/datasets/pull/6244 cc @mariosasko
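Until the resolution patterns are fixed, a hedged workaround sketch: de-duplicate the resolved lists in place (order-preserving) before downloading and preparing.
```python
from datasets import load_dataset_builder

# Workaround sketch: drop duplicated entries while keeping file order
# (dict.fromkeys preserves first occurrences), then build as usual.
b = load_dataset_builder("u23429/stock_1_minute_ticker")
b.config.data_files = {
    split: list(dict.fromkeys(files)) for split, files in b.config.data_files.items()
}
```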
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6272/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6272/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5539
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5539/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5539/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5539/events
|
https://github.com/huggingface/datasets/issues/5539
| 1,587,970,083
|
I_kwDODunzps5epoAj
| 5,539
|
IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/41912135?v=4",
"events_url": "https://api.github.com/users/aalbersk/events{/privacy}",
"followers_url": "https://api.github.com/users/aalbersk/followers",
"following_url": "https://api.github.com/users/aalbersk/following{/other_user}",
"gists_url": "https://api.github.com/users/aalbersk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aalbersk",
"id": 41912135,
"login": "aalbersk",
"node_id": "MDQ6VXNlcjQxOTEyMTM1",
"organizations_url": "https://api.github.com/users/aalbersk/orgs",
"received_events_url": "https://api.github.com/users/aalbersk/received_events",
"repos_url": "https://api.github.com/users/aalbersk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aalbersk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aalbersk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aalbersk",
"user_view_type": "public"
}
|
[
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n```python\r\nfrom datasets import load_dataset\r\nimport torch\r\n\r\ndataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='train')\r\ndef t(batch):\r\n return {\"test\": torch.tensor([1] * len(batch[next(iter(batch))]))}\r\n \r\ndataset.set_transform(t)\r\nd_0 = dataset[0]\r\n```\r\n\r\nStill, the formatter's error message should mention that a dict of **sequences** is expected as the returned value (not just a dict) to make debugging easier.",
"I can take this",
"Fixed in #5553 ",
"> Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> import torch\r\n> \r\n> dataset = load_dataset(\"lambdalabs/pokemon-blip-captions\", split='train')\r\n> def t(batch):\r\n> return {\"test\": torch.tensor([1] * len(batch[next(iter(batch))]))}\r\n> \r\n> dataset.set_transform(t)\r\n> d_0 = dataset[0]\r\n> ```\r\n> \r\n> Still, the formatter's error message should mention that a dict of **sequences** is expected as the returned value (not just a dict) to make debugging easier.\r\n\r\nok, will change it according to suggestion. Thanks for the reply!"
] | 2023-02-16T16:08:51Z
| 2023-02-22T10:30:30Z
| 2023-02-21T13:03:57Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When a dataset contains a 0-dim tensor, `formatting.py` raises the following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in _unnest
return {key: array[0] for key, array in py_dict.items()}
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 137, in <dictcomp>
return {key: array[0] for key, array in py_dict.items()}
IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
```
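For reference, a minimal illustration of the underlying PyTorch behavior that `_unnest` trips over (not from the original report):
```python
import torch

t = torch.tensor(1)  # a 0-dim tensor
print(t.item())      # ok: 1
print(t[0])          # IndexError: invalid index of a 0-dim tensor
```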
### Steps to reproduce the bug
Load any dataset and add a transform method that returns a 0-dim tensor, or create/find a dataset containing a 0-dim tensor. E.g.
```python
from datasets import load_dataset
import torch
dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='train')
def t(batch):
return {"test": torch.tensor(1)}
dataset.set_transform(t)
d_0 = dataset[0]
```
### Expected behavior
The extractor will correctly get a row from the dataset, even if it contains a 0-dim tensor.
### Environment info
`datasets==2.8.0`, but it looks like it also applies to the main branch version (as of 16th February)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5539/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5539/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5887
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5887/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5887/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5887/events
|
https://github.com/huggingface/datasets/issues/5887
| 1,722,166,382
|
I_kwDODunzps5mpixu
| 5,887
|
HuggingFace dataset example gives error
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1328316?v=4",
"events_url": "https://api.github.com/users/donhuvy/events{/privacy}",
"followers_url": "https://api.github.com/users/donhuvy/followers",
"following_url": "https://api.github.com/users/donhuvy/following{/other_user}",
"gists_url": "https://api.github.com/users/donhuvy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/donhuvy",
"id": 1328316,
"login": "donhuvy",
"node_id": "MDQ6VXNlcjEzMjgzMTY=",
"organizations_url": "https://api.github.com/users/donhuvy/orgs",
"received_events_url": "https://api.github.com/users/donhuvy/received_events",
"repos_url": "https://api.github.com/users/donhuvy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/donhuvy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donhuvy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/donhuvy",
"user_view_type": "public"
}
|
[] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt",
"user_view_type": "public"
}
] | null |
[
"Nice catch @donhuvy, that's because some models don't need the `token_type_ids`, as in this case, as the example is using `distilbert-base-cased`, and according to the DistilBert documentation at https://huggingface.co/transformers/v3.0.2/model_doc/distilbert.html, `DistilBert doesnβt have token_type_ids, you donβt need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP])`. `token_type_ids` are neither required in some other well known models such as RoBERTa. \r\n\r\nHere the issue comes due to a mismatch between the tokenizer and the model, as the Colab is using a BERT tokenizer (`bert-base-cased`), while the model is a DistilBERT (`distilbert-base-cased`), so aligning the tokenizer and the model solves it!",
"#self-assign",
"@donhuvy I've created https://github.com/huggingface/datasets/pull/5902 to solve it! π€",
"This has been addressed in #5902.\r\n\r\nThe Quicktour notebook is deprecated now - please use the notebook version of the [Quickstart doc page](https://huggingface.co/docs/datasets/main/en/quickstart) instead (\"Open in Colab\" button)."
] | 2023-05-23T14:09:05Z
| 2023-07-25T14:01:01Z
| 2023-07-25T14:01:00Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug


### Steps to reproduce the bug
Use the reference notebook at https://colab.research.google.com/github/huggingface/datasets/blob/main/notebooks/Overview.ipynb#scrollTo=biqDH9vpvSVz
```python
# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.train().to(device)
for i, batch in enumerate(dataloader):
batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
model.zero_grad()
print(f'Step {i} - loss: {loss:.3}')
if i > 5:
break
```
Error
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-44-7040b885f382>](https://localhost:8080/#) in <cell line: 5>()
5 for i, batch in enumerate(dataloader):
6 batch.to(device)
----> 7 outputs = model(**batch)
8 loss = outputs.loss
9 loss.backward()
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: DistilBertForQuestionAnswering.forward() got an unexpected keyword argument 'token_type_ids'
```
https://github.com/huggingface/datasets/assets/1328316/5d8b1d61-9337-4d59-8423-4f37f834c156
### Expected behavior
The example should run successfully on Google Colab (free).
### Environment info
Windows 11 x64, Google Colab free (my Google Drive is nearly empty, about 200 MB used, but I don't think that causes the problem)
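As a concrete sketch of the mismatch and its fix (loading tokenizer and model from the same checkpoint is the resolution suggested in the comments, not part of the original report):
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# use the same checkpoint for both, so the tokenizer does not emit
# token_type_ids that DistilBERT's forward() cannot accept
checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)
```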
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5887/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5887/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5696
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5696/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5696/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5696/events
|
https://github.com/huggingface/datasets/issues/5696
| 1,651,707,008
|
I_kwDODunzps5icwyA
| 5,696
|
Shuffle a sharded iterable dataset without seed can lead to duplicate data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null |
[] | 2023-04-03T09:40:03Z
| 2023-04-04T14:58:18Z
| 2023-04-04T14:58:18Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
As reported in https://github.com/huggingface/datasets/issues/5360
If `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes.
Because of that, the list of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead of exactly one.
This can happen only when you have a number of shards that is a factor of the number of nodes.
The current workaround is to always set a `seed` in `.shuffle()`.
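A minimal sketch of the workaround (the dataset name is illustrative):
```python
from datasets import load_dataset

# an explicit seed makes every node shuffle the shard list identically,
# so each shard ends up assigned to exactly one node
ds = load_dataset("c4", "en", split="train", streaming=True)
ds = ds.shuffle(seed=42, buffer_size=10_000)
```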
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5696/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5696/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6611
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6611/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6611/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6611/events
|
https://github.com/huggingface/datasets/issues/6611
| 2,096,004,858
|
I_kwDODunzps587n76
| 6,611
|
`load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/15320635?v=4",
"events_url": "https://api.github.com/users/zotroneneis/events{/privacy}",
"followers_url": "https://api.github.com/users/zotroneneis/followers",
"following_url": "https://api.github.com/users/zotroneneis/following{/other_user}",
"gists_url": "https://api.github.com/users/zotroneneis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zotroneneis",
"id": 15320635,
"login": "zotroneneis",
"node_id": "MDQ6VXNlcjE1MzIwNjM1",
"organizations_url": "https://api.github.com/users/zotroneneis/orgs",
"received_events_url": "https://api.github.com/users/zotroneneis/received_events",
"repos_url": "https://api.github.com/users/zotroneneis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zotroneneis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zotroneneis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zotroneneis",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2024-01-23T12:37:57Z
| 2024-01-23T12:37:57Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When loading a large dataset (>1000 GB) from S3, I run into the following error:
```
Traceback (most recent call last):
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper
return await func(*args, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/aiobotocore/client.py", line 383, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the GetObject operation: The difference between the request time and the current time is too large.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/alp/phoneme-classification.monorepo/aws_sagemaker/data_processing/inspect_final_dataset.py", line 13, in <module>
dataset = load_from_disk("s3://speech-recognition-processed-data/whisper/de/train_data/", storage_options=storage_options)
File "/home/alp/.local/lib/python3.10/site-packages/datasets/load.py", line 1902, in load_from_disk
return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/home/alp/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1686, in load_from_disk
fs.download(src_dataset_path, dest_dataset_path.as_posix(), recursive=True)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1480, in download
return self.get(rpath, lpath, recursive=recursive, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 121, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 106, in sync
raise return_result
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 61, in _runner
result[0] = await coro
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 604, in _get
return await _run_coros_in_chunks(
File "/home/alp/.local/lib/python3.10/site-packages/fsspec/asyn.py", line 257, in _run_coros_in_chunks
await asyncio.gather(*chunk, return_exceptions=return_exceptions),
File "/usr/lib/python3.10/asyncio/tasks.py", line 408, in wait_for
return await fut
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1193, in _get_file
body, content_length = await _open_file(range=0)
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 1184, in _open_file
resp = await self._call_s3(
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 348, in _call_s3
return await _error_wrapper(
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 140, in _error_wrapper
raise err
PermissionError: The difference between the request time and the current time is too large.
```
The usual cause of this error is that the time on my local machine is out of sync with the current time. However, this is not the case here: I checked the time and even reset it, with no success. See these resources:
- https://stackoverflow.com/questions/4770635/s3-error-the-difference-between-the-request-time-and-the-current-time-is-too-la
- https://stackoverflow.com/questions/25964491/aws-s3-upload-fails-requesttimetooskewed
The error does not appear when loading a smaller dataset (e.g. our test set) from the same S3 path.
### Steps to reproduce the bug
1. Create a large dataset
2. Try loading it from S3 (a sketch of `storage_options` follows the snippet) using:
```
dataset = load_from_disk("s3://...", storage_options=storage_options)
```
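`storage_options` is not defined in the snippet above; a hypothetical example of its shape, using s3fs-style keys and placeholder values:
```python
# placeholder credentials; real values would come from your AWS setup
storage_options = {
    "key": "<aws-access-key-id>",
    "secret": "<aws-secret-access-key>",
    "client_kwargs": {"region_name": "<aws-region>"},
}
```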
### Expected behavior
Load dataset without running into this error.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.19.3
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6611/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6611/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6240
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6240/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6240/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6240/events
|
https://github.com/huggingface/datasets/issues/6240
| 1,895,723,888
|
I_kwDODunzps5w_nNw
| 6,240
|
Dataloader stuck on multiple GPUs
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/40049003?v=4",
"events_url": "https://api.github.com/users/kuri54/events{/privacy}",
"followers_url": "https://api.github.com/users/kuri54/followers",
"following_url": "https://api.github.com/users/kuri54/following{/other_user}",
"gists_url": "https://api.github.com/users/kuri54/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kuri54",
"id": 40049003,
"login": "kuri54",
"node_id": "MDQ6VXNlcjQwMDQ5MDAz",
"organizations_url": "https://api.github.com/users/kuri54/orgs",
"received_events_url": "https://api.github.com/users/kuri54/received_events",
"repos_url": "https://api.github.com/users/kuri54/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kuri54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuri54/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kuri54",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"What type of dataset are you using in this script? `torch.utils.data.Dataset` or `datasets.Dataset`? Please share the `datasets` package version if it's the latter. Otherwise, it's better to move this issue to the `accelerate` repo.",
"Very sorry, I thought I had a repo in `accelerate!`\r\nI will close this issue and repo the issue in the appropriate place."
] | 2023-09-14T05:30:30Z
| 2023-09-14T23:54:42Z
| 2023-09-14T23:54:42Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am trying to fine-tune CLIP with my code.
When I tried to run it on multiple GPUs using accelerate, I encountered the following phenomenon.
- Validation dataloader stuck in 2nd epoch only on multi-GPU
Specifically, when the "for inputs in valid_loader:" process is finished, it does not proceed to the next step. train_loader process is completed. Also, both train and valid are working correctly in the first epoch.
The accelerate command at that time is as follows.
`accelerate launch --multi_gpu --num_processes=2 {script_name.py} {--arg1} {--arg2} ...`
- This does not happen when a single GPU is used.
`CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...`
- Setting `num_workers=0` in the dataloader did not change the result.
### Steps to reproduce the bug
1. The code for fine-tuning a regular CLIP model was updated for accelerate.
2. Run the code with the accelerate command `accelerate launch --multi_gpu --num_processes=2 {script_name.py} {--arg1} {--arg2} ...` and the above problem occurs.
3. Run `CUDA_VISIBLE_DEVICES="0" accelerate launch {script_name.py} --arg1 --arg2 ...` and it works fine.
### Expected behavior
It should end normally, as when run on a single GPU.
### Environment info
Since `datasets-cli env` did not work, the environment is described below.
- OS: Ubuntu 22.04 with Docker
- Docker: 24.0.5, build ced0996
- Python: 3.10.12
- torch==2.0.1
- accelerate==0.21.0
- transformers==4.33.1
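For reference, a minimal runnable sketch of the kind of loader preparation involved, assuming a standard `accelerate` setup (all names and shapes are illustrative, not the actual script):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
train_loader = DataLoader(TensorDataset(torch.randn(32, 4)), batch_size=8)
valid_loader = DataLoader(TensorDataset(torch.randn(32, 4)), batch_size=8)

# prepare() wraps the loaders so each process iterates its own shard;
# the reported hang occurs when the valid_loader iteration finishes
model, optimizer, train_loader, valid_loader = accelerator.prepare(
    model, optimizer, train_loader, valid_loader
)
```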
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/40049003?v=4",
"events_url": "https://api.github.com/users/kuri54/events{/privacy}",
"followers_url": "https://api.github.com/users/kuri54/followers",
"following_url": "https://api.github.com/users/kuri54/following{/other_user}",
"gists_url": "https://api.github.com/users/kuri54/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kuri54",
"id": 40049003,
"login": "kuri54",
"node_id": "MDQ6VXNlcjQwMDQ5MDAz",
"organizations_url": "https://api.github.com/users/kuri54/orgs",
"received_events_url": "https://api.github.com/users/kuri54/received_events",
"repos_url": "https://api.github.com/users/kuri54/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kuri54/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kuri54/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kuri54",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6240/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6240/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/6477
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6477/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6477/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6477/events
|
https://github.com/huggingface/datasets/pull/6477
| 2,028,022,374
|
PR_kwDODunzps5hRq_N
| 6,477
|
Fix PermissionError on Windows CI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6477). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005383 / 0.011353 (-0.005969) | 0.003644 / 0.011008 (-0.007364) | 0.063375 / 0.038508 (0.024866) | 0.055567 / 0.023109 (0.032457) | 0.261376 / 0.275898 (-0.014522) | 0.283731 / 0.323480 (-0.039749) | 0.004022 / 0.007986 (-0.003964) | 0.002780 / 0.004328 (-0.001549) | 0.049407 / 0.004250 (0.045156) | 0.038208 / 0.037052 (0.001156) | 0.256275 / 0.258489 (-0.002214) | 0.293203 / 0.293841 (-0.000638) | 0.028411 / 0.128546 (-0.100135) | 0.010753 / 0.075646 (-0.064894) | 0.210420 / 0.419271 (-0.208851) | 0.036062 / 0.043533 (-0.007471) | 0.260455 / 0.255139 (0.005317) | 0.294991 / 0.283200 (0.011791) | 0.019020 / 0.141683 (-0.122662) | 1.118334 / 1.452155 (-0.333821) | 1.227391 / 1.492716 (-0.265325) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094700 / 0.018006 (0.076694) | 0.302378 / 0.000490 (0.301888) | 0.000215 / 0.000200 (0.000015) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018745 / 0.037411 (-0.018667) | 0.061103 / 0.014526 (0.046578) | 0.075369 / 0.176557 (-0.101188) | 0.121573 / 0.737135 (-0.615563) | 0.076898 / 0.296338 (-0.219440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284143 / 0.215209 (0.068934) | 2.774298 / 2.077655 (0.696644) | 1.483557 / 1.504120 (-0.020563) | 1.365091 / 1.541195 (-0.176104) | 1.390170 / 
1.468490 (-0.078320) | 0.561179 / 4.584777 (-4.023598) | 2.401654 / 3.745712 (-1.344058) | 2.782628 / 5.269862 (-2.487233) | 1.731497 / 4.565676 (-2.834179) | 0.061798 / 0.424275 (-0.362477) | 0.004998 / 0.007607 (-0.002609) | 0.336920 / 0.226044 (0.110875) | 3.371891 / 2.268929 (1.102963) | 1.832173 / 55.444624 (-53.612452) | 1.573515 / 6.876477 (-5.302962) | 1.595609 / 2.142072 (-0.546463) | 0.647652 / 4.805227 (-4.157575) | 0.118501 / 6.500664 (-6.382164) | 0.042521 / 0.075469 (-0.032948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939310 / 1.841788 (-0.902478) | 11.459855 / 8.074308 (3.385547) | 10.677954 / 10.191392 (0.486562) | 0.141029 / 0.680424 (-0.539395) | 0.014321 / 0.534201 (-0.519880) | 0.306679 / 0.579283 (-0.272604) | 0.262303 / 0.434364 (-0.172061) | 0.327422 / 0.540337 (-0.212915) | 0.436159 / 1.386936 (-0.950777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005430 / 0.011353 (-0.005923) | 0.003646 / 0.011008 (-0.007362) | 0.049272 / 0.038508 (0.010764) | 0.075367 / 0.023109 (0.052257) | 0.275959 / 0.275898 (0.000061) | 0.296317 / 0.323480 (-0.027163) | 0.004129 / 0.007986 (-0.003857) | 0.002731 / 0.004328 (-0.001597) | 0.048475 / 0.004250 (0.044225) | 0.041571 / 0.037052 (0.004518) | 0.277993 / 0.258489 (0.019504) | 0.298709 / 0.293841 (0.004868) | 0.033117 / 0.128546 (-0.095429) | 0.010914 / 0.075646 (-0.064732) | 0.057599 / 0.419271 (-0.361673) | 0.033354 / 0.043533 (-0.010179) | 0.275669 / 0.255139 (0.020530) | 0.288451 / 0.283200 (0.005251) | 0.019953 / 0.141683 (-0.121729) | 1.148608 / 1.452155 (-0.303547) | 1.184818 / 1.492716 (-0.307898) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099566 / 0.018006 (0.081560) | 0.344935 / 0.000490 (0.344445) | 0.000221 / 0.000200 (0.000021) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021925 / 0.037411 (-0.015486) | 0.068623 / 0.014526 (0.054097) | 0.081533 / 0.176557 (-0.095024) | 0.120996 / 0.737135 (-0.616139) | 0.082495 / 0.296338 (-0.213844) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294990 / 0.215209 (0.079781) | 2.892344 / 2.077655 (0.814690) | 1.611090 / 1.504120 (0.106970) | 1.496072 / 1.541195 (-0.045123) | 1.486069 / 1.468490 (0.017579) | 0.569769 / 4.584777 (-4.015008) | 2.477623 / 3.745712 (-1.268089) | 2.819576 / 5.269862 (-2.450286) | 1.745717 / 4.565676 (-2.819959) | 0.063763 / 0.424275 (-0.360512) | 0.004970 / 0.007607 (-0.002637) | 0.344879 / 0.226044 (0.118834) | 3.452795 / 2.268929 (1.183867) | 1.964468 / 55.444624 (-53.480156) | 1.674526 / 6.876477 (-5.201951) | 1.679716 / 2.142072 (-0.462356) | 0.650005 / 4.805227 (-4.155222) | 0.117019 / 6.500664 (-6.383646) | 0.048297 / 0.075469 (-0.027172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965422 / 1.841788 (-0.876366) | 11.989414 / 8.074308 (3.915106) | 10.938462 / 10.191392 (0.747070) | 0.140089 / 0.680424 (-0.540334) | 0.015533 / 0.534201 (-0.518668) | 0.292188 / 0.579283 (-0.287095) | 0.277903 / 0.434364 (-0.156461) | 0.326164 / 0.540337 (-0.214173) | 0.565674 / 1.386936 (-0.821262) |\n\n</details>\n</details>\n\n\n"
] | 2023-12-06T08:34:53Z
| 2023-12-06T09:24:11Z
| 2023-12-06T09:17:52Z
|
MEMBER
| null | null | null |
Fix #6476.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6477/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6477/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6477.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6477",
"merged_at": "2023-12-06T09:17:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6477.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6477"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4901
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4901/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4901/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4901/events
|
https://github.com/huggingface/datasets/pull/4901
| 1,352,438,915
|
PR_kwDODunzps494FNX
| 4,901
|
Raise ManualDownloadError from get_dataset_config_info
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-26T15:45:56Z
| 2022-08-30T10:42:21Z
| 2022-08-30T10:40:04Z
|
MEMBER
| null | null | null |
This PR raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download.
Related to:
- #4898
CC: @severo
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4901/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4901/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4901.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4901",
"merged_at": "2022-08-30T10:40:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4901.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4901"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5107
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5107/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5107/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5107/events
|
https://github.com/huggingface/datasets/pull/5107
| 1,406,736,710
|
PR_kwDODunzps5ArjCZ
| 5,107
|
Multiprocessed dataset builder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"I would also like to add a test, but am not sure whether it should go into `test_builder` (more natural imo) or `test_load` (which already contains a lot of the things I have to import to run my current testing setup). For reference, what I run to test that it works looks like:\r\n\r\n```\r\nimport os\r\nfrom pathlib import Path\r\nimport shutil\r\n\r\nimport datasets\r\nfrom datasets.builder import DatasetBuilder\r\nfrom datasets.features import Features, Value\r\n\r\nDATASET_LOADING_SCRIPT_NAME = \"__dummy_dataset1__\"\r\n\r\nDATASET_LOADING_SCRIPT_CODE = \"\"\"\r\nimport os\r\n\r\nimport datasets\r\nfrom datasets import DatasetInfo, Features, Split, SplitGenerator, Value\r\n\r\n\r\nclass __DummyDataset1__(datasets.GeneratorBasedBuilder):\r\n\r\n def _info(self) -> DatasetInfo:\r\n return DatasetInfo(features=Features({\"text\": Value(\"string\")}))\r\n\r\n def _split_generators(self, dl_manager):\r\n return [\r\n SplitGenerator(Split.TRAIN, gen_kwargs={\"filepaths\": [os.path.join(dl_manager.manual_dir, \"train1.txt\"), os.path.join(dl_manager.manual_dir, \"train2.txt\")]}),\r\n SplitGenerator(Split.TEST, gen_kwargs={\"filepaths\": [os.path.join(dl_manager.manual_dir, \"test.txt\")]}),\r\n ]\r\n\r\n def _generate_examples(self, filepaths, **kwargs):\r\n idx = 0\r\n for filepath in filepaths:\r\n with open(filepath, \"r\", encoding=\"utf-8\") as f:\r\n for line in f:\r\n yield idx, {\"text\": line.strip()}\r\n idx += 1\r\n\"\"\"\r\n\r\n\r\ndef dataset_loading_script_dir(tmp_path):\r\n script_name = DATASET_LOADING_SCRIPT_NAME\r\n script_dir = tmp_path / script_name\r\n script_dir.mkdir()\r\n script_path = script_dir / f\"{script_name}.py\"\r\n with open(script_path, \"w\") as f:\r\n f.write(DATASET_LOADING_SCRIPT_CODE)\r\n return str(script_dir)\r\n\r\n\r\ndef data_dir(tmp_path):\r\n data_dir = tmp_path / \"data_dir\"\r\n data_dir.mkdir()\r\n with open(data_dir / \"train1.txt\", \"w\") as f:\r\n f.write(\"foo\\n\" * 10)\r\n with open(data_dir / \"train2.txt\", \"w\") as f:\r\n f.write(\"foo\\n\" * 10)\r\n with open(data_dir / \"test.txt\", \"w\") as f:\r\n f.write(\"bar\\n\" * 10)\r\n return str(data_dir)\r\n\r\n\r\ndef load_dataset_builder_multiprocessed(tmp_path):\r\n builder = datasets.load_dataset_builder(\r\n os.path.join(dataset_loading_script_dir(tmp_path), DATASET_LOADING_SCRIPT_NAME + \".py\"),\r\n data_dir=data_dir(tmp_path),\r\n )\r\n assert isinstance(builder, DatasetBuilder)\r\n assert builder.name == DATASET_LOADING_SCRIPT_NAME\r\n assert builder.info.features == Features({\"text\": Value(\"string\")})\r\n builder.download_and_prepare(tmp_path / \"prepare_target\", max_shard_size=500, num_proc=2)\r\n\r\nif __name__ == \"__main__\":\r\n tmp_path = \"tmp\"\r\n if os.path.exists(tmp_path):\r\n raise FileExistsError(f\"path {tmp_path} already exists\")\r\n os.makedirs(tmp_path)\r\n try:\r\n load_dataset_builder_multiprocessed(Path(tmp_path))\r\n finally:\r\n # pass\r\n shutil.rmtree(tmp_path)\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5107). All of your documentation changes will be reflected on that endpoint.",
"Nice ! I think the test can go in `test_builder.py` :)",
"I've added sharded arrow dataset loading. Two WIP items in the PR:\r\n- ~~Order is not conserved (it seems like the sharded files are read in the wrong order)~~\r\n- the tqdm for preparing the splits is wrong (it compares against the size of the whole split rather than against the size of the multiprocessing shard, but I am not sure how to access the latter)\r\n\r\nAlso `naming.filenames_for_dataset_split` is not very elegant imo.\r\n\r\n@lvwerra if you don't care about order, as I do, it's functional for now but I'd still quite like to get to the bottom of this.",
"Found the ordering bug ! (`glob.glob` returning stuff in arbitrary order)",
"I fixed the tqdm to be less misleading, but it can't tell where to stop. I am a bit hesitant to add a top-level tqdm (on the shard iterator) since for most intents it will do 0 -> N shards straight, but I am not sure what is the best way to present that info here.",
"I'm continuing the PR :)",
"Did a few changes:\r\n- make shards naming consistent:\r\n - use `{builder_name}-{split_name}.{file_format}` when there's only 1 shard\r\n - otherwise use `{builder_name}-{split_name}-{shard_idx:05d}-of-{num_shards:05d}.{file_format}`\r\n- update the reader to support reading several shards\r\n - added a new `shard_lengths` field in `SplitInfo` (FYI it is saved in `dataset_info.json` next to the shards as usual)\r\n - it's None when there's only 1 shard\r\n - otherwise it's a list of integers that correspond to the number of rows per shard\r\n - implemented partial reading to only memory map the required shards\r\n - e.g. when someone asks for a partial split like `train[:10%]`\r\n- align the sharding for beam datasets\r\n - no more combining into 1 big arrow file\r\n- added a tqdm bar\r\n - only one single bar, handled by the main process\r\n - gathers progress updates from other processes using `iflatmap_unordered`\r\n - shows the number of examples (even for datasets prepared by generating arrow tables)\r\n- disabled multiprocessing by default - users must pass `num_proc` explicitly\r\n- tests\r\n- docs",
"Alright this is ready for review - sorry it ended up so big ^^'\r\n\r\nIf I can do anything to make it easier for your to review this PR @mariosasko let me know",
"Multiprocessing is disabled by default but we may show a warning to encourage users to pass `num_proc` if the dataset is split in many files. Let me know what you think",
"Hey, is this error seems to you guys natural? \r\n\r\nThe package built from `0d4e3907` commit tag, and here is the version displayed from the import ... \r\n```bash\r\n>>> datasets.__version__\r\n'2.6.1.dev0'\r\n>>> \r\n```\r\n\r\n```bash\r\n>>> data = load_dataset('dataset_loaders/rfw2latentplay', num_proc=14)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/load.py\", line 1719, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/load.py\", line 1523, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 1292, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 303, in __init__\r\n self.config, self.config_id = self._create_builder_config(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 456, in _create_builder_config\r\n builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'num_proc'\r\n```\r\n\r\nLet me know if I can help fixing this ... \r\n",
"> Do we have some benchmarks to see the speed-up?\r\n\r\nOn my machine running `load_dataset(\"oscar-corpus/OSCAR-2201\", \"br\")` (which is split in shards) I go from 2-3k examples per sec to 4-5k examples per sec with num_proc=2 π",
"> Hey, is this error seems to you guys natural?\r\n>\r\n> The package built from 0d4e3907 commit tag, and here is the version displayed from the import ...\r\n\r\nI don't know where you got the `0d4e3907` commit tag from, it doesn't seem to be in this PR. You should try installing from this PR, or wait for it to be merged on `main`",
"## Splits vs Shards\r\n\r\nMaybe it's a good idea to add some documentation on the `sharding` that can be achieved by passing `list` based arguments to the `SplitGenerator`s `gen_kwargs` ... \r\n\r\nI had to read the whole dataset generation source code to find this out ... \r\n\r\n\r\n",
"> Maybe it's a good idea to add some documentation on the sharding that can be achieved by passing list based arguments to the SplitGenerators gen_kwargs ...\r\n\r\nThis is part of this PR :) you can check the changes in docs/source/dataset_script.mdx",
"I took your comments into account @mariosasko thanks !\r\nLet me know if it's good for you now ;)",
"The doc CI should be fixed by now hopefully, merging !"
] | 2022-10-12T19:59:17Z
| 2022-12-01T15:37:09Z
| 2022-11-09T17:11:43Z
|
CONTRIBUTOR
| null | null | null |
This PR adds the multiprocessing part of #2650 (but not the caching of already-computed arrow files). On the other hand, loading of sharded arrow files still needs to be implemented (sharded parquet files can already be loaded).
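As a usage sketch (the dataset name and `num_proc` value are taken from the benchmark mentioned in the discussion, not from the PR description itself):
```python
from datasets import load_dataset

# num_proc shards the download-and-prepare work across processes
ds = load_dataset("oscar-corpus/OSCAR-2201", "br", num_proc=2)
```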
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5107/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5107/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5107.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5107",
"merged_at": "2022-11-09T17:11:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5107.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5107"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7313
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7313/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7313/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7313/events
|
https://github.com/huggingface/datasets/issues/7313
| 2,726,240,634
|
I_kwDODunzps6ifyF6
| 7,313
|
Cannot create a dataset with relative audio path
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4",
"events_url": "https://api.github.com/users/sedol1339/events{/privacy}",
"followers_url": "https://api.github.com/users/sedol1339/followers",
"following_url": "https://api.github.com/users/sedol1339/following{/other_user}",
"gists_url": "https://api.github.com/users/sedol1339/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sedol1339",
"id": 5188731,
"login": "sedol1339",
"node_id": "MDQ6VXNlcjUxODg3MzE=",
"organizations_url": "https://api.github.com/users/sedol1339/orgs",
"received_events_url": "https://api.github.com/users/sedol1339/received_events",
"repos_url": "https://api.github.com/users/sedol1339/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sedol1339/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sedol1339/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sedol1339",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hello ! when you `cast_column` you need the paths to be absolute paths or relative paths to your working directory, not the original dataset directory.\r\n\r\nThough I'd recommend structuring your dataset as an AudioFolder which automatically links a metadata.jsonl or csv to the audio files via relative paths **within** the dataset repository: https://huggingface.co/docs/datasets/v3.2.0/en/audio_load#audiofolder",
"@lhoestq thank you, but there are two problems with using AudioFolder:\r\n1. It is said that AudioFolder requires metadata.csv. However, my datset is too large and contains nested and np.ndarray fields, so I can't use csv.\r\n2. It is said that I need to load the dataset with `load_dataset(\"audiofolder\", ...)`. However, if possible, I want my dataset to be loaded as usual with `load_dataset(dataset_name)` after I upload if to HF.",
"You can use metadata.jsonl if you have nested data :)\r\n\r\nAnd actually if you have a dataset structured as an AudioFolder then `load_dataset(dataset_name)` does work after uploading to HF",
"I have created an audio dataset. In my repo, I have explained the steps and structure. An example dataset is also available in the repo. https://github.com/pr0mila/ParquetToHuggingFace "
] | 2024-12-09T07:34:20Z
| 2025-04-19T07:13:08Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hello! I want to create a dataset of parquet files, with the audio stored as separate .mp3 files. However, loading fails with "No such file or directory" (see the reproduction code below).
### Steps to reproduce the bug
Creating a dataset
```
from pathlib import Path
from datasets import Dataset, load_dataset, Audio
Path('my_dataset/audio').mkdir(parents=True, exist_ok=True)
Path('my_dataset/audio/file.mp3').touch(exist_ok=True)
Dataset.from_list(
[{'audio': {'path': 'audio/file.mp3'}}]
).to_parquet('my_dataset/data.parquet')
```
Result:
```
# my_dataset
# ├── audio
# │   └── file.mp3
# └── data.parquet
```
Trying to load the dataset
```
dataset = (
load_dataset('my_dataset', split='train')
.cast_column('audio', Audio(sampling_rate=16_000))
)
dataset[0]
>>> FileNotFoundError: [Errno 2] No such file or directory: 'audio/file.mp3'
```
### Expected behavior
I expect the dataset to load correctly.
I've found 2 workarounds, but they are not very good (a cleaner option is sketched after the list):
1. I can specify an absolute path to the audio; however, when I move the folder or upload it to HF, it will stop working.
2. I can set `'path': 'file.mp3'` and load with `load_dataset('my_dataset', data_dir='audio')` - it seems to work, but does this mean that anyone who wants to use this dataset from Hugging Face should also pass the `data_dir` argument, otherwise it won't work?
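For reference, a minimal AudioFolder layout along the lines suggested in the comments above might look like this (the `text` field is illustrative; `file_name` is the metadata column AudioFolder links to the audio files):
```
my_dataset/
├── metadata.jsonl   # one JSON object per line, e.g. {"file_name": "audio/file.mp3", "text": "..."}
└── audio/
    └── file.mp3
```
With this structure, `load_dataset('my_dataset')` (locally or from the Hub) should resolve the relative paths itself, without a `data_dir` argument.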
### Environment info
datasets 3.1.0, Ubuntu 24.04.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7313/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7313/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/4747
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4747/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4747/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4747/events
|
https://github.com/huggingface/datasets/pull/4747
| 1,318,586,932
|
PR_kwDODunzps48IWKj
| 4,747
|
Shard parquet in `download_and_prepare`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This is ready for review cc @mariosasko :) please let me know what you think !"
] | 2022-07-26T18:05:01Z
| 2022-09-15T13:43:55Z
| 2022-09-15T13:41:26Z
|
MEMBER
| null | null | null |
Following https://github.com/huggingface/datasets/pull/4724 (needs to be merged first)
It's good practice to shard parquet files to enable parallelism with spark/dask/etc.
I added the `max_shard_size` parameter to `download_and_prepare` (defaulting to 500MB for parquet, and None for arrow).
```python
from datasets import load_dataset_builder
output_dir = "./output_dir" # also supports "s3://..."
builder = load_dataset_builder("squad")
builder.download_and_prepare(output_dir, file_format="parquet", max_shard_size="5MB")
```
### Implementation details
The examples are written to a parquet file until `ParquetWriter._num_bytes > max_shard_size`. When this happens, a new writer is instantiated to start writing the next shard. At the end, all the shards are renamed to include the total number of shards in their names: `{builder.name}-{split}-{shard_id:05d}-of-{num_shards:05d}.parquet`
I also added the `MAX_SHARD_SIZE` config variable (defaulting to 500MB).
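A rough, self-contained sketch of that rotation logic using plain pyarrow (file names and sizes are hypothetical, and this is not the library's actual `ParquetWriter` class):
```python
import pyarrow as pa
import pyarrow.parquet as pq

max_shard_size = 5 * 1024 * 1024  # 5MB, matching the example above
batches = (pa.RecordBatch.from_pydict({"text": ["x" * 100] * 1000}) for _ in range(50))

shard_id, num_bytes, writer = 0, 0, None
for batch in batches:
    if writer is None:  # open a writer for the current shard
        writer = pq.ParquetWriter(f"shard-{shard_id:05d}.parquet", batch.schema)
    writer.write_table(pa.Table.from_batches([batch]))
    num_bytes += batch.nbytes
    if num_bytes > max_shard_size:  # rotate: close this shard, start the next one
        writer.close()
        shard_id, num_bytes, writer = shard_id + 1, 0, None
if writer is not None:
    writer.close()
# the shards are then renamed to {builder.name}-{split}-{shard_id:05d}-of-{num_shards:05d}.parquet
```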
TODO:
- [x] docstrings
- [x] docs
- [x] tests
cc @severo
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4747/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4747/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/4747.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4747",
"merged_at": "2022-09-15T13:41:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4747.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4747"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5700
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5700/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5700/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5700/events
|
https://github.com/huggingface/datasets/pull/5700
| 1,652,527,530
|
PR_kwDODunzps5Ng6g_
| 5,700
|
fix: fix wrong modification of the 'cache_file_name' -related parametβ¦
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4",
"events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}",
"followers_url": "https://api.github.com/users/FrancoisNoyez/followers",
"following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FrancoisNoyez",
"id": 47528215,
"login": "FrancoisNoyez",
"node_id": "MDQ6VXNlcjQ3NTI4MjE1",
"organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs",
"received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events",
"repos_url": "https://api.github.com/users/FrancoisNoyez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FrancoisNoyez",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Have you tried to set the cache file names if `keep_in_memory`is True ?\r\n\r\n```diff\r\n- if self.cache_files:\r\n+ if self.cache_files and not keep_in_memory:\r\n```\r\n\r\nThis way it doesn't change the indice cache arguments and leave them as `None`",
"@lhoestq \r\nRegarding what you suggest:\r\nThe thing is, if cached files already exist and do correspond to the split that we are currently trying to perform, then it would be a shame not to use them, would it not? So I don't think that we should necessarily bypass this step in the method (corresponding to the reading of already existing data), if 'keep_in_memory' = True. For me, 'keep_in_memory' = True is supposed to mean \"don't cache the output of this method\", but it should say nothing regarding what to do with potentially already existing cached data, should it?\r\nBesides, even if we do what you suggest, and do only that (so, not the modifs that I suggested), then, assuming that 'keep_in_memory' = False and that there exist cached files, if the following check on the existence of cached files with specific name fails, we will still have ended up modifying an input value which will be then used in the remaining of the method, potentially altering the behavior that the user intended the method's call to have. Basically, the issue with what you suggest is that we can't guaranty that we won't continue with the remaining of the method even if this condition is met. Because of that, in my opinion, the best way to not have to worry about potential, unwanted side effects in the rest of the code is to not modify those variables in place, and so, here, to use other variables.\r\nSo, I'm sorry, but for those two reasons, I don't think that what you are suggesting addresses the problems which are described in the opened issue.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5700). All of your documentation changes will be reflected on that endpoint.",
"Makes sense ! Therefore removing the ValueError messages sounds good to me, thanks for detailing.\r\n\r\nThen I think it's fine to keep using the same variables for the cache file names is enough instead of defining new ones - it doesn't alter the behavior of the function. Otherwise it would feel a bit confusing to have similar variables with slightly modified names just for that",
"Ok for the removing the ValueError exceptions, thanks.\r\n\r\nThat said, it seems to me like we should still find a way not to modify the values input by the user, insofar as they can be used elsewhere down the line in the program. Sure, here, by removing the raising of those ValueError exceptions, we have fixed one use cases were allowing this modification actually caused an issue, but maybe there are other use cases where this would also caused an issue? Also, maybe in the future we will add other functionalities which will depend on the values of those input parameters, with then new risks of such an issue occurring?\r\nThat's why, in order not to have to worry about that, and in order to make the code a bit more future -proof, I suggest that make sure those input values are not modified.\r\n\r\nOne way that I did this is to create different but similar looking variable names. If you find this confusing, we can always add a comment.\r\nAnother way would be to not store the result of the conditional definition of the values (the '\\_cache_file_name = (... if condition else ...)' in my proposition of code), and to use it every time we need. But since we use those new variables at least twice, that creates code redundancy, which is not great either.\r\nFinally, a third way that I can imagine would be to put all this logic into its own method, which would then encapsulate it, and protect the remaining of the 'train_test_split' code from all unintended side effect that this logic can currently cause. This one is probably best. Also, maybe it could be used to remove some code redundancy elsewhere in the definition of the Dataset class? I have not checked if such a code redundancy exists.",
"We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nNote that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though, but it should be easy to add in `_select_with_indices_mapping`:\r\n- add keep_in_memory in `_new_dataset_with_indices` that uses InMemoryTable.from_file\r\n- inside `_select_with_indices_mapping` return the dataset from `_new_dataset_with_indices` if:\r\n - `keep_in_memory=True`\r\n - and `indices_cache_file_name` is not None and exists \r\n - and `is_caching_enabled()`\r\n\r\nBecause if we let it this way it would recreate the cache file unfortunately",
"> We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nI think the fact that it's a style of the library is not really an argument in itself; however, after thinking through it several times, I think I know see why your solution is acceptable: as soon as the user specifies that 'keep_in_memory=True', they should not care anymore about the value of the '\\_indices_cache_file_name' variables, since from their point of view those are now irrelevant. So it's \"fine\" if we allow ourselves to modify the value of those variables, if it helps the internal code being more concise.\r\nStill, I find that it's a bit unintuitive, and a risk as far as future evolution of the method / of the code is concerned; someone tasked with doing that would need to have the knowledge of a lot of, if not all, the other methods of the class, in order to understand the potentially far-reaching impact of some modifications made to this portion of the code. But I guess that's a choice which is the library's owners to make. Also, if we use your proposed solution, as I explained, we can't get the benefit of potentially reusing possibly already existing cached data.\r\nOn that note...\r\n\r\n> Note that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though\r\n\r\nI'm not sure what you mean here:\r\nWithin the current code trying to load up the potentially already existing split data, there is no trace of the 'keep_in_memory' variable. So why do you say that 'the case where it would reload the cache even if keep_in_memory=True is not implemented' (I assume that you mean 'currently implemented')? Surely, currently, this bit of code works regardless of the value of the 'keep_in_memory' variable', does it not?"
] | 2023-04-03T18:05:26Z
| 2023-04-06T17:17:27Z
| null |
NONE
| null | null | null |
β¦ers values in 'train_test_split' + fix bad interaction between 'keep_in_memory' and 'cache_file_name' -related parameters (#5699)
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5700/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5700/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5700.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5700",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5700.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5700"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4650
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4650/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4650/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4650/events
|
https://github.com/huggingface/datasets/issues/4650
| 1,296,680,037
|
I_kwDODunzps5NScRl
| 4,650
|
Add SPECTER dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel",
"user_view_type": "public"
}
|
[
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] |
open
| false
| null |
[] | null |
[
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/SPECTER)"
] | 2022-07-07T01:41:32Z
| 2022-07-14T02:07:49Z
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
## Adding a Dataset
- **Name:** *SPECTER*
- **Description:** *SPECTER: Document-level Representation Learning using Citation-informed Transformers*
- **Paper:** *https://doi.org/10.18653/v1/2020.acl-main.207*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating document-level representation models of scientific papers (see the loading sketch below)*
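A hedged loading sketch once the data is on the Hub (the repo name comes from the comment above; the exact split and column names are assumptions based on the `specter_train_triples` file name):

```python
from datasets import load_dataset

# Assumed repo name (see the comment above); columns are not verified here.
ds = load_dataset("embedding-data/SPECTER", split="train")
print(ds[0])  # expected to be a (query, positive, negative)-style triple
```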
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4650/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/4650/timeline
| null | null | null | null |
https://api.github.com/repos/huggingface/datasets/issues/5620
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5620/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5620/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5620/events
|
https://github.com/huggingface/datasets/pull/5620
| 1,613,460,520
|
PR_kwDODunzps5LefAf
| 5,620
|
Bump pyarrow to 8.0.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009873 / 0.011353 (-0.001480) | 0.005180 / 0.011008 (-0.005828) | 0.099587 / 0.038508 (0.061079) | 0.035674 / 0.023109 (0.012565) | 0.299156 / 0.275898 (0.023258) | 0.361253 / 0.323480 (0.037773) | 0.008159 / 0.007986 (0.000173) | 0.004245 / 0.004328 (-0.000084) | 0.076809 / 0.004250 (0.072559) | 0.045251 / 0.037052 (0.008199) | 0.306002 / 0.258489 (0.047513) | 0.345758 / 0.293841 (0.051917) | 0.037826 / 0.128546 (-0.090721) | 0.011887 / 0.075646 (-0.063759) | 0.333804 / 0.419271 (-0.085467) | 0.047859 / 0.043533 (0.004326) | 0.291866 / 0.255139 (0.036727) | 0.319356 / 0.283200 (0.036157) | 0.104241 / 0.141683 (-0.037442) | 1.443816 / 1.452155 (-0.008338) | 1.514654 / 1.492716 (0.021938) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009846 / 0.018006 (-0.008160) | 0.439488 / 0.000490 (0.438999) | 0.003227 / 0.000200 (0.003028) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027553 / 0.037411 (-0.009858) | 0.105337 / 0.014526 (0.090811) | 0.116203 / 0.176557 (-0.060354) | 0.161140 / 0.737135 (-0.575995) | 0.123002 / 0.296338 (-0.173336) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400102 / 0.215209 (0.184893) | 3.976748 / 2.077655 (1.899094) | 1.794763 / 1.504120 (0.290643) | 1.602477 / 1.541195 (0.061282) | 1.703689 / 1.468490 
(0.235199) | 0.696751 / 4.584777 (-3.888026) | 3.713832 / 3.745712 (-0.031880) | 2.124536 / 5.269862 (-3.145326) | 1.313005 / 4.565676 (-3.252671) | 0.086130 / 0.424275 (-0.338146) | 0.012085 / 0.007607 (0.004477) | 0.512976 / 0.226044 (0.286932) | 5.135313 / 2.268929 (2.866384) | 2.318173 / 55.444624 (-53.126451) | 1.996360 / 6.876477 (-4.880117) | 2.060150 / 2.142072 (-0.081922) | 0.853534 / 4.805227 (-3.951693) | 0.165586 / 6.500664 (-6.335078) | 0.062365 / 0.075469 (-0.013104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.178843 / 1.841788 (-0.662945) | 14.541639 / 8.074308 (6.467331) | 14.090782 / 10.191392 (3.899390) | 0.158717 / 0.680424 (-0.521707) | 0.028825 / 0.534201 (-0.505376) | 0.441427 / 0.579283 (-0.137856) | 0.439856 / 0.434364 (0.005492) | 0.530610 / 0.540337 (-0.009727) | 0.634044 / 1.386936 (-0.752892) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007502 / 0.011353 (-0.003851) | 0.005208 / 0.011008 (-0.005801) | 0.075020 / 0.038508 (0.036512) | 0.033297 / 0.023109 (0.010188) | 0.342218 / 0.275898 (0.066320) | 0.376716 / 0.323480 (0.053236) | 0.005906 / 0.007986 (-0.002080) | 0.005320 / 0.004328 (0.000992) | 0.073531 / 0.004250 (0.069281) | 0.049091 / 0.037052 (0.012039) | 0.344202 / 0.258489 (0.085713) | 0.380556 / 0.293841 (0.086715) | 0.037500 / 0.128546 (-0.091047) | 0.012404 / 0.075646 (-0.063242) | 0.087254 / 0.419271 (-0.332017) | 0.055145 / 0.043533 (0.011612) | 0.344112 / 0.255139 (0.088973) | 0.359052 / 0.283200 (0.075852) | 0.108337 / 0.141683 (-0.033345) | 1.450332 / 1.452155 (-0.001822) | 1.553607 / 1.492716 (0.060891) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216335 / 0.018006 (0.198329) | 0.436813 / 0.000490 (0.436323) | 0.005055 / 0.000200 (0.004855) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030037 / 0.037411 (-0.007374) | 0.110854 / 0.014526 (0.096329) | 0.121967 / 0.176557 (-0.054589) | 0.174029 / 0.737135 (-0.563107) | 0.128340 / 0.296338 (-0.167998) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424463 / 0.215209 (0.209254) | 4.201822 / 2.077655 (2.124167) | 2.043075 / 1.504120 (0.538956) | 1.851841 / 1.541195 (0.310647) | 1.947790 / 1.468490 (0.479300) | 0.684110 / 4.584777 (-3.900667) | 3.763536 / 3.745712 (0.017824) | 3.106988 / 5.269862 (-2.162873) | 1.498305 / 4.565676 (-3.067372) | 0.085079 / 0.424275 (-0.339196) | 0.012241 / 0.007607 (0.004634) | 0.520877 / 0.226044 (0.294832) | 5.181455 / 2.268929 (2.912527) | 2.443038 / 55.444624 (-53.001586) | 2.130823 / 6.876477 (-4.745654) | 2.217901 / 2.142072 (0.075829) | 0.837116 / 4.805227 (-3.968111) | 0.166581 / 6.500664 (-6.334083) | 0.065510 / 0.075469 (-0.009959) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289317 / 1.841788 (-0.552471) | 15.122019 / 8.074308 (7.047710) | 13.919670 / 10.191392 (3.728278) | 0.150047 / 0.680424 (-0.530377) | 0.017612 / 0.534201 (-0.516589) | 0.426239 / 0.579283 (-0.153044) | 0.425686 / 0.434364 (-0.008678) | 0.521436 / 0.540337 (-0.018901) | 0.618217 / 1.386936 (-0.768719) |\n\n</details>\n</details>\n\n\n",
"We haven't updated the minimal version requirement for PyArrow in a while, so it's ok to make a bigger leap IMO, e.g., PyArrow 8.0 (Colab installs 9.0). With this change, we should also remove the PyArrow version check in `folder_based_builder.py`, and the ones in `table.py`/`arrow_dataset.py` regarding the `to_reader` API if we decide to bump PyArrow to version 8.0.",
"I think it's a good opportunity to bump the version to 8.0 which offers higher performance anyway, I wouldn't bother trying to support 6.0.1 anymore. Only 1% of users based on 6.0.1 use the latest `datasets` version 2.10.1\r\n\r\nBumping to 8.0 if it sounds good to you",
"Sure, it is OK for those other reasons. I would just not stress that the increase of the minimum version is to support pandas 2.0 though...",
"If requiring min 8.0, do you know the percentage of people using 7.0 and latest datasets version?",
"Around 10% of users have 7.0.0, and 25% among them use the latest datasets version",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006744 / 0.011353 (-0.004609) | 0.004585 / 0.011008 (-0.006423) | 0.097828 / 0.038508 (0.059320) | 0.028230 / 0.023109 (0.005121) | 0.302190 / 0.275898 (0.026292) | 0.335022 / 0.323480 (0.011542) | 0.005107 / 0.007986 (-0.002878) | 0.004648 / 0.004328 (0.000320) | 0.076842 / 0.004250 (0.072592) | 0.038291 / 0.037052 (0.001239) | 0.313286 / 0.258489 (0.054797) | 0.342534 / 0.293841 (0.048693) | 0.031325 / 0.128546 (-0.097221) | 0.011632 / 0.075646 (-0.064014) | 0.321879 / 0.419271 (-0.097392) | 0.042204 / 0.043533 (-0.001329) | 0.304442 / 0.255139 (0.049303) | 0.330912 / 0.283200 (0.047712) | 0.085446 / 0.141683 (-0.056237) | 1.469990 / 1.452155 (0.017835) | 1.551147 / 1.492716 (0.058431) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185961 / 0.018006 (0.167955) | 0.404675 / 0.000490 (0.404186) | 0.003212 / 0.000200 (0.003012) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023876 / 0.037411 (-0.013535) | 0.097820 / 0.014526 (0.083295) | 0.107382 / 0.176557 (-0.069174) | 0.167598 / 0.737135 (-0.569537) | 0.108789 / 0.296338 (-0.187550) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455004 / 0.215209 (0.239795) | 4.529104 / 2.077655 (2.451449) | 2.180068 / 1.504120 (0.675948) | 1.982109 / 1.541195 (0.440914) | 2.041856 / 1.468490 
(0.573366) | 0.702029 / 4.584777 (-3.882747) | 3.368613 / 3.745712 (-0.377099) | 1.932303 / 5.269862 (-3.337559) | 1.278340 / 4.565676 (-3.287336) | 0.082836 / 0.424275 (-0.341439) | 0.012349 / 0.007607 (0.004742) | 0.548197 / 0.226044 (0.322153) | 5.509982 / 2.268929 (3.241053) | 2.612889 / 55.444624 (-52.831736) | 2.278157 / 6.876477 (-4.598320) | 2.386923 / 2.142072 (0.244851) | 0.803332 / 4.805227 (-4.001896) | 0.151222 / 6.500664 (-6.349442) | 0.066673 / 0.075469 (-0.008796) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209453 / 1.841788 (-0.632335) | 13.649733 / 8.074308 (5.575424) | 14.065917 / 10.191392 (3.874525) | 0.128872 / 0.680424 (-0.551551) | 0.016773 / 0.534201 (-0.517428) | 0.385475 / 0.579283 (-0.193809) | 0.386208 / 0.434364 (-0.048156) | 0.475144 / 0.540337 (-0.065194) | 0.564183 / 1.386936 (-0.822753) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006629 / 0.011353 (-0.004724) | 0.004433 / 0.011008 (-0.006575) | 0.076008 / 0.038508 (0.037500) | 0.027471 / 0.023109 (0.004362) | 0.339837 / 0.275898 (0.063939) | 0.376857 / 0.323480 (0.053377) | 0.004930 / 0.007986 (-0.003055) | 0.003312 / 0.004328 (-0.001016) | 0.075070 / 0.004250 (0.070820) | 0.035897 / 0.037052 (-0.001156) | 0.342398 / 0.258489 (0.083909) | 0.380202 / 0.293841 (0.086361) | 0.031781 / 0.128546 (-0.096766) | 0.011697 / 0.075646 (-0.063950) | 0.085926 / 0.419271 (-0.333345) | 0.041599 / 0.043533 (-0.001934) | 0.343098 / 0.255139 (0.087959) | 0.371275 / 0.283200 (0.088076) | 0.090489 / 0.141683 (-0.051194) | 1.483738 / 1.452155 (0.031584) | 1.554973 / 1.492716 (0.062256) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183703 / 0.018006 (0.165697) | 0.395105 / 0.000490 (0.394616) | 0.002162 / 0.000200 (0.001963) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025432 / 0.037411 (-0.011979) | 0.101322 / 0.014526 (0.086796) | 0.107839 / 0.176557 (-0.068718) | 0.160328 / 0.737135 (-0.576807) | 0.109899 / 0.296338 (-0.186440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448001 / 0.215209 (0.232792) | 4.485321 / 2.077655 (2.407666) | 2.157064 / 1.504120 (0.652944) | 1.966141 / 1.541195 (0.424947) | 2.032808 / 1.468490 (0.564318) | 0.705684 / 4.584777 (-3.879093) | 3.359802 / 3.745712 (-0.385910) | 2.694952 / 5.269862 (-2.574910) | 1.471309 / 4.565676 (-3.094368) | 0.084185 / 0.424275 (-0.340090) | 0.012330 / 0.007607 (0.004723) | 0.554083 / 0.226044 (0.328038) | 5.569137 / 2.268929 (3.300208) | 2.586009 / 55.444624 (-52.858615) | 2.234920 / 6.876477 (-4.641557) | 2.285128 / 2.142072 (0.143056) | 0.818825 / 4.805227 (-3.986402) | 0.152604 / 6.500664 (-6.348060) | 0.067722 / 0.075469 (-0.007747) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.305571 / 1.841788 (-0.536217) | 13.687471 / 8.074308 (5.613163) | 13.305401 / 10.191392 (3.114009) | 0.140477 / 0.680424 (-0.539947) | 0.018138 / 0.534201 (-0.516063) | 0.377255 / 0.579283 (-0.202028) | 0.379522 / 0.434364 (-0.054842) | 0.458489 / 0.540337 (-0.081849) | 0.543767 / 1.386936 (-0.843169) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009606 / 0.011353 (-0.001747) | 0.006795 / 0.011008 (-0.004213) | 0.133738 / 0.038508 (0.095230) | 0.043379 / 0.023109 (0.020270) | 0.412917 / 0.275898 (0.137019) | 0.418790 / 0.323480 (0.095310) | 0.007290 / 0.007986 (-0.000696) | 0.004960 / 0.004328 (0.000632) | 0.095496 / 0.004250 (0.091246) | 0.057607 / 0.037052 (0.020555) | 0.402638 / 0.258489 (0.144149) | 0.436206 / 0.293841 (0.142365) | 0.056023 / 0.128546 (-0.072523) | 0.019909 / 0.075646 (-0.055737) | 0.463958 / 0.419271 (0.044687) | 0.064073 / 0.043533 (0.020541) | 0.398337 / 0.255139 (0.143198) | 0.421786 / 0.283200 (0.138586) | 0.131563 / 0.141683 (-0.010120) | 1.840217 / 1.452155 (0.388063) | 1.912013 / 1.492716 (0.419296) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230519 / 0.018006 (0.212513) | 0.550506 / 0.000490 (0.550017) | 0.003649 / 0.000200 (0.003449) | 0.000107 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029713 / 0.037411 (-0.007698) | 0.129913 / 0.014526 (0.115387) | 0.131543 / 0.176557 (-0.045013) | 0.203571 / 0.737135 (-0.533565) | 0.141483 / 0.296338 (-0.154856) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.626383 / 0.215209 (0.411174) | 6.193043 / 2.077655 (4.115388) | 2.442728 / 1.504120 (0.938608) | 2.079049 / 1.541195 (0.537855) | 2.117761 / 1.468490 
(0.649271) | 1.315296 / 4.584777 (-3.269481) | 5.643709 / 3.745712 (1.897997) | 5.245789 / 5.269862 (-0.024073) | 2.757442 / 4.565676 (-1.808235) | 0.151655 / 0.424275 (-0.272620) | 0.014686 / 0.007607 (0.007079) | 0.779937 / 0.226044 (0.553893) | 7.796685 / 2.268929 (5.527756) | 3.349580 / 55.444624 (-52.095045) | 2.493750 / 6.876477 (-4.382727) | 2.506200 / 2.142072 (0.364128) | 1.534964 / 4.805227 (-3.270263) | 0.260001 / 6.500664 (-6.240663) | 0.080543 / 0.075469 (0.005074) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541940 / 1.841788 (-0.299848) | 17.851935 / 8.074308 (9.777627) | 22.418859 / 10.191392 (12.227467) | 0.258602 / 0.680424 (-0.421822) | 0.027679 / 0.534201 (-0.506522) | 0.548379 / 0.579283 (-0.030904) | 0.625505 / 0.434364 (0.191141) | 0.664074 / 0.540337 (0.123737) | 0.797418 / 1.386936 (-0.589518) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009800 / 0.011353 (-0.001553) | 0.006178 / 0.011008 (-0.004830) | 0.105667 / 0.038508 (0.067159) | 0.039380 / 0.023109 (0.016271) | 0.419528 / 0.275898 (0.143630) | 0.469857 / 0.323480 (0.146377) | 0.006672 / 0.007986 (-0.001314) | 0.004745 / 0.004328 (0.000417) | 0.101647 / 0.004250 (0.097397) | 0.048531 / 0.037052 (0.011478) | 0.433364 / 0.258489 (0.174875) | 0.459719 / 0.293841 (0.165878) | 0.054291 / 0.128546 (-0.074256) | 0.020406 / 0.075646 (-0.055240) | 0.122321 / 0.419271 (-0.296951) | 0.059719 / 0.043533 (0.016186) | 0.416083 / 0.255139 (0.160944) | 0.455277 / 0.283200 (0.172077) | 0.119342 / 0.141683 (-0.022341) | 1.862544 / 1.452155 (0.410390) | 2.001428 / 1.492716 (0.508712) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.240951 / 0.018006 (0.222945) | 0.516958 / 0.000490 (0.516468) | 0.000449 / 0.000200 (0.000249) | 0.000092 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032725 / 0.037411 (-0.004686) | 0.130291 / 0.014526 (0.115765) | 0.139834 / 0.176557 (-0.036723) | 0.214995 / 0.737135 (-0.522140) | 0.150925 / 0.296338 (-0.145414) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.652062 / 0.215209 (0.436853) | 6.584447 / 2.077655 (4.506793) | 2.654838 / 1.504120 (1.150718) | 2.297209 / 1.541195 (0.756015) | 2.420394 / 1.468490 (0.951904) | 1.299285 / 4.584777 (-3.285492) | 5.605849 / 3.745712 (1.860137) | 3.166103 / 5.269862 (-2.103759) | 2.138123 / 4.565676 (-2.427554) | 0.152562 / 0.424275 (-0.271713) | 0.015499 / 0.007607 (0.007892) | 0.816300 / 0.226044 (0.590256) | 8.308746 / 2.268929 (6.039817) | 3.482982 / 55.444624 (-51.961642) | 2.689247 / 6.876477 (-4.187229) | 2.792728 / 2.142072 (0.650656) | 1.566320 / 4.805227 (-3.238907) | 0.264110 / 6.500664 (-6.236554) | 0.083652 / 0.075469 (0.008183) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.643027 / 1.841788 (-0.198760) | 18.612349 / 8.074308 (10.538041) | 19.460644 / 10.191392 (9.269252) | 0.260795 / 0.680424 (-0.419629) | 0.026050 / 0.534201 (-0.508151) | 0.539750 / 0.579283 (-0.039533) | 0.620791 / 0.434364 (0.186428) | 0.645023 / 0.540337 (0.104686) | 0.765604 / 1.386936 (-0.621332) |\n\n</details>\n</details>\n\n\n",
"ready for re-review :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006388 / 0.011353 (-0.004965) | 0.004469 / 0.011008 (-0.006540) | 0.097082 / 0.038508 (0.058573) | 0.028005 / 0.023109 (0.004895) | 0.364797 / 0.275898 (0.088899) | 0.399671 / 0.323480 (0.076191) | 0.005062 / 0.007986 (-0.002923) | 0.004580 / 0.004328 (0.000252) | 0.075670 / 0.004250 (0.071420) | 0.038328 / 0.037052 (0.001276) | 0.365948 / 0.258489 (0.107459) | 0.402631 / 0.293841 (0.108790) | 0.031378 / 0.128546 (-0.097168) | 0.011443 / 0.075646 (-0.064203) | 0.321590 / 0.419271 (-0.097682) | 0.042263 / 0.043533 (-0.001270) | 0.368238 / 0.255139 (0.113099) | 0.389928 / 0.283200 (0.106728) | 0.085203 / 0.141683 (-0.056480) | 1.462820 / 1.452155 (0.010665) | 1.529207 / 1.492716 (0.036490) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197194 / 0.018006 (0.179188) | 0.410897 / 0.000490 (0.410407) | 0.003394 / 0.000200 (0.003194) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022911 / 0.037411 (-0.014500) | 0.097012 / 0.014526 (0.082486) | 0.102247 / 0.176557 (-0.074309) | 0.163363 / 0.737135 (-0.573772) | 0.106897 / 0.296338 (-0.189441) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416303 / 0.215209 (0.201094) | 4.159325 / 2.077655 (2.081671) | 1.844893 / 1.504120 (0.340773) | 1.646131 / 1.541195 (0.104936) | 1.706763 / 1.468490 
(0.238273) | 0.699607 / 4.584777 (-3.885170) | 3.462048 / 3.745712 (-0.283664) | 1.939076 / 5.269862 (-3.330786) | 1.324744 / 4.565676 (-3.240932) | 0.082949 / 0.424275 (-0.341326) | 0.012327 / 0.007607 (0.004720) | 0.513812 / 0.226044 (0.287768) | 5.171021 / 2.268929 (2.902093) | 2.288039 / 55.444624 (-53.156585) | 1.957403 / 6.876477 (-4.919074) | 1.990060 / 2.142072 (-0.152013) | 0.805571 / 4.805227 (-3.999656) | 0.152641 / 6.500664 (-6.348023) | 0.068169 / 0.075469 (-0.007300) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.200624 / 1.841788 (-0.641164) | 13.836334 / 8.074308 (5.762026) | 14.065340 / 10.191392 (3.873948) | 0.143406 / 0.680424 (-0.537018) | 0.016709 / 0.534201 (-0.517492) | 0.380080 / 0.579283 (-0.199204) | 0.398414 / 0.434364 (-0.035950) | 0.479192 / 0.540337 (-0.061145) | 0.572508 / 1.386936 (-0.814428) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006622 / 0.011353 (-0.004731) | 0.004511 / 0.011008 (-0.006497) | 0.076454 / 0.038508 (0.037946) | 0.027431 / 0.023109 (0.004322) | 0.339041 / 0.275898 (0.063143) | 0.375691 / 0.323480 (0.052211) | 0.004854 / 0.007986 (-0.003131) | 0.004654 / 0.004328 (0.000325) | 0.075300 / 0.004250 (0.071049) | 0.036469 / 0.037052 (-0.000583) | 0.341357 / 0.258489 (0.082868) | 0.381561 / 0.293841 (0.087720) | 0.031754 / 0.128546 (-0.096792) | 0.011544 / 0.075646 (-0.064102) | 0.085956 / 0.419271 (-0.333315) | 0.041704 / 0.043533 (-0.001828) | 0.340088 / 0.255139 (0.084950) | 0.364037 / 0.283200 (0.080838) | 0.091016 / 0.141683 (-0.050667) | 1.483515 / 1.452155 (0.031360) | 1.562878 / 1.492716 (0.070162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228019 / 0.018006 (0.210013) | 0.404809 / 0.000490 (0.404320) | 0.000384 / 0.000200 (0.000184) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025230 / 0.037411 (-0.012181) | 0.099790 / 0.014526 (0.085264) | 0.107923 / 0.176557 (-0.068634) | 0.157651 / 0.737135 (-0.579484) | 0.112525 / 0.296338 (-0.183813) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440360 / 0.215209 (0.225151) | 4.387749 / 2.077655 (2.310094) | 2.077592 / 1.504120 (0.573472) | 1.872532 / 1.541195 (0.331337) | 1.941607 / 1.468490 (0.473117) | 0.699394 / 4.584777 (-3.885383) | 3.411210 / 3.745712 (-0.334502) | 1.901816 / 5.269862 (-3.368046) | 1.177042 / 4.565676 (-3.388634) | 0.083536 / 0.424275 (-0.340739) | 0.012418 / 0.007607 (0.004811) | 0.548463 / 0.226044 (0.322419) | 5.487107 / 2.268929 (3.218178) | 2.548076 / 55.444624 (-52.896548) | 2.215012 / 6.876477 (-4.661465) | 2.253472 / 2.142072 (0.111400) | 0.812925 / 4.805227 (-3.992302) | 0.152935 / 6.500664 (-6.347729) | 0.068144 / 0.075469 (-0.007325) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267914 / 1.841788 (-0.573873) | 14.015185 / 8.074308 (5.940877) | 13.153967 / 10.191392 (2.962575) | 0.140666 / 0.680424 (-0.539758) | 0.016718 / 0.534201 (-0.517483) | 0.383411 / 0.579283 (-0.195872) | 0.395424 / 0.434364 (-0.038940) | 0.466069 / 0.540337 (-0.074269) | 0.553825 / 1.386936 (-0.833111) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007463 / 0.011353 (-0.003890) | 0.005017 / 0.011008 (-0.005991) | 0.098777 / 0.038508 (0.060269) | 0.033859 / 0.023109 (0.010750) | 0.298569 / 0.275898 (0.022670) | 0.343717 / 0.323480 (0.020237) | 0.005806 / 0.007986 (-0.002180) | 0.005403 / 0.004328 (0.001074) | 0.075840 / 0.004250 (0.071590) | 0.046539 / 0.037052 (0.009487) | 0.300058 / 0.258489 (0.041569) | 0.345036 / 0.293841 (0.051195) | 0.036258 / 0.128546 (-0.092288) | 0.011992 / 0.075646 (-0.063654) | 0.334986 / 0.419271 (-0.084286) | 0.050427 / 0.043533 (0.006894) | 0.295319 / 0.255139 (0.040180) | 0.318980 / 0.283200 (0.035780) | 0.098407 / 0.141683 (-0.043276) | 1.437626 / 1.452155 (-0.014529) | 1.562548 / 1.492716 (0.069832) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231502 / 0.018006 (0.213496) | 0.441550 / 0.000490 (0.441060) | 0.005863 / 0.000200 (0.005663) | 0.000724 / 0.000054 (0.000670) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027501 / 0.037411 (-0.009911) | 0.111490 / 0.014526 (0.096964) | 0.117503 / 0.176557 (-0.059054) | 0.173849 / 0.737135 (-0.563286) | 0.124521 / 0.296338 (-0.171818) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419266 / 0.215209 (0.204057) | 4.170337 / 2.077655 (2.092683) | 2.015883 / 1.504120 (0.511763) | 1.832683 / 1.541195 (0.291488) | 1.950195 / 1.468490 
(0.481705) | 0.698150 / 4.584777 (-3.886627) | 3.775601 / 3.745712 (0.029889) | 2.094581 / 5.269862 (-3.175281) | 1.325437 / 4.565676 (-3.240240) | 0.085382 / 0.424275 (-0.338894) | 0.012151 / 0.007607 (0.004544) | 0.526441 / 0.226044 (0.300397) | 5.256124 / 2.268929 (2.987196) | 2.488408 / 55.444624 (-52.956216) | 2.157228 / 6.876477 (-4.719249) | 2.228991 / 2.142072 (0.086919) | 0.837002 / 4.805227 (-3.968225) | 0.167520 / 6.500664 (-6.333144) | 0.066435 / 0.075469 (-0.009035) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.174544 / 1.841788 (-0.667243) | 14.684207 / 8.074308 (6.609899) | 14.494676 / 10.191392 (4.303284) | 0.143423 / 0.680424 (-0.537001) | 0.017289 / 0.534201 (-0.516912) | 0.424727 / 0.579283 (-0.154556) | 0.417077 / 0.434364 (-0.017287) | 0.498955 / 0.540337 (-0.041383) | 0.584838 / 1.386936 (-0.802098) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007666 / 0.011353 (-0.003687) | 0.005269 / 0.011008 (-0.005739) | 0.073548 / 0.038508 (0.035040) | 0.033683 / 0.023109 (0.010573) | 0.342646 / 0.275898 (0.066747) | 0.380948 / 0.323480 (0.057468) | 0.005737 / 0.007986 (-0.002248) | 0.005366 / 0.004328 (0.001038) | 0.073228 / 0.004250 (0.068978) | 0.050065 / 0.037052 (0.013013) | 0.348593 / 0.258489 (0.090104) | 0.393930 / 0.293841 (0.100089) | 0.037411 / 0.128546 (-0.091135) | 0.012476 / 0.075646 (-0.063170) | 0.084884 / 0.419271 (-0.334387) | 0.049368 / 0.043533 (0.005835) | 0.343142 / 0.255139 (0.088003) | 0.362828 / 0.283200 (0.079628) | 0.102962 / 0.141683 (-0.038721) | 1.505703 / 1.452155 (0.053549) | 1.580695 / 1.492716 (0.087979) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207621 / 0.018006 (0.189615) | 0.437678 / 0.000490 (0.437188) | 0.003931 / 0.000200 (0.003731) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029079 / 0.037411 (-0.008332) | 0.108600 / 0.014526 (0.094074) | 0.124787 / 0.176557 (-0.051770) | 0.173354 / 0.737135 (-0.563781) | 0.126124 / 0.296338 (-0.170214) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427911 / 0.215209 (0.212702) | 4.254227 / 2.077655 (2.176572) | 2.052142 / 1.504120 (0.548022) | 1.857042 / 1.541195 (0.315848) | 1.965244 / 1.468490 (0.496754) | 0.707994 / 4.584777 (-3.876783) | 3.807593 / 3.745712 (0.061880) | 3.387588 / 5.269862 (-1.882274) | 1.844853 / 4.565676 (-2.720824) | 0.088548 / 0.424275 (-0.335727) | 0.012398 / 0.007607 (0.004791) | 0.565896 / 0.226044 (0.339851) | 5.228024 / 2.268929 (2.959095) | 2.467220 / 55.444624 (-52.977405) | 2.144413 / 6.876477 (-4.732064) | 2.214049 / 2.142072 (0.071977) | 0.869381 / 4.805227 (-3.935846) | 0.170991 / 6.500664 (-6.329673) | 0.064932 / 0.075469 (-0.010537) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.246661 / 1.841788 (-0.595127) | 14.902743 / 8.074308 (6.828435) | 13.264294 / 10.191392 (3.072902) | 0.165328 / 0.680424 (-0.515095) | 0.017567 / 0.534201 (-0.516634) | 0.425491 / 0.579283 (-0.153792) | 0.427327 / 0.434364 (-0.007037) | 0.526475 / 0.540337 (-0.013862) | 0.627309 / 1.386936 (-0.759627) |\n\n</details>\n</details>\n\n\n"
] | 2023-03-07T13:31:53Z
| 2023-03-08T14:01:27Z
| 2023-03-08T13:54:22Z
|
MEMBER
| null | null | null |
Fix the following test failures for Pandas 2.0 (tested [here](https://github.com/huggingface/datasets/actions/runs/4346221280/jobs/7592010397) with pandas==2.0.0.rc0):
```
=========================== short test summary info ============================
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_in_memory - ImportError: Unable to find a usable engine; tried using: 'pyarrow', 'fastparquet'.
A suitable version of pyarrow or fastparquet is required for parquet support.
Trying to import the above resulted in these errors:
- Pandas requires version '7.0.0' or newer of 'pyarrow' (version '6.0.1' currently installed).
- Missing optional dependency 'fastparquet'. fastparquet is required for parquet support. Use pip or conda to install fastparquet.
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_on_disk - ImportError: Unable to find a usable engine; tried using: 'pyarrow', 'fastparquet'.
A suitable version of pyarrow or fastparquet is required for parquet support.
Trying to import the above resulted in these errors:
- Pandas requires version '7.0.0' or newer of 'pyarrow' (version '6.0.1' currently installed).
- Missing optional dependency 'fastparquet'. fastparquet is required for parquet support. Use pip or conda to install fastparquet.
===== 2 failed, 2137 passed, 18 skipped, 32 warnings in 212.76s (0:03:32) ======
```
EDIT: also for performance - with PyArrow 8.0 we can use `.to_reader()`
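For reference, a minimal sketch of what `.to_reader()` buys us, assuming PyArrow >= 8.0 (the toy table below is hypothetical):
```python
import pyarrow as pa

# Toy stand-in for a dataset's Arrow table.
table = pa.table({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})

# Table.to_reader() (PyArrow >= 8.0) exposes the table as a stream of
# RecordBatches, so consumers can iterate batch by batch instead of
# materializing one big copy of the data.
reader = table.to_reader(max_chunksize=2)
for batch in reader:
    print(batch.num_rows)  # 2, then 2
```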
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5620/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5620/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/5620.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5620",
"merged_at": "2023-03-08T13:54:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5620.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5620"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6636
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/6636/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/6636/comments
|
https://api.github.com/repos/huggingface/datasets/issues/6636/events
|
https://github.com/huggingface/datasets/pull/6636
| 2,110,781,097
|
PR_kwDODunzps5lm4zI
| 6,636
|
Faster column validation and reordering
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/psmyth94",
"id": 11325244,
"login": "psmyth94",
"node_id": "MDQ6VXNlcjExMzI1MjQ0",
"organizations_url": "https://api.github.com/users/psmyth94/orgs",
"received_events_url": "https://api.github.com/users/psmyth94/received_events",
"repos_url": "https://api.github.com/users/psmyth94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/psmyth94",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6636). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks @mariosasko, I made the changes. However, I did some tests with `map` and I still saw that it took ~3.5 minutes per batch on 6000 features when using `dataset.map(lambda x: x, batched=True)`. From the profile, the culprits were mainly with `ArrowWriter.write_batch` and `ArrowWriter._build_writer`. The slow down from `_build_writer` is due to updating existing features with the inferred ones. I don't think this can be optimized any further, but fortunately, I can avoid this by setting the `features` in `map`. On the other hand, `write_batch` selects cols based on intersection and difference between schema names and example keys using two for loops. The same exists in `ArrowWriter.write_examples_on_file`. Optimizing the column selection using set operations effectively brings it from 3.5 minutes per batch down to 6 seconds per batch. Can we add these changes along with this PR?\r\n\r\nEdit: Ah just realized you can avoid the issue with inferring features altogether when you set the format to arrow (or pandas).",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004990 / 0.011353 (-0.006363) | 0.003138 / 0.011008 (-0.007870) | 0.062368 / 0.038508 (0.023860) | 0.028634 / 0.023109 (0.005524) | 0.241297 / 0.275898 (-0.034601) | 0.264433 / 0.323480 (-0.059047) | 0.003133 / 0.007986 (-0.004852) | 0.003444 / 0.004328 (-0.000885) | 0.048522 / 0.004250 (0.044271) | 0.043700 / 0.037052 (0.006648) | 0.257054 / 0.258489 (-0.001435) | 0.277551 / 0.293841 (-0.016290) | 0.027132 / 0.128546 (-0.101414) | 0.010395 / 0.075646 (-0.065251) | 0.208003 / 0.419271 (-0.211269) | 0.035814 / 0.043533 (-0.007719) | 0.250098 / 0.255139 (-0.005041) | 0.266726 / 0.283200 (-0.016474) | 0.018424 / 0.141683 (-0.123259) | 1.129242 / 1.452155 (-0.322912) | 1.167674 / 1.492716 (-0.325042) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091808 / 0.018006 (0.073802) | 0.298726 / 0.000490 (0.298236) | 0.000219 / 0.000200 (0.000019) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019119 / 0.037411 (-0.018292) | 0.061969 / 0.014526 (0.047443) | 0.073392 / 0.176557 (-0.103165) | 0.119460 / 0.737135 (-0.617675) | 0.074072 / 0.296338 (-0.222266) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281435 / 0.215209 (0.066226) | 2.702094 / 2.077655 (0.624439) | 1.411541 / 1.504120 (-0.092579) | 1.284084 / 1.541195 (-0.257111) | 1.302638 / 
1.468490 (-0.165852) | 0.562420 / 4.584777 (-4.022357) | 2.364890 / 3.745712 (-1.380822) | 2.744033 / 5.269862 (-2.525828) | 1.699000 / 4.565676 (-2.866677) | 0.062315 / 0.424275 (-0.361961) | 0.004982 / 0.007607 (-0.002625) | 0.334385 / 0.226044 (0.108341) | 3.203268 / 2.268929 (0.934339) | 1.766998 / 55.444624 (-53.677627) | 1.497164 / 6.876477 (-5.379313) | 1.509996 / 2.142072 (-0.632077) | 0.633014 / 4.805227 (-4.172213) | 0.115317 / 6.500664 (-6.385347) | 0.041120 / 0.075469 (-0.034349) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965877 / 1.841788 (-0.875911) | 11.219909 / 8.074308 (3.145601) | 9.333822 / 10.191392 (-0.857570) | 0.136482 / 0.680424 (-0.543941) | 0.013632 / 0.534201 (-0.520569) | 0.287251 / 0.579283 (-0.292032) | 0.262786 / 0.434364 (-0.171578) | 0.322893 / 0.540337 (-0.217444) | 0.418180 / 1.386936 (-0.968756) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005444 / 0.011353 (-0.005909) | 0.003147 / 0.011008 (-0.007862) | 0.049242 / 0.038508 (0.010734) | 0.030944 / 0.023109 (0.007834) | 0.281901 / 0.275898 (0.006003) | 0.303820 / 0.323480 (-0.019660) | 0.004326 / 0.007986 (-0.003659) | 0.002696 / 0.004328 (-0.001632) | 0.048306 / 0.004250 (0.044055) | 0.044145 / 0.037052 (0.007093) | 0.297253 / 0.258489 (0.038764) | 0.324062 / 0.293841 (0.030221) | 0.046724 / 0.128546 (-0.081823) | 0.010079 / 0.075646 (-0.065567) | 0.057635 / 0.419271 (-0.361636) | 0.033621 / 0.043533 (-0.009912) | 0.282303 / 0.255139 (0.027164) | 0.300761 / 0.283200 (0.017561) | 0.017116 / 0.141683 (-0.124567) | 1.156519 / 1.452155 (-0.295636) | 1.216087 / 1.492716 (-0.276630) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093011 / 0.018006 (0.075005) | 0.301310 / 0.000490 (0.300820) | 0.000223 / 0.000200 (0.000023) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023112 / 0.037411 (-0.014299) | 0.075192 / 0.014526 (0.060666) | 0.086213 / 0.176557 (-0.090343) | 0.125853 / 0.737135 (-0.611282) | 0.087754 / 0.296338 (-0.208585) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301095 / 0.215209 (0.085886) | 2.911769 / 2.077655 (0.834114) | 1.614708 / 1.504120 (0.110588) | 1.494497 / 1.541195 (-0.046698) | 1.506978 / 1.468490 (0.038488) | 0.572743 / 4.584777 (-4.012034) | 2.417142 / 3.745712 (-1.328570) | 2.755338 / 5.269862 (-2.514523) | 1.711026 / 4.565676 (-2.854650) | 0.062732 / 0.424275 (-0.361543) | 0.005031 / 0.007607 (-0.002576) | 0.352343 / 0.226044 (0.126298) | 3.465183 / 2.268929 (1.196255) | 1.958795 / 55.444624 (-53.485829) | 1.682239 / 6.876477 (-5.194238) | 1.688897 / 2.142072 (-0.453176) | 0.643311 / 4.805227 (-4.161916) | 0.115426 / 6.500664 (-6.385238) | 0.040338 / 0.075469 (-0.035131) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.005322 / 1.841788 (-0.836466) | 11.779380 / 8.074308 (3.705072) | 10.041574 / 10.191392 (-0.149818) | 0.127617 / 0.680424 (-0.552807) | 0.015840 / 0.534201 (-0.518361) | 0.286905 / 0.579283 (-0.292378) | 0.275180 / 0.434364 (-0.159183) | 0.332498 / 0.540337 (-0.207840) | 0.410719 / 1.386936 (-0.976217) |\n\n</details>\n</details>\n\n\n"
] | 2024-01-31T19:08:28Z
| 2024-02-07T19:39:00Z
| 2024-02-06T23:03:38Z
|
CONTRIBUTOR
| null | null | null |
I work with bioinformatics data, and these tables often have thousands or even tens of thousands of features. The tables are also accompanied by metadata that I do not want to pass to the model. When I perform `set_format('pt', columns=large_column_list)`, it can take several minutes to finish. The culprit is the check `any(col not in self._data.column_names for col in columns)`. Replacing it with `set(columns) - set(self._data.column_names)` is more efficient.
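To illustrate the difference, a minimal sketch with hypothetical feature names:
```python
# Hypothetical stand-ins for a table with tens of thousands of features.
column_names = [f"feature_{i}" for i in range(20_000)]
columns = column_names[:6_000]

# Old check: `col not in list` rescans the whole list for every column,
# i.e. roughly len(columns) * len(column_names) comparisons in the worst case.
slow = any(col not in column_names for col in columns)

# New check: build the sets once; afterwards each lookup is near constant time.
fast = bool(set(columns) - set(column_names))

assert slow == fast  # both report whether any requested column is unknown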
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6636/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/6636/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/6636.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6636",
"merged_at": "2024-02-06T23:03:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6636.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6636"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5728
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5728/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5728/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5728/events
|
https://github.com/huggingface/datasets/issues/5728
| 1,661,925,932
|
I_kwDODunzps5jDvos
| 5,728
|
The order of data split names is nondeterministic
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] |
closed
| false
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
[
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null |
[] | 2023-04-11T07:31:25Z
| 2023-04-26T15:05:13Z
| 2023-04-26T15:05:13Z
|
MEMBER
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718
```
FAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random']
At index 0 diff: 'random' != 'train'
Full diff:
- ['train', 'random']
+ ['random', 'train']
```
I have checked locally and found out that the data split order is nondeterministic.
This is caused by the use of `set` for sharded splits.
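A minimal sketch of the failure mode, plus one order-preserving way to deduplicate (the split names come from the CI log above; the fix shown is illustrative, not necessarily the one that was merged):
```python
# String hashing is randomized per Python process (PYTHONHASHSEED), so
# iterating a set of split names can yield a different order on each run.
shard_names = ["train-00000", "train-00001", "random-00000"]
splits = list({name.split("-")[0] for name in shard_names})
print(splits)  # sometimes ['train', 'random'], sometimes ['random', 'train']

# dict.fromkeys deduplicates while keeping first-seen order, so the
# resulting split order is deterministic: ['train', 'random'].
ordered_splits = list(dict.fromkeys(name.split("-")[0] for name in shard_names))
print(ordered_splits)
```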
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5728/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5728/timeline
| null |
completed
| null | null |
https://api.github.com/repos/huggingface/datasets/issues/5838
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/5838/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/5838/comments
|
https://api.github.com/repos/huggingface/datasets/issues/5838/events
|
https://github.com/huggingface/datasets/issues/5838
| 1,703,210,848
|
I_kwDODunzps5lhO9g
| 5,838
|
Streaming support for `load_from_disk`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5437792?v=4",
"events_url": "https://api.github.com/users/Nilabhra/events{/privacy}",
"followers_url": "https://api.github.com/users/Nilabhra/followers",
"following_url": "https://api.github.com/users/Nilabhra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilabhra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nilabhra",
"id": 5437792,
"login": "Nilabhra",
"node_id": "MDQ6VXNlcjU0Mzc3OTI=",
"organizations_url": "https://api.github.com/users/Nilabhra/orgs",
"received_events_url": "https://api.github.com/users/Nilabhra/received_events",
"repos_url": "https://api.github.com/users/Nilabhra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nilabhra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilabhra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nilabhra",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[
"As the name says, `load_from_disk` load the data from your disk. If the data is hosted on S3, it is first downloaded locally and then loaded from your disk.\r\n\r\nThere is a discussion on streaming data from S3 here though: #5281 ",
"@lhoestq \r\nThanks for your comment. I have checked out the discussion before and attempted at replicating the mentioned changes in the main branch (#5580). What I found was that if a dataset is saved using `save_to_disk`, it cannot be read by `load_dataset`. The error message asks me to to use `load_from_disk` instead. What would be the correct way of saving the data in this scenario?",
"Using `push_to_hub` you can save the dataset on the HF Hub as parquet files, and reload it / stream it using `load_dataset` :)\r\n\r\nIf you want to save your dataset somewhere else you can use `.to_parquet` to get a parquet file. If your dataset is big it's usually recommended to shard it into multi parquet files (around 1GB each).",
"@lhoestq \r\nThanks for the explanation. Appreciate it. I'll try this out.",
"@lhoestq\r\nI tried the method you mentioned. This the current scenario I'm facing:\r\n\r\n- The parquet file can be read from disk and streaming can be enabled.\r\n- The parquet file can be read from `s3` (local MinIO).\r\n- When `streaming=True` is enabled for `s3`, I get the error mentioned below:\r\n\r\n```\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```\r\n\r\nDoes this mean there is a bug in the main branch?",
"Streaming from S3 is still experimental, there might be a few bugs unfortunately.\r\n\r\nCan you share the full stack trace ?",
"@lhoestq \r\nSure, here you go:\r\n\r\n```python\r\nTypeError Traceback (most recent call last)\r\nCell In[8], line 1\r\n----> 1 dataset = load_dataset(\"parquet\", data_files=[\"s3://<bucket name>/<data folder>/data-parquet\"], storage_options=fs.storage_options, streaming=True)\r\n\r\nFile ~/.../datasets/src/datasets/load.py:1790, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile ~/.../datasets/src/datasets/builder.py:1264, in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1257 dl_manager = StreamingDownloadManager(\r\n 1258 base_path=base_path or self.base_path,\r\n 1259 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1260 dataset_name=self.name,\r\n 1261 data_dir=self.config.data_dir,\r\n 1262 )\r\n 1263 self._check_manual_download(dl_manager)\r\n-> 1264 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1265 # By default, return all splits\r\n 1266 if split is None:\r\n\r\nFile ~/.../datasets/src/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager)\r\n 32 if not self.config.data_files:\r\n 33 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 35 if isinstance(data_files, (str, list, tuple)):\r\n 36 files = data_files\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1087, in StreamingDownloadManager.download_and_extract(self, url_or_urls)\r\n 1069 def download_and_extract(self, url_or_urls):\r\n 1070 \"\"\"Prepare given `url_or_urls` for streaming (add extraction protocol).\r\n 1071 \r\n 1072 This is the lazy version of `DownloadManager.download_and_extract` for streaming.\r\n (...)\r\n 1085 url(s): (`str` or `list` or `dict`), URL(s) to stream data from matching the given input `url_or_urls`.\r\n 1086 \"\"\"\r\n-> 1087 return self.extract(self.download(url_or_urls))\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1039, in StreamingDownloadManager.extract(self, url_or_urls)\r\n 1020 def extract(self, url_or_urls):\r\n 1021 \"\"\"Add extraction protocol for given url(s) for streaming.\r\n 1022 \r\n 1023 This is the lazy version of `DownloadManager.extract` for streaming.\r\n (...)\r\n 1037 ```\r\n 1038 \"\"\"\r\n-> 1039 urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)\r\n 1040 return urlpaths\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:443, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 443 mapped = [\r\n 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, 
disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:444, in <listcomp>(.0)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 443 mapped = [\r\n--> 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in _single_map_nested(args)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in <listcomp>(.0)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:346, in _single_map_nested(args)\r\n 344 # Singleton first to spare some computation\r\n 345 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 346 return function(data_struct)\r\n 348 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 349 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1044, in StreamingDownloadManager._extract(self, urlpath)\r\n 1042 def _extract(self, urlpath: str) -> str:\r\n 1043 urlpath = str(urlpath)\r\n-> 1044 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n 1045 # get inner file: zip://train-00000.json.gz::https://foo.bar/data.zip -> zip://train-00000.json.gz\r\n 1046 path = urlpath.split(\"::\")[0]\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:433, in _get_extraction_protocol(urlpath, use_auth_token)\r\n 431 else:\r\n 432 urlpath, kwargs = urlpath, {}\r\n--> 433 with fsspec.open(urlpath, **kwargs) as f:\r\n 434 return _get_extraction_protocol_with_magic_number(f)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/core.py:102, in OpenFile.__enter__(self)\r\n 99 def __enter__(self):\r\n 100 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 102 f = self.fs.open(self.path, mode=mode)\r\n 104 self.fobjects = [f]\r\n 106 if self.compression is not None:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1199, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1197 else:\r\n 1198 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1199 f = self._open(\r\n 1200 path,\r\n 1201 mode=mode,\r\n 1202 block_size=block_size,\r\n 1203 autocommit=ac,\r\n 1204 cache_options=cache_options,\r\n 1205 **kwargs,\r\n 1206 )\r\n 1207 if compression is not None:\r\n 1208 from fsspec.compression import compr\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:659, in S3FileSystem._open(self, path, mode, block_size, acl, version_id, fill_cache, cache_type, autocommit, requester_pays, cache_options, **kwargs)\r\n 656 if cache_type is None:\r\n 657 
cache_type = self.default_cache_type\r\n--> 659 return S3File(\r\n 660 self,\r\n 661 path,\r\n 662 mode,\r\n 663 block_size=block_size,\r\n 664 acl=acl,\r\n 665 version_id=version_id,\r\n 666 fill_cache=fill_cache,\r\n 667 s3_additional_kwargs=kw,\r\n 668 cache_type=cache_type,\r\n 669 autocommit=autocommit,\r\n 670 requester_pays=requester_pays,\r\n 671 cache_options=cache_options,\r\n 672 )\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:2043, in S3File.__init__(self, s3, path, mode, block_size, acl, version_id, fill_cache, s3_additional_kwargs, autocommit, cache_type, requester_pays, cache_options)\r\n 2041 self.details = s3.info(path)\r\n 2042 self.version_id = self.details.get(\"VersionId\")\r\n-> 2043 super().__init__(\r\n 2044 s3,\r\n 2045 path,\r\n 2046 mode,\r\n 2047 block_size,\r\n 2048 autocommit=autocommit,\r\n 2049 cache_type=cache_type,\r\n 2050 cache_options=cache_options,\r\n 2051 )\r\n 2052 self.s3 = self.fs # compatibility\r\n 2054 # when not using autocommit we want to have transactional state to manage\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1555, in AbstractBufferedFile.__init__(self, fs, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 1553 self.size = size\r\n 1554 else:\r\n-> 1555 self.size = self.details[\"size\"]\r\n 1556 self.cache = caches[cache_type](\r\n 1557 self.blocksize, self._fetch_range, self.size, **cache_options\r\n 1558 )\r\n 1559 else:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1568, in AbstractBufferedFile.details(self)\r\n 1565 @property\r\n 1566 def details(self):\r\n 1567 if self._details is None:\r\n-> 1568 self._details = self.fs.info(self.path)\r\n 1569 return self._details\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:115, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def wrapper(*args, **kwargs):\r\n 114 self = obj or args[0]\r\n--> 115 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:100, in sync(loop, func, timeout, *args, **kwargs)\r\n 98 raise FSTimeoutError from return_result\r\n 99 elif isinstance(return_result, BaseException):\r\n--> 100 raise return_result\r\n 101 else:\r\n 102 return return_result\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:55, in _runner(event, coro, result, timeout)\r\n 53 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 54 try:\r\n---> 55 result[0] = await coro\r\n 56 except Exception as ex:\r\n 57 result[0] = ex\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:1248, in S3FileSystem._info(self, path, bucket, key, refresh, version_id)\r\n 1246 if key:\r\n 1247 try:\r\n-> 1248 out = await self._call_s3(\r\n 1249 \"head_object\",\r\n 1250 self.kwargs,\r\n 1251 Bucket=bucket,\r\n 1252 Key=key,\r\n 1253 **version_id_kw(version_id),\r\n 1254 **self.req_kw,\r\n 1255 )\r\n 1256 return {\r\n 1257 \"ETag\": out.get(\"ETag\", \"\"),\r\n 1258 \"LastModified\": out[\"LastModified\"],\r\n (...)\r\n 1264 \"ContentType\": out.get(\"ContentType\"),\r\n 1265 }\r\n 1266 except FileNotFoundError:\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:341, in S3FileSystem._call_s3(self, method, *akwarglist, **kwargs)\r\n 340 async def _call_s3(self, method, *akwarglist, **kwargs):\r\n--> 341 await self.set_session()\r\n 342 s3 = await self.get_s3(kwargs.get(\"Bucket\"))\r\n 343 method = getattr(s3, method)\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, 
refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```",
"Is `\"data-parquet\"` a file ? In `data_files` you should pass the paths to the parquet files (not to a directory). Glob patterns are not supported yet for S3 URLs.\r\n\r\nThe bug seems to happen because your provided data file has no extension. Because of that it tries to infer it from the file content, but fails because `_get_extraction_protocol` doesn't support S3 URLs yet.\r\n\r\n",
"@lhoestq \r\nThank you for your answer. Saving the file with `.parquet` extension solved the issue! This is really great! Really appreciate all the help! \r\n\r\nLet me know if I should close the issue or feel free to close it if you want.",
"Cool ! I'm glad it worked out :)\r\n\r\nSure feel free to close the issue, since the original question about streaming with load_from_disk has been answered anyway",
"> As the name says, `load_from_disk` load the data from your disk. If the data is hosted on S3, it is first downloaded locally and then loaded from your disk.\r\n> \r\n> There is a discussion on streaming data from S3 here though: #5281\r\n\r\nHi @lhoestq,\r\n\r\nThanks for your answer here! I would like to know if it is possible to use `load_from_disk` from S3 without downloading it locally. For now my dataset is quite large, and my local machine doesn't have such big storage.",
"Hi ! Have you considered hosting your dataset on HF instead ? This way you can use `load_dataset` with `streaming=True` (which is not available in load_from_disk which is for memory mapping Arrow files on disk)"
] | 2023-05-10T06:25:22Z
| 2024-10-28T14:19:44Z
| 2023-05-12T09:37:45Z
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Support in `load_from_disk` for streaming datasets stored in object stores.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, these datasets are very large, and being able to stream them from the buckets becomes essential.
### Your contribution
I'd be happy to contribute this feature if I could get some guidance on how to do so.
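For context, the workaround suggested by the maintainers in the comments above, sketched with hypothetical bucket names and credentials (streaming from S3 was experimental at the time, so treat this as illustrative rather than guaranteed):
```python
from datasets import load_dataset

# Hypothetical MinIO/S3 credentials and endpoint.
storage_options = {
    "key": "<access-key>",
    "secret": "<secret-key>",
    "client_kwargs": {"endpoint_url": "http://localhost:9000"},
}

# The files need a .parquet extension so the streaming download manager
# can detect the format without fetching the file content.
ds = load_dataset(
    "parquet",
    data_files=["s3://my-bucket/data/shard-00000.parquet"],
    storage_options=storage_options,
    streaming=True,
)

for example in ds["train"]:
    print(example)
    break
```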
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5437792?v=4",
"events_url": "https://api.github.com/users/Nilabhra/events{/privacy}",
"followers_url": "https://api.github.com/users/Nilabhra/followers",
"following_url": "https://api.github.com/users/Nilabhra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilabhra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nilabhra",
"id": 5437792,
"login": "Nilabhra",
"node_id": "MDQ6VXNlcjU0Mzc3OTI=",
"organizations_url": "https://api.github.com/users/Nilabhra/orgs",
"received_events_url": "https://api.github.com/users/Nilabhra/received_events",
"repos_url": "https://api.github.com/users/Nilabhra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nilabhra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilabhra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nilabhra",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5838/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/5838/timeline
| null |
completed
| null | null |