| Column | Type | Range / values |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | lengths 58 to 61 |
| html_url | string | lengths 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | lengths 1 to 290 |
| state | string | 2 values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s], nullable | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | lengths 3 to 26 |
| labels | list | lengths 0 to 4 |
| body | string, nullable | lengths 0 to 228k |
| is_pull_request | bool | 2 classes |
3,156,136,624
https://api.github.com/repos/huggingface/datasets/issues/7624
https://github.com/huggingface/datasets/issues/7624
7,624
#Dataset Make "image" column appear first in dataset preview UI
closed
2
2025-06-18T09:25:19
2025-06-20T07:46:43
2025-06-20T07:46:43
jcerveto
[]
Hi! #Dataset I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub. However, at the moment, the `"image"` column is not the firstβ€”in fact, it appears last, which is not ideal for the presentation I’d like to achieve. I have a couple of questions: Is there a way to force the dataset card to display the `"image"` column first? Is there currently any way to control or influence the column order in the dataset preview UI? Does the order of keys in the .jsonl file or the features argument affect the display order? Thanks again for your time and help! :blush:
false
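For the column-ordering question in #7624 above, a minimal workaround sketch, assuming the preview UI follows the order of the dataset's features (the issue does not confirm this); the file name and repo id are placeholders, and casting the column to an `Image` feature is omitted:

```python
from datasets import load_dataset

# Load the JSONL data, then reorder so "image" comes first.
ds = load_dataset("json", data_files="data.jsonl", split="train")
ordered = ["image"] + [c for c in ds.column_names if c != "image"]
ds = ds.select_columns(ordered)  # returns a dataset whose features follow `ordered`
ds.push_to_hub("my-org/my-dataset")
```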
3,154,519,684
https://api.github.com/repos/huggingface/datasets/issues/7623
https://github.com/huggingface/datasets/pull/7623
7,623
fix: raise error in FolderBasedBuilder when data_dir and data_files are missing
closed
2
2025-06-17T19:16:34
2025-06-18T14:18:41
2025-06-18T14:18:41
ArjunJagdale
[]
### Related Issues/PRs Fixes #6152 --- ### What changes are proposed in this pull request? This PR adds a dedicated validation check in the `_info()` method of the `FolderBasedBuilder` class to ensure that users provide either `data_dir` or `data_files` when loading folder-based datasets (such as `audiofolder`, `imagefolder`, etc.). --- ### Why this change? Previously, when calling: ```python load_dataset("audiofolder") ```` without specifying `data_dir` or `data_files`, the loader would silently fallback to the **current working directory**, leading to: * Long loading times * Unexpected behavior (e.g., scanning unrelated files) This behavior was discussed in issue #6152. As suggested by maintainers, the fix has now been implemented directly inside the `FolderBasedBuilder._info()` method β€” keeping the logic localized to the specific builder instead of a generic loader function. --- ### How is this PR tested? * βœ… Manually tested by calling `load_dataset("audiofolder")` with no `data_dir` or `data_files` β†’ a `ValueError` is now raised early. * βœ… Existing functionality (with valid input) remains unaffected. --- ### Does this PR require documentation update? * [x] No --- ### Release Notes #### Is this a user-facing change? * [x] Yes > Folder-based datasets now raise an explicit error if neither `data_dir` nor `data_files` are specified, preventing unintended fallback to the current working directory. --- #### What component(s) does this PR affect? * [x] `area/datasets` * [x] `area/load` --- <a name="release-note-category"></a> #### How should the PR be classified? * [x] `rn/bug-fix` - A user-facing bug fix --- #### Should this be included in the next patch release? * [x] Yes
true
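A minimal standalone sketch of the kind of validation described in #7623 (not the actual PR diff); the function name is hypothetical and the parameter names mirror the builder config attributes mentioned above:

```python
from typing import Optional, Union

def validate_folder_builder_inputs(
    data_dir: Optional[str],
    data_files: Optional[Union[str, list, dict]],
) -> None:
    # Mirror of the guard described above: refuse to silently scan the CWD.
    if data_dir is None and data_files is None:
        raise ValueError(
            "FolderBasedBuilder requires either `data_dir` or `data_files`; "
            "refusing to fall back to the current working directory."
        )

validate_folder_builder_inputs("path/to/audio", None)   # passes
# validate_folder_builder_inputs(None, None)            # would raise ValueError
```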
3,154,398,557
https://api.github.com/repos/huggingface/datasets/issues/7622
https://github.com/huggingface/datasets/pull/7622
7,622
Guard against duplicate builder_kwargs/config_kwargs in load_dataset_…
closed
1
2025-06-17T18:28:35
2025-07-23T14:06:20
2025-07-23T14:06:20
Shohail-Ismail
[]
…builder (#4910 ) ### What does this PR do? Fixes edge case in `load_dataset_builder` by raising a `TypeError` if the same key exists in both `builder_kwargs` and `config_kwargs`. ### Implementation details - Added a guard clause in `load_dataset_builder` to detect duplicate keys between `builder_kwargs` and `config_kwargs` - Wrote a unit test in `tests/test_load_duplicate_keys.py` to verify the exception is raised correctly ### Fixes Closes #4910 ### Reviewers @zach-huggingface @SunMarc Would appreciate your review if you have time - thanks!
true
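A small sketch of the duplicate-key guard described in #7622 (illustrative only, not the PR's code; the helper name is hypothetical):

```python
def check_no_duplicate_keys(builder_kwargs: dict, config_kwargs: dict) -> None:
    # Raise early if the same key would be passed twice to the builder.
    duplicates = set(builder_kwargs) & set(config_kwargs)
    if duplicates:
        raise TypeError(
            f"Duplicate keys in builder_kwargs and config_kwargs: {sorted(duplicates)}"
        )

check_no_duplicate_keys({"name": "default"}, {"split": "train"})  # OK
# check_no_duplicate_keys({"name": "a"}, {"name": "b"})           # raises TypeError
```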
3,153,780,963
https://api.github.com/repos/huggingface/datasets/issues/7621
https://github.com/huggingface/datasets/pull/7621
7,621
minor docs data aug
closed
1
2025-06-17T14:46:57
2025-06-17T14:50:28
2025-06-17T14:47:11
lhoestq
[]
null
true
3,153,565,183
https://api.github.com/repos/huggingface/datasets/issues/7620
https://github.com/huggingface/datasets/pull/7620
7,620
Fixes in docs
closed
1
2025-06-17T13:41:54
2025-06-17T13:58:26
2025-06-17T13:58:24
lhoestq
[]
before release 4.0 (I also did minor improvements to `features` to not show their `id=None` in their `__repr__()`)
true
3,153,058,517
https://api.github.com/repos/huggingface/datasets/issues/7619
https://github.com/huggingface/datasets/issues/7619
7,619
`from_list` fails while `from_generator` works for large datasets
open
4
2025-06-17T10:58:55
2025-06-29T16:34:44
null
abdulfatir
[]
### Describe the bug I am constructing a large time series dataset and observed that first constructing a list of entries and then using `Dataset.from_list` led to a crash as the number of items became large. However, this is not a problem when using `Dataset.from_generator`. ### Steps to reproduce the bug #### Snippet A (crashes) ```py from tqdm.auto import tqdm import numpy as np import datasets def data_generator(): for i in tqdm(range(10_000_000)): length = np.random.randint(2048) series = np.random.rand(length) yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")} data_list = list(data_generator()) ds = datasets.Dataset.from_list(data_list) ``` The last line crashes with ``` ArrowInvalid: Value 2147483761 too large to fit in C integer type ``` #### Snippet B (works) ```py from tqdm.auto import tqdm import numpy as np import datasets def data_generator(): for i in tqdm(range(10_000_000)): length = np.random.randint(2048) series = np.random.rand(length) yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")} ds = datasets.Dataset.from_generator(data_generator) ``` ### Expected behavior I expected both the approaches to work or to fail similarly. ### Environment info ``` - `datasets` version: 3.6.0 - Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35 - Python version: 3.11.11 - `huggingface_hub` version: 0.32.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0 ```
false
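The error in #7619 suggests a 32-bit Arrow list-offset overflow when the whole list is converted at once. One possible workaround (an untested assumption, not a confirmed fix) is to convert in chunks so no single Arrow array exceeds that limit, then concatenate:

```python
import datasets

# `data_list` is the list built in Snippet A above.
chunk_size = 100_000
chunks = [
    datasets.Dataset.from_list(data_list[i : i + chunk_size])
    for i in range(0, len(data_list), chunk_size)
]
ds = datasets.concatenate_datasets(chunks)  # columns stay chunked, avoiding one huge array
```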
3,148,912,897
https://api.github.com/repos/huggingface/datasets/issues/7618
https://github.com/huggingface/datasets/pull/7618
7,618
fix: raise error when folder-based datasets are loaded without data_dir or data_files
open
1
2025-06-16T07:43:59
2025-06-16T12:13:26
null
ArjunJagdale
[]
### Related Issues/PRs <!-- Uncomment 'Resolve' if this PR can close the linked items. --> <!-- Resolve --> #6152 --- ### What changes are proposed in this pull request? This PR adds an early validation step for folder-based datasets (like `audiofolder`) to prevent silent fallback behavior. **Before this fix**: - When `data_dir` or `data_files` were not provided, the loader defaulted to the current working directory. - This caused unexpected behavior like: - Long loading times - Scanning unintended local files **Now**: - If both `data_dir` and `data_files` are missing, a `ValueError` is raised early with a helpful message. --- ### How is this PR tested? - [x] Manual test via `load_dataset("audiofolder")` with missing `data_dir` - [ ] Existing unit tests (should not break any) - [ ] New tests (if needed, maintainers can guide) --- ### Does this PR require documentation update? - [x] No. You can skip the rest of this section. --- ### Release Notes #### Is this a user-facing change? - [x] Yes. Give a description of this change to be included in the release notes for users. > Adds early error handling for folder-based datasets when neither `data_dir` nor `data_files` is specified, avoiding unintended resolution to the current directory. #### What component(s), interfaces, languages, and integrations does this PR affect? Components: - [x] `area/datasets` - [x] `area/load` --- <a name="release-note-category"></a> #### How should the PR be classified in the release notes? Choose one: - [x] `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes --- #### Should this PR be included in the next patch release? - [x] Yes (this PR will be cherry-picked and included in the next patch release)
true
3,148,102,085
https://api.github.com/repos/huggingface/datasets/issues/7617
https://github.com/huggingface/datasets/issues/7617
7,617
Unwanted column padding in nested lists of dicts
closed
1
2025-06-15T22:06:17
2025-06-16T13:43:31
2025-06-16T13:43:31
qgallouedec
[]
```python from datasets import Dataset dataset = Dataset.from_dict({ "messages": [ [ {"a": "...",}, {"b": "...",}, ], ] }) print(dataset[0]) ``` What I get: ``` {'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]} ``` What I want: ``` {'messages': [{'a': '...'}, {'b': '...'}]} ``` Is there an easy way to automatically remove these auto-filled null/none values? If not, I probably need a recursive none exclusion function, don't I? Datasets 3.6.0
false
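A small recursive helper along the lines the issue asks about, dropping the auto-filled `None` values after reading a row (an illustration, not a datasets API):

```python
def drop_nones(value):
    # Recursively remove dict entries whose value is None.
    if isinstance(value, dict):
        return {k: drop_nones(v) for k, v in value.items() if v is not None}
    if isinstance(value, list):
        return [drop_nones(v) for v in value]
    return value

row = {"messages": [{"a": "...", "b": None}, {"a": None, "b": "..."}]}
print(drop_nones(row))  # {'messages': [{'a': '...'}, {'b': '...'}]}
```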
3,144,506,665
https://api.github.com/repos/huggingface/datasets/issues/7616
https://github.com/huggingface/datasets/pull/7616
7,616
Torchcodec decoding
closed
5
2025-06-13T19:06:07
2025-06-19T18:25:49
2025-06-19T18:25:49
TyTodd
[]
Closes #7607 ## New signatures ### Audio ```python Audio(sampling_rate: Optional[int] = None, mono: bool = True, decode: bool = True, stream_index: Optional[int] = None) Audio.encode_example(self, value: Union[str, bytes, bytearray, dict, "AudioDecoder"]) -> dict Audio.decode_example(self, value: dict, token_per_repo_id: Optional[dict[str, Union[str, bool, None]]] = None) -> "AudioDecoder": ``` ### Video ```python Video(decode: bool = True, stream_index: Optional[int] = None, dimension_order: Literal['NCHW', 'NHWC'] = 'NCHW', num_ffmpeg_threads: int = 1, device: Optional[Union[str, "torch.device"]] = 'cpu', seek_mode: Literal['exact', 'approximate'] = 'exact') Video.encode_example(self, value: Union[str, bytes, bytearray, Example, np.ndarray, "VideoDecoder"]) -> Example: Video.decode_example(self, value: Union[str, Example], token_per_repo_id: Optional[dict[str, Union[bool, str]]] = None, ) -> "VideoDecoder": ``` ## Notes Audio features constructor takes in 1 new optional param stream_index which is passed to the AudioDecoder constructor to select the stream index of a file. Audio feature can now take in torchcodec.decoders.AudioDecoder as input to encode_example() Audio feature decode_example() returns torchcodec.decoders.AudioDecoder Video feature constructor takes in 5 new optional params stream_index, dimension_order, num_ffmpeg_threads, device, seek_mode all of which are passed to VideoDecoder constructor Video feature decode_example() returns torchcodec.decoders.VideoDecoder Video feature can now take in torchcodec.decoders.VideoDecoder as input to encode_example() All test cases have been updated to reflect these changes All documentation has also been updated to reflect these changes. Both VideoDecoder and AudioDecoder when formatted with (np_formatter, tf_formatter, etc) will ignore the type and return themselves. Formatting test cases were updated accordingly to reflect this. (Pretty simple to make this not the case if we want though) ## Errors This test case from `tests/packaged_modules/test_audiofolder.py` ```python @require_librosa @require_sndfile @pytest.mark.parametrize("streaming", [False, True]) def test_data_files_with_metadata_and_archives(streaming, cache_dir, data_files_with_zip_archives): audiofolder = AudioFolder(data_files=data_files_with_zip_archives, cache_dir=cache_dir) audiofolder.download_and_prepare() datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset() for split, data_files in data_files_with_zip_archives.items(): num_of_archives = len(data_files) # the metadata file is inside the archive expected_num_of_audios = 2 * num_of_archives assert split in datasets dataset = list(datasets[split]) assert len(dataset) == expected_num_of_audios # make sure each sample has its own audio (all arrays are different) and metadata assert ( sum(np.array_equal(dataset[0]["audio"].get_all_samples().data.numpy(), example["audio"].get_all_samples().data.numpy()) for example in dataset[1:]) == 0 ) assert len({example["text"] for example in dataset}) == expected_num_of_audios assert all(example["text"] is not None for example in dataset) ``` Fails now because AudioDecoder needs to access the files after the lines below are run, but there seems to be some context issues. The file the decoder is trying to read is closed before the decoder gets the chance to decode it. ```python audiofolder.download_and_prepare() datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset() ```
true
3,143,443,498
https://api.github.com/repos/huggingface/datasets/issues/7615
https://github.com/huggingface/datasets/pull/7615
7,615
remove unused code
closed
1
2025-06-13T12:37:30
2025-06-13T12:39:59
2025-06-13T12:37:40
lhoestq
[]
null
true
3,143,381,638
https://api.github.com/repos/huggingface/datasets/issues/7614
https://github.com/huggingface/datasets/pull/7614
7,614
Lazy column
closed
1
2025-06-13T12:12:57
2025-06-17T13:08:51
2025-06-17T13:08:49
lhoestq
[]
Same as https://github.com/huggingface/datasets/pull/7564 but for `Dataset`, cc @TopCoder2K FYI e.g. `ds[col]` now returns a lazy Column instead of a list This way calling `ds[col][idx]` only loads the required data in memory (bonus: also supports subfields access with `ds[col][subcol][idx]`) the breaking change will be for the next major release, which also includes removal of dataset scripts support close https://github.com/huggingface/datasets/issues/4180
true
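A usage sketch of the lazy column access described in #7614, assuming the behavior the PR describes (column and subfield access without materializing the full list); the toy data is illustrative:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"], "metadata": [{"source": "x"}] * 3})
col = ds["text"]                    # a lazy Column rather than a fully loaded list
first = ds["text"][0]               # only the required row is read into memory
sub = ds["metadata"]["source"][0]   # subfield access, per the PR description
print(first, sub)
```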
3,142,819,991
https://api.github.com/repos/huggingface/datasets/issues/7613
https://github.com/huggingface/datasets/pull/7613
7,613
fix parallel push_to_hub in dataset_dict
closed
1
2025-06-13T09:02:24
2025-06-13T12:30:23
2025-06-13T12:30:22
lhoestq
[]
null
true
3,141,905,049
https://api.github.com/repos/huggingface/datasets/issues/7612
https://github.com/huggingface/datasets/issues/7612
7,612
Provide an option of robust dataset iterator with error handling
open
2
2025-06-13T00:40:48
2025-06-24T16:52:30
null
wwwjn
[ "enhancement" ]
### Feature request Adding an option to skip corrupted data samples. Currently the datasets behavior is to throw an error if a data sample is corrupted, letting the user become aware of and handle the corruption. When I tried to try-catch the error at the user level, the iterator raised StopIteration when I called next() again. The way I tried to do error handling is: (This doesn't work, unfortunately) ``` # Load the dataset with streaming enabled dataset = load_dataset( "pixparse/cc12m-wds", split="train", streaming=True ) # Get an iterator from the dataset iterator = iter(dataset) while True: try: # Try to get the next example example = next(iterator) # Try to access and process the image image = example["jpg"] pil_image = Image.fromarray(np.array(image)) pil_image.verify() # Verify it's a valid image file except StopIteration: # Code path 1 print("\nStopIteration was raised! Reached the end of the dataset") raise StopIteration except Exception as e: # Code path 2 errors += 1 print("Error! Skip this sample") continue else: successful += 1 ``` This is because the `IterableDataset` already throws an error (reaches Code path 2). And if I keep calling next(), it will hit Code path 1. This is because the inner iterator of `IterableDataset` ([code](https://github.com/huggingface/datasets/blob/89bd1f971402acb62805ef110bc1059c38b1c8c6/src/datasets/iterable_dataset.py#L2242)) has been stopped, so calling next() on it will raise StopIteration. So I cannot skip the corrupted data sample this way. Would also love to hear any suggestions about creating a robust dataloader. Thanks for your help in advance! ### Motivation ## Public dataset corruption might be common A lot of users use public datasets, and a public dataset might contain some corrupted data, especially datasets with images / video etc. I totally understand it's the dataset owner's and user's responsibility to ensure data integrity / run data cleaning or preprocessing, but a robust iterator would make things easier for developers who use the dataset. ## Use cases For example, a robust dataloader would help users who want to run quick tests on different datasets and choose the one that fits their needs. Users could then use an IterableDataset with `streaming=True` without downloading the dataset and removing corrupted samples from it. ### Your contribution The error handling might not be trivial and might need more careful design.
false
3,141,383,940
https://api.github.com/repos/huggingface/datasets/issues/7611
https://github.com/huggingface/datasets/issues/7611
7,611
Code example for dataset.add_column() does not reflect correct way to use function
closed
2
2025-06-12T19:42:29
2025-07-17T13:14:18
2025-07-17T13:14:18
shaily99
[]
https://github.com/huggingface/datasets/blame/38d4d0e11e22fdbc4acf373d2421d25abeb43439/src/datasets/arrow_dataset.py#L5925C10-L5925C10 The example seems to suggest that dataset.add_column() can add a column in place; however, this is wrong -- it cannot. It returns a new dataset with the column added to it.
false
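For reference, the non-in-place usage the issue describes looks like this (a minimal sketch with toy data):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds_with_b = ds.add_column("b", ["x", "y", "z"])  # returns a new Dataset
print(ds.column_names)          # ['a']  (unchanged)
print(ds_with_b.column_names)   # ['a', 'b']
```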
3,141,281,560
https://api.github.com/repos/huggingface/datasets/issues/7610
https://github.com/huggingface/datasets/issues/7610
7,610
i cant confirm email
open
2
2025-06-12T18:58:49
2025-06-27T14:36:47
null
lykamspam
[]
### Describe the bug This is difficult: I can't confirm my email because I'm not getting any email! I can't post on the forum because I can't confirm my email! I can't contact the help desk because... it doesn't exist on the web page. paragraph 44 ### Steps to reproduce the bug rthjrtrt ### Expected behavior ewtgfwetgf ### Environment info sdgfswdegfwe
false
3,140,373,128
https://api.github.com/repos/huggingface/datasets/issues/7609
https://github.com/huggingface/datasets/pull/7609
7,609
Update `_dill.py` to use `co_linetable` for Python 3.10+ in place of `co_lnotab`
closed
4
2025-06-12T13:47:01
2025-06-16T12:14:10
2025-06-16T12:14:08
qgallouedec
[]
Not 100% about this one, but it seems to be recommended. ``` /fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead. ``` Tests pass locally. And the warning is gone with this change. https://peps.python.org/pep-0626/#backwards-compatibility
true
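A sketch of the kind of version guard the PR describes (illustrative, not the actual `_dill.py` change): `co_linetable` exists on Python 3.10+, while older interpreters only provide `co_lnotab`.

```python
import sys

def code_line_info(code):
    # Prefer the newer attribute on 3.10+ to avoid the DeprecationWarning quoted above.
    if sys.version_info >= (3, 10):
        return code.co_linetable
    return code.co_lnotab

print(len(code_line_info(code_line_info.__code__)))
```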
3,137,564,259
https://api.github.com/repos/huggingface/datasets/issues/7608
https://github.com/huggingface/datasets/pull/7608
7,608
Tests typing and fixes for push_to_hub
closed
1
2025-06-11T17:13:52
2025-06-12T21:15:23
2025-06-12T21:15:21
lhoestq
[]
todo: - [x] fix TestPushToHub.test_push_dataset_dict_to_hub_iterable_num_proc
true
3,135,722,560
https://api.github.com/repos/huggingface/datasets/issues/7607
https://github.com/huggingface/datasets/issues/7607
7,607
Video and audio decoding with torchcodec
closed
16
2025-06-11T07:02:30
2025-06-19T18:25:49
2025-06-19T18:25:49
TyTodd
[ "enhancement" ]
### Feature request Pytorch is migrating video processing to torchcodec and it's pretty cool. It would be nice to migrate both the audio and video features to use torchcodec instead of torchaudio/video. ### Motivation My use case is I'm working on a multimodal AV model, and what's nice about torchcodec is I can extract the audio tensors directly from MP4 files. Also, I can easily resample video data to whatever fps I like on the fly. I haven't found an easy/efficient way to do this with torchvision. ### Your contribution I’m modifying the Video dataclass to use torchcodec in place of the current backend, starting from a stable commit for a project I’m working on. If it ends up working well, I’m happy to open a PR on main.
false
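A sketch of the use case described in #7607, based on the torchcodec API referenced in PR #7616 above; the file name is a placeholder and the exact constructor arguments should be treated as assumptions:

```python
from torchcodec.decoders import AudioDecoder, VideoDecoder

audio = AudioDecoder("clip.mp4")      # audio track read directly from the MP4
samples = audio.get_all_samples()     # decoded samples plus metadata
waveform = samples.data               # torch.Tensor with the audio

video = VideoDecoder("clip.mp4")
first_frame = video[0]                # frame access by index
```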
3,133,848,546
https://api.github.com/repos/huggingface/datasets/issues/7606
https://github.com/huggingface/datasets/pull/7606
7,606
Add `num_proc=` to `.push_to_hub()` (Dataset and IterableDataset)
closed
1
2025-06-10T14:35:10
2025-06-11T16:47:28
2025-06-11T16:47:25
lhoestq
[]
null
true
3,131,636,882
https://api.github.com/repos/huggingface/datasets/issues/7605
https://github.com/huggingface/datasets/pull/7605
7,605
Make `push_to_hub` atomic (#7600)
closed
4
2025-06-09T22:29:38
2025-06-23T19:32:08
2025-06-23T19:32:08
sharvil
[]
null
true
3,130,837,169
https://api.github.com/repos/huggingface/datasets/issues/7604
https://github.com/huggingface/datasets/pull/7604
7,604
Docs and more methods for IterableDataset: push_to_hub, to_parquet...
closed
1
2025-06-09T16:44:40
2025-06-10T13:15:23
2025-06-10T13:15:21
lhoestq
[]
to_csv, to_json, to_sql, to_pandas, to_polars, to_dict, to_list
true
3,130,394,563
https://api.github.com/repos/huggingface/datasets/issues/7603
https://github.com/huggingface/datasets/pull/7603
7,603
No TF in win tests
closed
1
2025-06-09T13:56:34
2025-06-09T15:33:31
2025-06-09T15:33:30
lhoestq
[]
null
true
3,128,758,924
https://api.github.com/repos/huggingface/datasets/issues/7602
https://github.com/huggingface/datasets/pull/7602
7,602
Enhance error handling and input validation across multiple modules
open
0
2025-06-08T23:01:06
2025-06-08T23:01:06
null
mohiuddin-khan-shiam
[]
This PR improves the robustness and user experience by: 1. **Audio Module**: - Added clear error messages when required fields ('path' or 'bytes') are missing in audio encoding 2. **DatasetDict**: - Enhanced key access error messages to show available splits when an invalid key is accessed 3. **NonMutableDict**: - Added input validation for the update() method to ensure proper mapping types 4. **Arrow Reader**: - Improved error messages for small dataset percentage splits with suggestions for alternatives 5. **FaissIndex**: - Strengthened input validation with descriptive error messages - Added proper type checking and shape validation for search queries These changes make the code more maintainable and user-friendly by providing actionable feedback when issues arise.
true
3,127,296,182
https://api.github.com/repos/huggingface/datasets/issues/7600
https://github.com/huggingface/datasets/issues/7600
7,600
`push_to_hub` is not concurrency safe (dataset schema corruption)
closed
4
2025-06-07T17:28:56
2025-07-31T10:00:50
2025-07-31T10:00:50
sharvil
[]
### Describe the bug Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable. Consider this scenario: - we have an Arrow dataset - there are `N` configs of the dataset - there are `N` independent processes operating on each of the individual configs (e.g. adding a column, `new_col`) - each process calls `push_to_hub` on their particular config when they're done processing - all calls to `push_to_hub` succeed - the `README.md` now has some configs with `new_col` added and some with `new_col` missing Any attempt to load a config (using `load_dataset`) where `new_col` is missing will fail because of a schema mismatch between `README.md` and the Arrow files. Fixing the dataset requires updating `README.md` by hand with the correct schema for the affected config. In effect, `push_to_hub` is doing a `git push --force` (I found this behavior quite surprising). We have hit this issue every time we run processing jobs over our datasets and have to fix corrupted schemas by hand. Reading through the code, it seems that specifying a [`parent_commit`](https://github.com/huggingface/huggingface_hub/blob/v0.32.4/src/huggingface_hub/hf_api.py#L4587) hash around here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5794 would get us to a normal, non-forced git push, and avoid schema corruption. I'm not familiar enough with the code to know how to determine the commit hash from which the in-memory dataset card was loaded. ### Steps to reproduce the bug See above. ### Expected behavior Concurrent edits to disjoint configs of a dataset should never corrupt the dataset schema. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35 - Python version: 3.10.14 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.2 - `fsspec` version: 2023.9.0
false
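A sketch of the non-forced-push idea from #7600, using the `parent_commit` parameter of `HfApi.create_commit` that the issue links to (the repo id and file contents are placeholders):

```python
from huggingface_hub import CommitOperationAdd, HfApi

api = HfApi()
repo_id = "my-org/my-dataset"
info = api.repo_info(repo_id, repo_type="dataset")
api.create_commit(
    repo_id=repo_id,
    repo_type="dataset",
    operations=[CommitOperationAdd(path_in_repo="README.md", path_or_fileobj=b"...")],
    commit_message="Update dataset card",
    parent_commit=info.sha,  # reject the commit if another process pushed in between
)
```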
3,125,620,119
https://api.github.com/repos/huggingface/datasets/issues/7599
https://github.com/huggingface/datasets/issues/7599
7,599
My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl
closed
3
2025-06-06T18:59:00
2025-06-16T15:18:00
2025-06-16T15:18:00
JuanCarlosMartinezSevilla
[]
### Describe the bug Hi everyone, I uploaded my dataset https://huggingface.co/datasets/PRAIG/SMB a few months ago while I was waiting for a conference acceptance response. Without my modifying anything in the dataset repository, the Dataset viewer is now not rendering the metadata.jsonl annotations, nor is the metadata downloaded when using load_dataset. Can you please help? Thank you in advance. ### Steps to reproduce the bug from datasets import load_dataset ds = load_dataset("PRAIG/SMB") ds = ds["train"] ### Expected behavior All the metadata in the jsonl file is expected to be available: fields like "score_id", "original_width", "original_height", "regions"... among others. ### Environment info datasets==3.6.0, python 3.13.3 (but the problem is already visible on the Hugging Face dataset page)
false
3,125,184,457
https://api.github.com/repos/huggingface/datasets/issues/7598
https://github.com/huggingface/datasets/pull/7598
7,598
fix string_to_dict usage for windows
closed
1
2025-06-06T15:54:29
2025-06-06T16:12:22
2025-06-06T16:12:21
lhoestq
[]
null
true
3,123,962,709
https://api.github.com/repos/huggingface/datasets/issues/7597
https://github.com/huggingface/datasets/issues/7597
7,597
Download datasets from a private hub in 2025
closed
2
2025-06-06T07:55:19
2025-06-13T13:46:00
2025-06-13T13:46:00
DanielSchuhmacher
[ "enhancement" ]
### Feature request In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted. This issue was raised before here: https://github.com/huggingface/datasets/issues/3679 @juliensimon ### Motivation none ### Your contribution none
false
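One possible approach (an assumption about the deployment, not something confirmed in the issue): point the client libraries at the private hub via the `HF_ENDPOINT` environment variable before importing them; the URL and repo id are placeholders:

```python
import os

os.environ["HF_ENDPOINT"] = "https://hub.internal.example.com"  # private hub URL

from datasets import load_dataset  # import after setting HF_ENDPOINT

ds = load_dataset("my-org/private-dataset", token=True)
```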
3,122,595,042
https://api.github.com/repos/huggingface/datasets/issues/7596
https://github.com/huggingface/datasets/pull/7596
7,596
Add albumentations to use dataset
closed
3
2025-06-05T20:39:46
2025-06-17T18:38:08
2025-06-17T14:44:30
ternaus
[]
1. Fixed a broken link to the list of transforms in torchvision. 2. Extended the section about video/image augmentations with an example from Albumentations.
true
3,121,689,436
https://api.github.com/repos/huggingface/datasets/issues/7595
https://github.com/huggingface/datasets/pull/7595
7,595
Add `IterableDataset.push_to_hub()`
closed
1
2025-06-05T15:29:32
2025-06-06T16:12:37
2025-06-06T16:12:36
lhoestq
[]
Basic implementation, which writes one shard per input dataset shard. This is to be improved later. Close https://github.com/huggingface/datasets/issues/5665 PS: for image/audio datasets structured as actual image/audio files (not parquet), you can sometimes speed it up with `ds.decode(num_threads=...).push_to_hub(...)`
true
3,120,799,626
https://api.github.com/repos/huggingface/datasets/issues/7594
https://github.com/huggingface/datasets/issues/7594
7,594
Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format)
open
8
2025-06-05T11:12:45
2025-06-28T09:03:00
null
avishaiElmakies
[ "enhancement" ]
### Feature request Hi, I would like the option to ignore keys/columns when loading a dataset from files (e.g. jsonl). ### Motivation I am working on a dataset which is built on jsonl. It seems the dataset is unclean and a column has different types in each row. I can't clean this or remove the column (It is not my data and it is too big for me to clean and save on my own hardware). I would like the option to just ignore this column when using `load_dataset`, since i don't need it. I tried to look if this is already possible but couldn't find a solution. if there is I would love some help. If it is not currently possible, I would love this feature ### Your contribution I don't think I can help this time, unfortunately.
false
3,118,812,368
https://api.github.com/repos/huggingface/datasets/issues/7593
https://github.com/huggingface/datasets/pull/7593
7,593
Fix broken link to albumentations
closed
2
2025-06-04T19:00:13
2025-06-05T16:37:02
2025-06-05T16:36:32
ternaus
[]
A few months back I rewrote all docs at [https://albumentations.ai/docs](https://albumentations.ai/docs), and some pages changed their links. This PR fixes the link to the most recent Albumentations doc about bounding boxes and their format, and fixes a few typos in the doc as well.
true
3,118,203,880
https://api.github.com/repos/huggingface/datasets/issues/7592
https://github.com/huggingface/datasets/pull/7592
7,592
Remove scripts altogether
closed
6
2025-06-04T15:14:11
2025-08-04T15:17:05
2025-06-09T16:45:27
lhoestq
[]
TODO: - [x] replace script-based fixtures with no-script fixtures - [x] windaube
true
3,117,816,388
https://api.github.com/repos/huggingface/datasets/issues/7591
https://github.com/huggingface/datasets/issues/7591
7,591
Add num_proc parameter to push_to_hub
open
3
2025-06-04T13:19:15
2025-06-27T06:13:54
null
SwayStar123
[ "enhancement" ]
### Feature request A number-of-processes parameter for the dataset.push_to_hub method ### Motivation Shards are currently uploaded serially, which is slow when there are many shards; uploading could be done in parallel and much faster
false
3,101,654,892
https://api.github.com/repos/huggingface/datasets/issues/7590
https://github.com/huggingface/datasets/issues/7590
7,590
`Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema.
closed
6
2025-05-29T22:53:36
2025-07-19T22:45:08
2025-07-19T22:45:08
AHS-uni
[]
### Description When loading a dataset with a field declared as a list of structs using `Sequence(Features(...))`, `load_dataset` incorrectly infers the field as a plain `struct<...>` instead of a `list<struct<...>>`. This leads to the following error: ``` ArrowNotImplementedError: Unsupported cast from list<item: struct<id: string, data: string>> to struct using function cast_struct ``` This occurs even when the `features` schema is explicitly provided and the dataset format supports nested structures natively (e.g., JSON, JSONL). --- ### Minimal Reproduction [Colab Link.](https://colab.research.google.com/drive/1FZPQy6TP3jVd4B3mYKyfQaWNuOAvljUq?usp=sharing) #### Dataset ```python data = [ { "list": [ {"id": "example1", "data": "text"}, ] }, ] ``` #### Schema ```python from datasets import Features, Sequence, Value item = Features({ "id": Value("string"), "data": Value("string"), }) features = Features({ "list": Sequence(item), }) ``` --- ### Tested File Formats The same schema was tested across different formats: | Format | Method | Result | | --------- | --------------------------- | ------------------- | | JSONL | `load_dataset("json", ...)` | Arrow cast error | | JSON | `load_dataset("json", ...)` | Arrow cast error | | In-memory | `Dataset.from_list(...)` | Works as expected | The issue seems not to be in the schema or the data, but in how `load_dataset()` handles the `Sequence(Features(...))` pattern when parsing from files (specifically JSON and JSONL). --- ### Expected Behavior If `features` is explicitly defined as: ```python Features({"list": Sequence(Features({...}))}) ``` Then the data should load correctly across all backends β€” including from JSON and JSONL β€” without any Arrow casting errors. This works correctly when loading from memory via `Dataset.from_list`. --- ### Environment * `datasets`: 3.6.0 * `pyarrow`: 20.0.0 * Python: 3.12.10 * OS: Ubuntu 24.04.2 LTS * Notebook: \[Colab test notebook available] ---
false
3,101,119,704
https://api.github.com/repos/huggingface/datasets/issues/7589
https://github.com/huggingface/datasets/pull/7589
7,589
feat: use content defined chunking
open
3
2025-05-29T18:19:41
2025-07-25T11:56:51
null
kszucs
[]
Use content-defined chunking by default when writing Parquet files. - [x] set the parameters in `io.parquet.ParquetDatasetReader` - [x] set the parameters in `arrow_writer.ParquetWriter` This requires a new pyarrow pin ">=21.0.0", which has now been released.
true
3,094,012,025
https://api.github.com/repos/huggingface/datasets/issues/7588
https://github.com/huggingface/datasets/issues/7588
7,588
ValueError: Invalid pattern: '**' can only be an entire path component [Colab]
closed
5
2025-05-27T13:46:05
2025-05-30T13:22:52
2025-05-30T01:26:30
wkambale
[]
### Describe the bug I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that i've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate). now i changed a few hyperparameters to increase number of tokens for the model, increase Transformer layers, and all however, when i try to load the dataset, this error keeps coming up.. i have tried everything.. i have re-written the code a hundred times, and this keep coming up ### Steps to reproduce the bug Imports: ```bash !pip install datasets huggingface_hub fsspec ``` Python code: ```python from datasets import load_dataset HF_DATASET_NAME = "kambale/luganda-english-parallel-corpus" # Load the dataset try: if not HF_DATASET_NAME or HF_DATASET_NAME == "YOUR_HF_DATASET_NAME": raise ValueError( "Please provide a valid Hugging Face dataset name." ) dataset = load_dataset(HF_DATASET_NAME) # Omitted code as the error happens on the line above except ValueError as ve: print(f"Configuration Error: {ve}") raise except Exception as e: print(f"An error occurred while loading the dataset '{HF_DATASET_NAME}': {e}") raise e ``` now, i have tried going through this [issue](https://github.com/huggingface/datasets/issues/6737) and nothing helps ### Expected behavior loading the dataset successfully and perform splits (train, test, validation) ### Environment info from the imports, i do not install specific versions of these libraries, so the latest or available version is installed * `datasets` version: latest * `Platform`: Google Colab * `Hardware`: NVIDIA A100 GPU * `Python` version: latest * `huggingface_hub` version: latest * `fsspec` version: latest
false
3,091,834,987
https://api.github.com/repos/huggingface/datasets/issues/7587
https://github.com/huggingface/datasets/pull/7587
7,587
load_dataset splits typing
closed
1
2025-05-26T18:28:40
2025-05-26T18:31:10
2025-05-26T18:29:57
lhoestq
[]
close https://github.com/huggingface/datasets/issues/7583
true
3,091,320,431
https://api.github.com/repos/huggingface/datasets/issues/7586
https://github.com/huggingface/datasets/issues/7586
7,586
help is appreciated
open
1
2025-05-26T14:00:42
2025-05-26T18:21:57
null
rajasekarnp1
[ "enhancement" ]
### Feature request https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main ### Motivation AI model development and audio ### Your contribution AI model development and audio
false
3,091,227,921
https://api.github.com/repos/huggingface/datasets/issues/7585
https://github.com/huggingface/datasets/pull/7585
7,585
Avoid multiple default config names
closed
1
2025-05-26T13:27:59
2025-06-05T12:41:54
2025-06-05T12:41:52
albertvillanova
[]
Fix duplicating default config names. Currently, when calling `push_to_hub(set_default=True` with 2 different config names, both are set as default. Moreover, this will generate an error next time we try to push another default config name, raised by `MetadataConfigs.get_default_config_name`: https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/arrow_dataset.py#L5757 https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/utils/metadata.py#L186-L188
true
3,090,255,023
https://api.github.com/repos/huggingface/datasets/issues/7584
https://github.com/huggingface/datasets/issues/7584
7,584
Add LMDB format support
open
1
2025-05-26T07:10:13
2025-05-26T18:23:37
null
trotsky1997
[ "enhancement" ]
### Feature request Add LMDB format support for large memory-mapped files ### Motivation Add LMDB format support for large memory-mapped files ### Your contribution I'm trying to add it
false
3,088,987,757
https://api.github.com/repos/huggingface/datasets/issues/7583
https://github.com/huggingface/datasets/issues/7583
7,583
load_dataset type stubs reject List[str] for split parameter, but runtime supports it
closed
0
2025-05-25T02:33:18
2025-05-26T18:29:58
2025-05-26T18:29:58
hierr
[]
### Describe the bug The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type checkers like Pylance to raise `reportArgumentType` errors when passing a list of strings, even though it works as intended at runtime. ### Steps to reproduce the bug 1. Use load_dataset with multiple splits e.g.: ``` from datasets import load_dataset ds_train, ds_val, ds_test = load_dataset( "Silly-Machine/TuPyE-Dataset", "binary", split=["train[:75%]", "train[75%:]", "test"] ) ``` 2. Observe that code executes correctly at runtime and Pylance raises `Argument of type "List[str]" cannot be assigned to parameter "split" of type "str | Split | None"` ### Expected behavior The type stubs for [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) should accept `Union[str, Split, List[str], None]` or more specific overloads for the split parameter to correctly represent runtime behavior. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.7 - `huggingface_hub` version: 0.32.0 - PyArrow version: 20.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0
false
3,083,515,643
https://api.github.com/repos/huggingface/datasets/issues/7582
https://github.com/huggingface/datasets/pull/7582
7,582
fix: Add embed_storage in Pdf feature
closed
1
2025-05-22T14:06:29
2025-05-22T14:17:38
2025-05-22T14:17:36
AndreaFrancis
[]
Add missing `embed_storage` method in Pdf feature (Same as in Audio and Image)
true
3,083,080,413
https://api.github.com/repos/huggingface/datasets/issues/7581
https://github.com/huggingface/datasets/pull/7581
7,581
Add missing property on `RepeatExamplesIterable`
closed
0
2025-05-22T11:41:07
2025-06-05T12:41:30
2025-06-05T12:41:29
SilvanCodes
[]
Fixes #7561
true
3,082,993,027
https://api.github.com/repos/huggingface/datasets/issues/7580
https://github.com/huggingface/datasets/issues/7580
7,580
Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False.
open
1
2025-05-22T11:08:16
2025-05-26T18:40:31
null
s3pi
[]
### Describe the bug When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call. This behavior leads to unnecessary bandwidth usage and longer download times, especially for large datasets, even if the user only intends to use a single split. ### Steps to reproduce the bug dataset_name = "skbose/indian-english-nptel-v0" dataset = load_dataset(dataset_name, token=hf_token, split="test") ### Expected behavior Optimize the download logic so that only the required split is downloaded when streaming=False and a specific split is provided. ### Environment info Dataset: skbose/indian-english-nptel-v0 Platform: M1 Apple Silicon Python version: 3.12.9 datasets>=3.5.0
false
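A workaround sketch that follows from the issue text: with `streaming=True`, only the requested split is read, so the other splits are not downloaded (add `token=...` if the dataset requires authentication, as in the issue):

```python
from datasets import load_dataset

ds_test = load_dataset("skbose/indian-english-nptel-v0", split="test", streaming=True)
for example in ds_test.take(2):
    print(example)
```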
3,081,849,022
https://api.github.com/repos/huggingface/datasets/issues/7579
https://github.com/huggingface/datasets/pull/7579
7,579
Fix typos in PDF and Video documentation
closed
1
2025-05-22T02:27:40
2025-05-22T12:53:49
2025-05-22T12:53:47
AndreaFrancis
[]
null
true
3,080,833,740
https://api.github.com/repos/huggingface/datasets/issues/7577
https://github.com/huggingface/datasets/issues/7577
7,577
arrow_schema is not compatible with list
closed
3
2025-05-21T16:37:01
2025-05-26T18:49:51
2025-05-26T18:32:55
jonathanshen-upwork
[]
### Describe the bug ``` import datasets f = datasets.Features({'x': list[datasets.Value(dtype='int32')]}) f.arrow_schema Traceback (most recent call last): File "datasets/features/features.py", line 1826, in arrow_schema return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)}) ^^^^^^^^^ File "datasets/features/features.py", line 1815, in type return get_nested_type(self) ^^^^^^^^^^^^^^^^^^^^^ File "datasets/features/features.py", line 1252, in get_nested_type return pa.struct( ^^^^^^^^^^ File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type TypeError: DataType expected, got <class 'list'> ``` The following works ``` f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))}) ``` ### Expected behavior according to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765 python list should be a valid type specification for features ### Environment info - `datasets` version: 3.5.1 - Platform: macOS-15.5-arm64-arm-64bit - Python version: 3.12.9 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
false
3,080,450,538
https://api.github.com/repos/huggingface/datasets/issues/7576
https://github.com/huggingface/datasets/pull/7576
7,576
Fix regex library warnings
closed
1
2025-05-21T14:31:58
2025-06-05T13:35:16
2025-06-05T12:37:55
emmanuel-ferdman
[]
# PR Summary This small PR resolves the regex library warnings that appear starting with Python 3.11: ```python DeprecationWarning: 'count' is passed as positional argument ```
true
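A sketch of the kind of fix the PR describes: pass `count` by keyword instead of positionally so the DeprecationWarning is not emitted (illustrative, not the PR's diff).

```python
import re

# Replace only the first whitespace run; `count` is passed by keyword.
text = re.sub(r"\s+", " ", "a   b\tc", count=1)
print(text)
```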
3,080,228,718
https://api.github.com/repos/huggingface/datasets/issues/7575
https://github.com/huggingface/datasets/pull/7575
7,575
[MINOR:TYPO] Update save_to_disk docstring
closed
0
2025-05-21T13:22:24
2025-06-05T12:39:13
2025-06-05T12:39:13
cakiki
[]
r/hub/filesystem in save_to_disk
true
3,079,641,072
https://api.github.com/repos/huggingface/datasets/issues/7574
https://github.com/huggingface/datasets/issues/7574
7,574
Missing multilingual directions in IWSLT2017 dataset's processing script
open
2
2025-05-21T09:53:17
2025-05-26T18:36:38
null
andy-joy-25
[]
### Describe the bug Hi, Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub for loading the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` using it. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the list of all the configs present in `IWSLT/iwslt2017`. This should not be the case since as mentioned in their original paper (please see https://aclanthology.org/2017.iwslt-1.1.pdf), the authors specify that "_this year we proposed the multilingual translation between any pair of languages from {Dutch, English, German, Italian, Romanian}..._" and because these datasets are indeed present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`. Best Regards, Anand ### Steps to reproduce the bug Check the output of `get_dataset_config_names("IWSLT/iwslt2017", trust_remote_code=True)`: only 24 language pairs are present and the following 6 config names are absent: `iwslt2017-de-it`, `iwslt2017-de-ro`, `iwslt2017-de-nl`, `iwslt2017-it-de`, `iwslt2017-nl-de`, and `iwslt2017-ro-de`. ### Expected behavior The aforementioned 6 language pairs should also be present and hence, all these 6 language pairs' IWSLT2017 datasets must also be available for further use. I would suggest removing `de` from the `BI_LANGUAGES` list and moving it over to the `MULTI_LANGUAGES` list instead in `iwslt2017.py` to account for all the 6 missing language pairs (the same `de-en` dataset is present in both `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip` and `data/2017-01-trnted/texts/de/en/de-en.zip` but the `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` datasets are only present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`: so, its unclear why the following comment: _`# XXX: Artificially removed DE from here, as it also exists within bilingual data`_ has been added as `L71` in `iwslt2017.py`). The `README.md` file in `IWSLT/iwslt2017`must then be re-created using `datasets-cli test path/to/iwslt2017.py --save_info --all_configs` to pass all split size verification checks for the 6 new language pairs which were previously non-existent. ### Environment info - `datasets` version: 3.5.0 - Platform: Linux-6.8.0-56-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.30.1 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
false
3,076,415,382
https://api.github.com/repos/huggingface/datasets/issues/7573
https://github.com/huggingface/datasets/issues/7573
7,573
No Samsum dataset
closed
4
2025-05-20T09:54:35
2025-07-21T18:34:34
2025-06-18T12:52:23
IgorKasianenko
[]
### Describe the bug https://huggingface.co/datasets/Samsung/samsum dataset not found error 404 Originated from https://github.com/meta-llama/llama-cookbook/issues/948 ### Steps to reproduce the bug go to website https://huggingface.co/datasets/Samsung/samsum see the error also downloading it with python throws ``` Couldn't find 'Samsung/samsum' on the Hugging Face Hub either: FileNotFoundError: Samsung/samsum@f00baf5a7d4abfec6820415493bcb52c587788e6/samsum.py (repository not found) ``` ### Expected behavior Dataset exists ### Environment info ``` - `datasets` version: 3.2.0 - Platform: macOS-15.4.1-arm64-arm-64bit - Python version: 3.12.2 - `huggingface_hub` version: 0.26.5 - PyArrow version: 16.1.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0 ```
false
3,074,529,251
https://api.github.com/repos/huggingface/datasets/issues/7572
https://github.com/huggingface/datasets/pull/7572
7,572
Fixed typos
closed
1
2025-05-19T17:16:59
2025-06-05T12:25:42
2025-06-05T12:25:41
TopCoder2K
[]
More info: [comment](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781).
true
3,074,116,942
https://api.github.com/repos/huggingface/datasets/issues/7571
https://github.com/huggingface/datasets/pull/7571
7,571
fix string_to_dict test
closed
1
2025-05-19T14:49:23
2025-05-19T14:52:24
2025-05-19T14:49:28
lhoestq
[]
null
true
3,065,966,529
https://api.github.com/repos/huggingface/datasets/issues/7570
https://github.com/huggingface/datasets/issues/7570
7,570
Dataset lib seems to be broken after fsspec lib update
closed
3
2025-05-15T11:45:06
2025-06-13T00:44:27
2025-06-13T00:44:27
sleepingcat4
[]
### Describe the bug I am facing an issue since today where HF's dataset is acting weird and in some instances failure to recognise a valid dataset entirely, I think it is happening due to recent change in `fsspec` lib as using this command fixed it for me in one-time: `!pip install -U datasets huggingface_hub fsspec` ### Steps to reproduce the bug from datasets import load_dataset def download_hf(): dataset_name = input("Enter the dataset name: ") subset_name = input("Enter subset name: ") ds = load_dataset(dataset_name, name=subset_name) for split in ds: ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False) download_hf() ### Expected behavior ``` Downloading readme: 100%  1.55k/1.55k [00:00<00:00, 121kB/s] Downloading data files: 100%  1/1 [00:00<00:00,  2.06it/s] Downloading data: 0%| | 0.00/54.2k [00:00<?, ?B/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 54.2k/54.2k [00:00<00:00, 121kB/s] Extracting data files: 100%  1/1 [00:00<00:00, 35.17it/s] Generating test split:   140/0 [00:00<00:00, 2628.62 examples/s] --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) [<ipython-input-2-12ab305b0e77>](https://localhost:8080/#) in <cell line: 0>() 8 ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False) 9 ---> 10 download_hf() 2 frames [/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1171 is_local = not is_remote_filesystem(self._fs) 1172 if not is_local: -> 1173 raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") 1174 if not os.path.exists(self._output_dir): 1175 raise FileNotFoundError( NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. ``` OR ``` Traceback (most recent call last): File "e:\Fuck\download-data\mcq_dataset.py", line 10, in <module> download_hf() File "e:\Fuck\download-data\mcq_dataset.py", line 6, in download_hf ds = load_dataset(dataset_name, name=subset_name) File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2606, in load_dataset builder_instance = load_dataset_builder( File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2277, in load_dataset_builder dataset_module = dataset_module_factory( File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1917, in dataset_module_factory raise e1 from None File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1867, in dataset_module_factory raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e datasets.exceptions.DatasetNotFoundError: Dataset 'dataset repo_id' doesn't exist on the Hub or cannot be accessed. ``` ### Environment info colab and 3.10 local system
false
3,061,234,054
https://api.github.com/repos/huggingface/datasets/issues/7569
https://github.com/huggingface/datasets/issues/7569
7,569
Dataset creation is broken if nesting a dict inside a dict inside a list
open
2
2025-05-13T21:06:45
2025-05-20T19:25:15
null
TimSchneider42
[]
### Describe the bug Hey, I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details. Best, Tim ### Steps to reproduce the bug Runing this code: ```python from datasets import Dataset, Features, Sequence, Value def generator(): yield { "a": [{"b": {"c": 0}}], } features = Features( { "a": Sequence( feature={ "b": { "c": Value("int32"), }, }, length=1, ) } ) dataset = Dataset.from_generator(generator, features=features) ``` leads to ``` Generating train split: 1 examples [00:00, 540.85 examples/s] Traceback (most recent call last): File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1635, in _prepare_split_single num_examples, num_bytes = writer.finalize() ^^^^^^^^^^^^^^^^^ File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 657, in finalize self.write_examples_on_file() File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 510, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 629, in write_batch pa_table = pa.Table.from_arrays(arrays, schema=schema) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/table.pxi", line 4851, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 1608, in pyarrow.lib._sanitize_arrays File "pyarrow/array.pxi", line 399, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 1004, in pyarrow.lib.Array.cast File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/pyarrow/compute.py", line 405, in cast return call_function("cast", [arr], options, memory_pool) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/_compute.pyx", line 598, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 393, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Unsupported cast from fixed_size_list<item: struct<c: int32>>[1] to struct using function cast_struct The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/user/test/tools/hf_test2.py", line 23, in <module> dataset = Dataset.from_generator(generator, features=features) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1114, in from_generator ).read() ^^^^^^ File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/io/generator.py", line 49, in read self.builder.download_and_prepare( File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 925, in download_and_prepare self._download_and_prepare( File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1649, in _download_and_prepare super()._download_and_prepare( File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1001, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1487, in _prepare_split for job_id, done, content in self._prepare_split_single( File 
"/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1644, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset Process finished with exit code 1 ``` ### Expected behavior I expected this code not to lead to an error. I have done some digging and figured out that the problem seems to be the `get_nested_type` function in `features.py`, which, for whatever reason, flips Sequences and dicts whenever it encounters a dict inside of a sequence. This seems to be necessary, as disabling that flip leads to another error. However, by keeping that flip enabled for the highest level and disabling it for all subsequent levels, I was able to work around this problem. Specifically, by patching `get_nested_type` as follows, it works on the given example (emphasis on the `level` parameter I added): ```python def get_nested_type(schema: FeatureType, level=0) -> pa.DataType: """ get_nested_type() converts a datasets.FeatureType into a pyarrow.DataType, and acts as the inverse of generate_from_arrow_type(). It performs double-duty as the implementation of Features.type and handles the conversion of datasets.Feature->pa.struct """ # Nested structures: we allow dict, list/tuples, sequences if isinstance(schema, Features): return pa.struct( {key: get_nested_type(schema[key], level = level + 1) for key in schema} ) # Features is subclass of dict, and dict order is deterministic since Python 3.6 elif isinstance(schema, dict): return pa.struct( {key: get_nested_type(schema[key], level = level + 1) for key in schema} ) # however don't sort on struct types since the order matters elif isinstance(schema, (list, tuple)): if len(schema) != 1: raise ValueError("When defining list feature, you should just provide one example of the inner type") value_type = get_nested_type(schema[0], level = level + 1) return pa.list_(value_type) elif isinstance(schema, LargeList): value_type = get_nested_type(schema.feature, level = level + 1) return pa.large_list(value_type) elif isinstance(schema, Sequence): value_type = get_nested_type(schema.feature, level = level + 1) # We allow to reverse list of dict => dict of list for compatibility with tfds if isinstance(schema.feature, dict) and level == 1: data_type = pa.struct({f.name: pa.list_(f.type, schema.length) for f in value_type}) else: data_type = pa.list_(value_type, schema.length) return data_type # Other objects are callable which returns their data type (ClassLabel, Array2D, Translation, Arrow datatype creation methods) return schema() ``` I have honestly no idea what I am doing here, so this might produce other issues for different inputs. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35 - Python version: 3.11.11 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0 Also tested it with 3.5.0, same result.
false
3,060,515,257
https://api.github.com/repos/huggingface/datasets/issues/7568
https://github.com/huggingface/datasets/issues/7568
7,568
`IterableDatasetDict.map()` call removes `column_names` (in fact info.features)
open
6
2025-05-13T15:45:42
2025-06-30T09:33:47
null
mombip
[]
When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relies on `info.features`, it ends up broken (`None`).

**Reproduction**

1. Define an `IterableDatasetDict` with a non-None features schema.
2. `my_iterable_dataset_dict` contains a "text" column.
3. Call:

```Python
new_dict = my_iterable_dataset_dict.map(
    function=my_fn,
    with_indices=False,
    batched=True,
    batch_size=16,
)
```

4. Observe:

```Python
new_dict["train"].info.features  # {'text': Value(dtype='string', id=None)}
new_dict["train"].column_names   # ['text']
```

5. Call:

```Python
new_dict = my_iterable_dataset_dict.map(
    function=my_fn,
    with_indices=False,
    batched=True,
    batch_size=16,
    remove_columns=["foo"]
)
```

6. Observe:

```Python
new_dict["train"].info.features  # → None
new_dict["train"].column_names   # → None
```

7. Internally, in dataset_dict.py this loop omits features ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/dataset_dict.py#L2047C5-L2056C14)):

```Python
for split, dataset in self.items():
    dataset_dict[split] = dataset.map(
        function=function,
        with_indices=with_indices,
        input_columns=input_columns,
        batched=batched,
        batch_size=batch_size,
        drop_last_batch=drop_last_batch,
        remove_columns=remove_columns,
        fn_kwargs=fn_kwargs,
        # features omitted → defaults to None
    )
```

8. Then inside `IterableDataset.map()` ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2619C1-L2622C37)) the correct `info.features` is replaced by `features`, which is None:

```Python
info = self.info.copy()
info.features = features  # features is None here
return IterableDataset(..., info=info, ...)
```

**Suggestion**

It looks like this replacement was added intentionally, but maybe it should be done only if `features` is `not None`.

**Workaround:**

`SFTTrainer` calls `dataset.map()` several times and then fails on `NoneType` when iterating `dataset.column_names`. I decided to write this patch; it works for me.

```python
from datasets import IterableDataset

def patch_iterable_dataset_map():
    _orig_map = IterableDataset.map

    def _patched_map(self, *args, **kwargs):
        if "features" not in kwargs or kwargs["features"] is None:
            kwargs["features"] = self.info.features
        return _orig_map(self, *args, **kwargs)

    IterableDataset.map = _patched_map
```
false
3,058,308,538
https://api.github.com/repos/huggingface/datasets/issues/7567
https://github.com/huggingface/datasets/issues/7567
7,567
interleave_datasets seed with multiple workers
open
7
2025-05-12T22:38:27
2025-06-29T06:53:59
null
jonathanasdf
[]
### Describe the bug

Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers. Should the seed be modulated with the worker id?

### Steps to reproduce the bug

See above

### Expected behavior

See above

### Environment info

- `datasets` version: 3.5.1
- Platform: macOS-15.4.1-arm64-arm-64bit
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
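For reference, a minimal setup along the lines of the report above (a sketch only; dataset sizes, shard counts and probabilities are made up here):

```python
from datasets import Dataset, interleave_datasets
from torch.utils.data import DataLoader

ds1 = Dataset.from_dict({"n": list(range(0, 100))}).to_iterable_dataset(num_shards=4)
ds2 = Dataset.from_dict({"n": list(range(100, 200))}).to_iterable_dataset(num_shards=4)
mixed = interleave_datasets([ds1, ds2], probabilities=[0.5, 0.5], seed=42)

# Each worker gets its own shards, but the interleaving RNG is seeded the same
# way in every worker, so the dataset-picking order repeats across workers.
loader = DataLoader(mixed, batch_size=None, num_workers=2)
for i, example in enumerate(loader):
    print(example)
    if i >= 7:
        break
```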
false
3,055,279,344
https://api.github.com/repos/huggingface/datasets/issues/7566
https://github.com/huggingface/datasets/issues/7566
7,566
terminate called without an active exception; Aborted (core dumped)
open
4
2025-05-11T23:05:54
2025-06-23T17:56:02
null
alexey-milovidov
[]
### Describe the bug I use it as in the tutorial here: https://huggingface.co/docs/datasets/stream, and it ends up with abort. ### Steps to reproduce the bug 1. `pip install datasets` 2. ``` $ cat main.py #!/usr/bin/env python3 from datasets import load_dataset dataset = load_dataset('HuggingFaceFW/fineweb', split='train', streaming=True) print(next(iter(dataset))) ``` 3. `chmod +x main.py` ``` $ ./main.py README.md: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 43.1k/43.1k [00:00<00:00, 7.04MB/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25868/25868 [00:05<00:00, 4859.26it/s] Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25868/25868 [00:00<00:00, 54773.56it/s] {'text': "How AP reported in all formats from tornado-stricken regionsMarch 8, 2012\nWhen the first serious bout of tornadoes of 2012 blew through middle America in the middle of the night, they touched down in places hours from any AP bureau. Our closest video journalist was Chicago-based Robert Ray, who dropped his plans to travel to Georgia for Super Tuesday, booked several flights to the cities closest to the strikes and headed for the airport. He’d decide once there which flight to take.\nHe never got on board a plane. Instead, he ended up driving toward Harrisburg, Ill., where initial reports suggested a town was destroyed. That decision turned out to be a lucky break for the AP. Twice.\nRay was among the first journalists to arrive and he confirmed those reports -- in all formats. 
He shot powerful video, put victims on the phone with AP Radio and played back sound to an editor who transcribed the interviews and put the material on text wires. He then walked around the devastation with the Central Regional Desk on the line, talking to victims with the phone held so close that editors could transcribe his interviews in real time.\nRay also made a dramatic image of a young girl who found a man’s prosthetic leg in the rubble, propped it up next to her destroyed home and spray-painted an impromptu sign: β€œFound leg. Seriously.”\nThe following day, he was back on the road and headed for Georgia and a Super Tuesday date with Newt Gingrich’s campaign. The drive would take him through a stretch of the South that forecasters expected would suffer another wave of tornadoes.\nTo prevent running into THAT storm, Ray used his iPhone to monitor Doppler radar, zooming in on extreme cells and using Google maps to direct himself to safe routes. And then the journalist took over again.\nβ€œWhen weather like that occurs, a reporter must seize the opportunity to get the news out and allow people to see, hear and read the power of nature so that they can take proper shelter,” Ray says.\nSo Ray now started to use his phone to follow the storms. He attached a small GoPro camera to his steering wheel in case a tornado dropped down in front of the car somewhere, and took video of heavy rain and hail with his iPhone. Soon, he spotted a tornado and the chase was on. He followed an unmarked emergency vehicle to Cleveland, Tenn., where he was first on the scene of the storm's aftermath.\nAgain, the tornadoes had struck in locations that were hours from the nearest AP bureau. Damage and debris, as well as a wickedly violent storm that made travel dangerous, slowed our efforts to get to the news. That wasn’t a problem in Tennessee, where our customers were well served by an all-formats report that included this text story.\nβ€œCLEVELAND, Tenn. (AP) _ Fierce wind, hail and rain lashed Tennessee for the second time in three days, and at least 15 people were hospitalized Friday in the Chattanooga area.”\nThe byline? Robert Ray.\nFor being adept with technology, chasing after news as it literally dropped from the sky and setting a standard for all-formats reporting that put the AP ahead on the most competitive news story of the day, Ray wins this week’s $300 Best of the States prize.\nΒ© 2013 The Associated Press. All rights reserved. Terms and conditions apply. See AP.org for details.", 'id': '<urn:uuid:d66bc6fe-8477-4adf-b430-f6a558ccc8ff>', 'dump': 'CC-MAIN-2013-20', 'url': 'http://%20jwashington@ap.org/Content/Press-Release/2012/How-AP-reported-in-all-formats-from-tornado-stricken-regions', 'date': '2013-05-18T05:48:54Z', 'file_path': 's3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz', 'language': 'en', 'language_score': 0.9721424579620361, 'token_count': 717} terminate called without an active exception Aborted (core dumped) ``` ### Expected behavior I'm not a proficient Python user, so it might be my own error, but even in that case, the error message should be better. ### Environment info `Successfully installed datasets-3.6.0 dill-0.3.8 hf-xet-1.1.0 huggingface-hub-0.31.1 multiprocess-0.70.16 requests-2.32.3 xxhash-3.5.0` ``` $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=22.04 DISTRIB_CODENAME=jammy DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS" ```
false
3,051,731,207
https://api.github.com/repos/huggingface/datasets/issues/7565
https://github.com/huggingface/datasets/pull/7565
7,565
add check if repo exists for dataset uploading
open
2
2025-05-09T10:27:00
2025-06-09T14:39:23
null
Samoed
[]
Currently, I'm reuploading datasets for [`MTEB`](https://github.com/embeddings-benchmark/mteb/). Some of them have many splits (more than 20), and I'm encountering the error: `Too many requests for https://huggingface.co/datasets/repo/create`. It seems that this issue occurs because the dataset tries to recreate itself every time a split is uploaded. To resolve this, I've added a check to ensure that if the dataset already exists, it won't attempt to recreate it.
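For readers, the guard is conceptually along these lines (a sketch with a made-up repo id, not the actual diff in this PR):

```python
from huggingface_hub import HfApi

api = HfApi()
repo_id = "my-org/my-dataset"  # hypothetical repo id

# Only call the create endpoint when the repo is actually missing, so that
# uploading many splits does not hit the repo-create endpoint every time.
if not api.repo_exists(repo_id, repo_type="dataset"):
    api.create_repo(repo_id, repo_type="dataset")
```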
true
3,049,275,226
https://api.github.com/repos/huggingface/datasets/issues/7564
https://github.com/huggingface/datasets/pull/7564
7,564
Implementation of iteration over values of a column in an IterableDataset object
closed
5
2025-05-08T14:59:22
2025-05-19T12:15:02
2025-05-19T12:15:02
TopCoder2K
[]
Refers to [this issue](https://github.com/huggingface/datasets/issues/7381).
true
3,046,351,253
https://api.github.com/repos/huggingface/datasets/issues/7563
https://github.com/huggingface/datasets/pull/7563
7,563
set dev version
closed
1
2025-05-07T15:18:29
2025-05-07T15:21:05
2025-05-07T15:18:36
lhoestq
[]
null
true
3,046,339,430
https://api.github.com/repos/huggingface/datasets/issues/7562
https://github.com/huggingface/datasets/pull/7562
7,562
release: 3.6.0
closed
1
2025-05-07T15:15:13
2025-05-07T15:17:46
2025-05-07T15:15:21
lhoestq
[]
null
true
3,046,302,653
https://api.github.com/repos/huggingface/datasets/issues/7561
https://github.com/huggingface/datasets/issues/7561
7,561
NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet
closed
0
2025-05-07T15:05:42
2025-06-05T12:41:30
2025-06-05T12:41:30
cyanic-selkie
[]
### Describe the bug

When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than attempting to open a PR.

### Steps to reproduce the bug

1. Create an `IterableDataset`.
2. Call `.repeat(None)` on it.
3. Wrap it in a pytorch `DataLoader`.
4. Iterate over it.

### Expected behavior

This should work normally.

### Environment info

datasets: 3.5.0
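Until a fix lands, a monkey-patch along these lines might unblock the use case; this is a sketch and assumes the wrapper stores the inner iterable as `ex_iterable`, which I have not verified:

```python
from datasets.iterable_dataset import RepeatExamplesIterable

# Delegate num_shards to the wrapped iterable so that sharding across
# DataLoader workers can proceed. `ex_iterable` is an assumed attribute name.
RepeatExamplesIterable.num_shards = property(lambda self: self.ex_iterable.num_shards)
```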
false
3,046,265,500
https://api.github.com/repos/huggingface/datasets/issues/7560
https://github.com/huggingface/datasets/pull/7560
7,560
fix decoding tests
closed
1
2025-05-07T14:56:14
2025-05-07T14:59:02
2025-05-07T14:56:20
lhoestq
[]
null
true
3,046,177,078
https://api.github.com/repos/huggingface/datasets/issues/7559
https://github.com/huggingface/datasets/pull/7559
7,559
fix aiohttp import
closed
1
2025-05-07T14:31:32
2025-05-07T14:34:34
2025-05-07T14:31:38
lhoestq
[]
null
true
3,046,066,628
https://api.github.com/repos/huggingface/datasets/issues/7558
https://github.com/huggingface/datasets/pull/7558
7,558
fix regression
closed
1
2025-05-07T13:56:03
2025-05-07T13:58:52
2025-05-07T13:56:18
lhoestq
[]
Reported in https://github.com/huggingface/datasets/pull/7557 (I just reorganized the condition). I wanted to apply this change to the original PR, but GitHub didn't let me apply it directly, so I'm merging this one instead.
true
3,045,962,076
https://api.github.com/repos/huggingface/datasets/issues/7557
https://github.com/huggingface/datasets/pull/7557
7,557
check for empty _formatting
closed
1
2025-05-07T13:22:37
2025-05-07T13:57:12
2025-05-07T13:57:12
winglian
[]
Fixes a regression from #7553 breaking shuffling of iterable datasets <img width="884" alt="Screenshot 2025-05-07 at 9 16 52β€―AM" src="https://github.com/user-attachments/assets/d2f43c5f-4092-4efe-ac31-a32cbd025fe3" />
true
3,043,615,210
https://api.github.com/repos/huggingface/datasets/issues/7556
https://github.com/huggingface/datasets/pull/7556
7,556
Add `--merge-pull-request` option for `convert_to_parquet`
closed
2
2025-05-06T18:05:05
2025-07-18T19:09:10
2025-07-18T19:09:10
klamike
[]
Closes #7527 Note that this implementation **will only merge the last PR in the case that they get split up by `push_to_hub`**. See https://github.com/huggingface/datasets/discussions/7555 for more details.
true
3,043,089,844
https://api.github.com/repos/huggingface/datasets/issues/7554
https://github.com/huggingface/datasets/issues/7554
7,554
datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script)
closed
2
2025-05-06T14:43:38
2025-05-07T14:53:45
2025-05-07T14:53:44
sei-eschwartz
[]
### Describe the bug

`datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actually process all the splits? But I thought loading scripts were designed to avoid this.

### Steps to reproduce the bug

See [this notebook](https://colab.research.google.com/drive/14kcXp_hgcdj-kIzK0bCG6taE-CLZPVvq?usp=sharing)

Or:

```python
from datasets import load_dataset
dataset = load_dataset('jordiae/exebench', split='test_synth', trust_remote_code=True)
```

### Expected behavior

I expected only the `test_synth` split to be downloaded and processed.

### Environment info

- `datasets` version: 3.5.1
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.12
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0
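One mitigation, which changes the access pattern rather than fixing the underlying behaviour: streaming skips `download_and_prepare`, so only the requested split is read (assuming the loading script supports streaming).

```python
from datasets import load_dataset

# Stream the single split instead of materialising every split on disk.
dataset = load_dataset(
    "jordiae/exebench",
    split="test_synth",
    streaming=True,
    trust_remote_code=True,
)
print(next(iter(dataset)))
```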
false
3,042,953,907
https://api.github.com/repos/huggingface/datasets/issues/7553
https://github.com/huggingface/datasets/pull/7553
7,553
Rebatch arrow iterables before formatted iterable
closed
2
2025-05-06T13:59:58
2025-05-07T13:17:41
2025-05-06T14:03:42
lhoestq
[]
close https://github.com/huggingface/datasets/issues/7538 and https://github.com/huggingface/datasets/issues/7475
true
3,040,258,084
https://api.github.com/repos/huggingface/datasets/issues/7552
https://github.com/huggingface/datasets/pull/7552
7,552
Enable xet in push to hub
closed
1
2025-05-05T17:02:09
2025-05-06T12:42:51
2025-05-06T12:42:48
lhoestq
[]
follows https://github.com/huggingface/huggingface_hub/pull/3035 related to https://github.com/huggingface/datasets/issues/7526
true
3,038,114,928
https://api.github.com/repos/huggingface/datasets/issues/7551
https://github.com/huggingface/datasets/issues/7551
7,551
Issue with offline mode and partial dataset cached
open
4
2025-05-04T16:49:37
2025-05-13T03:18:43
null
nrv
[]
### Describe the bug

Hi, an issue related to #4760 here: when loading a single file from a dataset, I am unable to access it in offline mode afterwards.

### Steps to reproduce the bug

```python
import os
# os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx"

import datasets

dataset_name = "uonlp/CulturaX"
data_files = "fr/fr_part_00038.parquet"

ds = datasets.load_dataset(dataset_name, split='train', data_files=data_files)
print(f"Dataset loaded : {ds}")
```

Once the file has been cached, I rerun with HF_HUB_OFFLINE activated and get this error:

```
ValueError: Couldn't find cache for uonlp/CulturaX for config 'default-1e725f978350254e'
Available configs in the cache: ['default-2935e8cdcc21c613']
```

### Expected behavior

Should be able to access the previously cached files.

### Environment info

- `datasets` version: 3.2.0
- Platform: Linux-5.4.0-215-generic-x86_64-with-glibc2.31
- Python version: 3.12.0
- `huggingface_hub` version: 0.27.0
- PyArrow version: 19.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
false
3,037,017,367
https://api.github.com/repos/huggingface/datasets/issues/7550
https://github.com/huggingface/datasets/pull/7550
7,550
disable aiohttp depend for python 3.13t free-threading compat
closed
0
2025-05-03T00:28:18
2025-05-03T00:28:24
2025-05-03T00:28:24
Qubitium
[]
null
true
3,036,272,015
https://api.github.com/repos/huggingface/datasets/issues/7549
https://github.com/huggingface/datasets/issues/7549
7,549
TypeError: Couldn't cast array of type string to null on webdataset format dataset
open
1
2025-05-02T15:18:07
2025-05-02T15:37:05
null
narugo1992
[]
### Describe the bug ```python from datasets import load_dataset dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k") ``` got ``` File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 255, in pyarrow.lib.array File "pyarrow/array.pxi", line 117, in pyarrow.lib._handle_arrow_array_protocol File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 258, in __arrow_array__ out = cast_array_to_feature( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2006, in cast_array_to_feature arrays = [ File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2007, in <listcomp> _c(array.field(name) if name in array_fields else null_array, subfeature) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2066, in cast_array_to_feature casted_array_values = _c(array.values, feature.feature) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2103, in cast_array_to_feature return array_cast( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1949, in array_cast raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}") TypeError: Couldn't cast array of type string to null The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/load.py", line 2084, in load_dataset builder_instance.download_and_prepare( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 925, in download_and_prepare self._download_and_prepare( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare super()._download_and_prepare( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1001, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1487, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ``` `datasets==3.5.1` whats wrong its inner json structure is like ```yaml features: - name: "image" dtype: "image" - name: "json.id" dtype: "string" - name: "json.width" dtype: "int32" - name: "json.height" dtype: "int32" - name: "json.rating" sequence: dtype: "string" - name: "json.general_tags" sequence: dtype: "string" - name: 
"json.character_tags" sequence: dtype: "string" ``` i'm 100% sure all the jsons satisfies the abovementioned format. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k") ``` ### Expected behavior load the dataset successfully, with the abovementioned json format and webp images ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 3.5.1 - Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.16 - `huggingface_hub` version: 0.30.2 - PyArrow version: 20.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0
false
3,035,568,851
https://api.github.com/repos/huggingface/datasets/issues/7548
https://github.com/huggingface/datasets/issues/7548
7,548
Python 3.13t (free threads) Compat
open
7
2025-05-02T09:20:09
2025-05-12T15:11:32
null
Qubitium
[]
### Describe the bug Cannot install `datasets` under `python 3.13t` due to dependency on `aiohttp` and aiohttp cannot be built for free-threading python. The `free threading` support issue in `aiothttp` is active since August 2024! Ouch. https://github.com/aio-libs/aiohttp/issues/8796#issue-2475941784 `pip install dataset` ```bash (vm313t) root@gpu-base:~/GPTQModel# pip install datasets WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)")': /simple/datasets/ Collecting datasets Using cached datasets-3.5.1-py3-none-any.whl.metadata (19 kB) Requirement already satisfied: filelock in /root/vm313t/lib/python3.13t/site-packages (from datasets) (3.18.0) Requirement already satisfied: numpy>=1.17 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (2.2.5) Collecting pyarrow>=15.0.0 (from datasets) Using cached pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl.metadata (3.3 kB) Collecting dill<0.3.9,>=0.3.0 (from datasets) Using cached dill-0.3.8-py3-none-any.whl.metadata (10 kB) Collecting pandas (from datasets) Using cached pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (89 kB) Requirement already satisfied: requests>=2.32.2 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (2.32.3) Requirement already satisfied: tqdm>=4.66.3 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (4.67.1) Collecting xxhash (from datasets) Using cached xxhash-3.5.0-cp313-cp313t-linux_x86_64.whl Collecting multiprocess<0.70.17 (from datasets) Using cached multiprocess-0.70.16-py312-none-any.whl.metadata (7.2 kB) Collecting fsspec<=2025.3.0,>=2023.1.0 (from fsspec[http]<=2025.3.0,>=2023.1.0->datasets) Using cached fsspec-2025.3.0-py3-none-any.whl.metadata (11 kB) Collecting aiohttp (from datasets) Using cached aiohttp-3.11.18.tar.gz (7.7 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... 
done Requirement already satisfied: huggingface-hub>=0.24.0 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (0.30.2) Requirement already satisfied: packaging in /root/vm313t/lib/python3.13t/site-packages (from datasets) (25.0) Requirement already satisfied: pyyaml>=5.1 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (6.0.2) Collecting aiohappyeyeballs>=2.3.0 (from aiohttp->datasets) Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB) Collecting aiosignal>=1.1.2 (from aiohttp->datasets) Using cached aiosignal-1.3.2-py2.py3-none-any.whl.metadata (3.8 kB) Collecting attrs>=17.3.0 (from aiohttp->datasets) Using cached attrs-25.3.0-py3-none-any.whl.metadata (10 kB) Collecting frozenlist>=1.1.1 (from aiohttp->datasets) Using cached frozenlist-1.6.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (16 kB) Collecting multidict<7.0,>=4.5 (from aiohttp->datasets) Using cached multidict-6.4.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.3 kB) Collecting propcache>=0.2.0 (from aiohttp->datasets) Using cached propcache-0.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (10 kB) Collecting yarl<2.0,>=1.17.0 (from aiohttp->datasets) Using cached yarl-1.20.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (72 kB) Requirement already satisfied: idna>=2.0 in /root/vm313t/lib/python3.13t/site-packages (from yarl<2.0,>=1.17.0->aiohttp->datasets) (3.10) Requirement already satisfied: typing-extensions>=3.7.4.3 in /root/vm313t/lib/python3.13t/site-packages (from huggingface-hub>=0.24.0->datasets) (4.13.2) Requirement already satisfied: charset-normalizer<4,>=2 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (3.4.1) Requirement already satisfied: urllib3<3,>=1.21.1 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (2.4.0) Requirement already satisfied: certifi>=2017.4.17 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (2025.4.26) Collecting python-dateutil>=2.8.2 (from pandas->datasets) Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB) Collecting pytz>=2020.1 (from pandas->datasets) Using cached pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB) Collecting tzdata>=2022.7 (from pandas->datasets) Using cached tzdata-2025.2-py2.py3-none-any.whl.metadata (1.4 kB) Collecting six>=1.5 (from python-dateutil>=2.8.2->pandas->datasets) Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB) Using cached datasets-3.5.1-py3-none-any.whl (491 kB) Using cached dill-0.3.8-py3-none-any.whl (116 kB) Using cached fsspec-2025.3.0-py3-none-any.whl (193 kB) Using cached multiprocess-0.70.16-py312-none-any.whl (146 kB) Using cached multidict-6.4.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (220 kB) Using cached yarl-1.20.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (404 kB) Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB) Using cached aiosignal-1.3.2-py2.py3-none-any.whl (7.6 kB) Using cached attrs-25.3.0-py3-none-any.whl (63 kB) Using cached frozenlist-1.6.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (385 kB) Using cached propcache-0.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (282 kB) Using cached pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl (42.2 MB) Using cached 
pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.9 MB) Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB) Using cached pytz-2025.2-py2.py3-none-any.whl (509 kB) Using cached six-1.17.0-py2.py3-none-any.whl (11 kB) Using cached tzdata-2025.2-py2.py3-none-any.whl (347 kB) Building wheels for collected packages: aiohttp Building wheel for aiohttp (pyproject.toml) ... error error: subprocess-exited-with-error Γ— Building wheel for aiohttp (pyproject.toml) did not run successfully. β”‚ exit code: 1 ╰─> [156 lines of output] ********************* * Accelerated build * ********************* /tmp/pip-build-env-wjqi8_7w/overlay/lib/python3.13t/site-packages/setuptools/dist.py:759: SetuptoolsDeprecationWarning: License classifiers are deprecated. !! ******************************************************************************** Please consider removing the following classifiers in favor of a SPDX license expression: License :: OSI Approved :: Apache Software License See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details. ******************************************************************************** !! self._finalize_license_expression() running bdist_wheel running build running build_py creating build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/typedefs.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_parser.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_reqrep.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_ws.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_app.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_websocket.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/resolver.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/tracing.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_writer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/log.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/__init__.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_runner.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/worker.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/connector.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_middlewares.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/tcp_helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_response.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_server.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_request.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_urldispatcher.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/formdata.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/streams.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/multipart.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_routedef.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_ws.py -> 
build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/payload.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_proto.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_log.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/base_protocol.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/payload_streamer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_fileresponse.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/test_utils.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/cookiejar.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/compression_utils.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/hdrs.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/pytest_plugin.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_protocol.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/abc.py -> build/lib.linux-x86_64-cpython-313t/aiohttp creating build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/__init__.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/writer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/models.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader_c.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader_py.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket running egg_info writing aiohttp.egg-info/PKG-INFO writing dependency_links to aiohttp.egg-info/dependency_links.txt writing requirements to aiohttp.egg-info/requires.txt writing top-level names to aiohttp.egg-info/top_level.txt reading manifest file 'aiohttp.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching 'aiohttp' anywhere in distribution warning: no files found matching '*.pyi' anywhere in distribution warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.pyd' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '*.lib' found anywhere in distribution warning: no previously-included files matching '*.dll' found anywhere in distribution warning: no previously-included files matching '*.a' found anywhere in distribution warning: no previously-included files matching '*.obj' found anywhere in distribution warning: no previously-included files found matching 'aiohttp/*.html' no previously-included directories found matching 'docs/_build' adding license file 'LICENSE.txt' writing manifest file 'aiohttp.egg-info/SOURCES.txt' copying aiohttp/_cparser.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_find_header.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_headers.pxi -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_http_parser.pyx -> 
build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_http_writer.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/py.typed -> build/lib.linux-x86_64-cpython-313t/aiohttp creating build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_cparser.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_find_header.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_http_parser.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_http_writer.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/hdrs.py.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/_websocket/mask.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/mask.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader_c.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket creating build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash copying aiohttp/_websocket/.hash/mask.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash copying aiohttp/_websocket/.hash/mask.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash copying aiohttp/_websocket/.hash/reader_c.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash running build_ext building 'aiohttp._websocket.mask' extension creating build/temp.linux-x86_64-cpython-313t/aiohttp/_websocket x86_64-linux-gnu-gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O2 -Wall -g -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -fPIC -I/root/vm313t/include -I/usr/include/python3.13t -c aiohttp/_websocket/mask.c -o build/temp.linux-x86_64-cpython-313t/aiohttp/_websocket/mask.o aiohttp/_websocket/mask.c:1864:80: error: unknown type name β€˜__pyx_vectorcallfunc’; did you mean β€˜vectorcallfunc’? 1864 | static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw); | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c: In function β€˜__pyx_f_7aiohttp_10_websocket_4mask__websocket_mask_cython’: aiohttp/_websocket/mask.c:2905:3: warning: β€˜Py_OptimizeFlag’ is deprecated [-Wdeprecated-declarations] 2905 | if (unlikely(__pyx_assertions_enabled())) { | ^~ In file included from /usr/include/python3.13t/Python.h:76, from aiohttp/_websocket/mask.c:16: /usr/include/python3.13t/cpython/pydebug.h:13:37: note: declared here 13 | Py_DEPRECATED(3.12) PyAPI_DATA(int) Py_OptimizeFlag; | ^~~~~~~~~~~~~~~ aiohttp/_websocket/mask.c: At top level: aiohttp/_websocket/mask.c:4846:69: error: unknown type name β€˜__pyx_vectorcallfunc’; did you mean β€˜vectorcallfunc’? 4846 | static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c:4891:80: error: unknown type name β€˜__pyx_vectorcallfunc’; did you mean β€˜vectorcallfunc’? 
4891 | static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c: In function β€˜__Pyx_CyFunction_CallAsMethod’: aiohttp/_websocket/mask.c:5580:6: error: unknown type name β€˜__pyx_vectorcallfunc’; did you mean β€˜vectorcallfunc’? 5580 | __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c:1954:45: warning: initialization of β€˜int’ from β€˜vectorcallfunc’ {aka β€˜struct _object * (*)(struct _object *, struct _object * const*, long unsigned int, struct _object *)’} makes integer from pointer without a cast [-Wint-conversion] 1954 | #define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall) | ^ aiohttp/_websocket/mask.c:5580:32: note: in expansion of macro β€˜__Pyx_CyFunction_func_vectorcall’ 5580 | __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ aiohttp/_websocket/mask.c:5583:16: warning: implicit declaration of function β€˜__Pyx_PyVectorcall_FastCallDict’ [-Wimplicit-function-declaration] 5583 | return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ aiohttp/_websocket/mask.c:5583:16: warning: returning β€˜int’ from a function with return type β€˜PyObject *’ {aka β€˜struct _object *’} makes pointer from integer without a cast [-Wint-conversion] 5583 | return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for aiohttp Failed to build aiohttp ERROR: Failed to build installable wheels for some pyproject.toml based projects (aiohttp) ``` ### Steps to reproduce the bug See above ### Expected behavior Install ### Environment info Ubuntu 24.04
false
3,034,830,291
https://api.github.com/repos/huggingface/datasets/issues/7547
https://github.com/huggingface/datasets/pull/7547
7,547
Avoid global umask for setting file mode.
closed
1
2025-05-01T22:24:24
2025-05-06T13:05:00
2025-05-06T13:05:00
ryan-clancy
[]
This PR updates the method for setting the permissions on `cache_path` after calling `shutil.move`. The call to `shutil.move` may not preserve permissions if the source and destination are on different filesystems. Reading and resetting umask can cause race conditions, so directly read what permissions were set for the `temp_file` instead. This fixes https://github.com/huggingface/datasets/issues/7536.
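Reduced to a standalone sketch (not the actual diff in this PR), the idea is roughly:

```python
import os
import shutil
import stat

def move_preserving_mode(src: str, dst: str) -> None:
    # Read the permissions that were actually set on the temporary file instead
    # of re-deriving them from the process-global (and racy) umask, then re-apply
    # them after the move in case src and dst are on different filesystems.
    mode = stat.S_IMODE(os.stat(src).st_mode)
    shutil.move(src, dst)
    os.chmod(dst, mode)
```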
true
3,034,018,298
https://api.github.com/repos/huggingface/datasets/issues/7546
https://github.com/huggingface/datasets/issues/7546
7,546
Large memory use when loading large datasets to a ZFS pool
closed
4
2025-05-01T14:43:47
2025-05-13T13:30:09
2025-05-13T13:29:53
FredHaa
[]
### Describe the bug

When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train models using multiple large datasets.

### Steps to reproduce the bug

`uv run --with datasets==3.5.1 python`

```python
from datasets import load_dataset

load_dataset('MLCommons/peoples_speech', 'clean')
load_dataset('mozilla-foundation/common_voice_17_0', 'en')
```

### Expected behavior

I would expect that a lot less than 500GB of RAM would be required to load the dataset, or at least that the RAM usage would be cleared as soon as the dataset is loaded (and thus reside as a memory mapped file) such that other datasets can be loaded.

### Environment info

I am currently using the latest datasets==3.5.1 but I have had the same problem with multiple other versions.
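A quick diagnostic that may help narrow this down (a sketch; it assumes `psutil` is installed): compare what Arrow has actually allocated on the heap with the process RSS, to see whether the 500GB is heap usage or the memory-mapped cache being counted against the process.

```python
import os

import psutil
import pyarrow as pa
from datasets import load_dataset

ds = load_dataset("MLCommons/peoples_speech", "clean")

rss = psutil.Process(os.getpid()).memory_info().rss
print(f"pyarrow allocated: {pa.total_allocated_bytes() / 1e9:.2f} GB")
print(f"process RSS:       {rss / 1e9:.2f} GB")
```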
false
3,031,617,547
https://api.github.com/repos/huggingface/datasets/issues/7545
https://github.com/huggingface/datasets/issues/7545
7,545
Networked Pull Through Cache
open
0
2025-04-30T15:16:33
2025-04-30T15:16:33
null
wrmedford
[ "enhancement" ]
### Feature request

Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service.

Enable a three-tier cache lookup for datasets:

1. Local on-disk cache
2. Configurable network cache proxy
3. Official Hugging Face Hub

### Motivation

- Distributed training & ephemeral jobs: In high-performance or containerized clusters, relying solely on a local disk cache either becomes a streaming bottleneck or incurs a heavy cold-start penalty as each job must re-download datasets.
- Traffic & cost reduction: A pull-through network cache lets multiple consumers share a common cache layer, reducing duplicate downloads from the Hub and lowering egress costs.
- Better streaming adoption: By offloading repeat dataset pulls to a locally managed cache proxy, streaming workloads can achieve higher throughput and more predictable latency.
- Proven pattern: Similar proxy-cache solutions (e.g. Harbor’s Proxy Cache for Docker images) have demonstrated reliability and performance at scale: https://goharbor.io/docs/2.1.0/administration/configure-proxy-cache/

### Your contribution

I’m happy to draft the initial PR for adding HF_DATASET_CACHE_NETWORK_LOCATION support in datasets and sketch out a minimal cache-service prototype. I have limited bandwidth so I would be looking for collaborators if anyone else is interested.
false
3,027,024,285
https://api.github.com/repos/huggingface/datasets/issues/7544
https://github.com/huggingface/datasets/pull/7544
7,544
Add try_original_type to DatasetDict.map
closed
3
2025-04-29T04:39:44
2025-05-05T14:42:49
2025-05-05T14:42:49
yoshitomo-matsubara
[]
This PR resolves #7472 for `DatasetDict`.

The previously merged PR #7483 added `try_original_type` to ArrowDataset, but `DatasetDict` misses `try_original_type`.

Cc: @lhoestq
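Hypothetical usage once this lands (the flag name comes from #7483; exact semantics may differ):

```python
from datasets import Dataset, DatasetDict

ds_dict = DatasetDict({"train": Dataset.from_dict({"n": [1, 2, 3]})})

# With try_original_type=False the mapped column is allowed to keep its new
# type instead of being cast back to the original integer feature.
ds_dict = ds_dict.map(lambda x: {"n": x["n"] / 2}, try_original_type=False)
print(ds_dict["train"].features)
```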
true
3,026,867,706
https://api.github.com/repos/huggingface/datasets/issues/7543
https://github.com/huggingface/datasets/issues/7543
7,543
The memory-disk mapping failure issue of the map function(resolved, but there are some suggestions.οΌ‰
closed
0
2025-04-29T03:04:59
2025-04-30T02:22:17
2025-04-30T02:22:17
jxma20
[]
### Describe the bug

## bug

When the map function processes a large dataset, it temporarily stores the data in a cache file on the disk. After the data is stored, the memory occupied by it is released. Therefore, when using the map function to process a large-scale dataset, only a dataset space of the size of `writer_batch_size` will be occupied in memory. However, I found that the map function does not actually reduce memory usage when I used it. At first, I thought there was a bug in the program, causing a memory leak, meaning the memory was not released after the data was stored in the cache. But later, I used a Linux command to check for recently modified files during program execution and found that no new files were created or modified. This indicates that the program did not store the dataset in the disk cache.

## bug solved

After modifying the parameters of the map function multiple times, I discovered the `cache_file_name` parameter. By changing it, the cache file can be stored in the specified directory. After making this change, I noticed that the cache file appeared. Initially, I found this quite incredible, but then I wondered if the cache file might have failed to be stored in a certain folder. This could be related to the fact that I don't have root privileges. So, I delved into the source code of the map function to find out where the cache file would be stored by default. Eventually, I found the function `def _get_cache_file_path(self, fingerprint):`, which automatically generates the storage path for the cache file. The output was as follows: `/tmp/hf_datasets-j5qco9ug/cache-f2830487643b9cc2.arrow`. My hypothesis was confirmed: the lack of write permissions indeed prevented the cache file from being stored, which in turn prevented the release of memory. Therefore, changing the storage location to a folder where I have write access resolved the issue.

### Steps to reproduce the bug

my code:

```python
train_data = train_data.map(process_fun, remove_columns=['image_name', 'question_type', 'concern', 'question', 'candidate_answers', 'answer'])
```

### Expected behavior

Although my bug has been resolved, it still took me nearly a week to search for relevant information and debug the program. However, if a warning or error message about insufficient cache file write permissions could be provided during program execution, I might have been able to identify the cause more quickly. Therefore, I hope this aspect can be improved. I am documenting this bug here so that friends who encounter similar issues can solve their problems in a timely manner.

### Environment info

python: 3.10.15
datasets: 3.5.0
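Concretely, the workaround described above looks something like the following (toy data and an illustrative path; any directory the user can write to works):

```python
import os

from datasets import Dataset

cache_dir = "/home/me/hf_cache"  # illustrative; must be writable by the user
os.makedirs(cache_dir, exist_ok=True)

ds = Dataset.from_dict({"question": ["a", "b"], "answer": ["x", "y"]})

# Point the map cache at a writable location instead of the default temp dir.
ds = ds.map(
    lambda ex: {"text": ex["question"] + " " + ex["answer"]},
    remove_columns=["question", "answer"],
    cache_file_name=os.path.join(cache_dir, "cache-train.arrow"),
)
```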
false
3,025,054,630
https://api.github.com/repos/huggingface/datasets/issues/7542
https://github.com/huggingface/datasets/pull/7542
7,542
set dev version
closed
1
2025-04-28T14:03:48
2025-04-28T14:08:37
2025-04-28T14:04:00
lhoestq
[]
null
true
3,025,045,919
https://api.github.com/repos/huggingface/datasets/issues/7541
https://github.com/huggingface/datasets/pull/7541
7,541
release: 3.5.1
closed
1
2025-04-28T14:00:59
2025-04-28T14:03:38
2025-04-28T14:01:54
lhoestq
[]
null
true
3,024,862,966
https://api.github.com/repos/huggingface/datasets/issues/7540
https://github.com/huggingface/datasets/pull/7540
7,540
support pyarrow 20
closed
1
2025-04-28T13:01:11
2025-04-28T13:23:53
2025-04-28T13:23:52
lhoestq
[]
fix

```
TypeError: ArrayExtensionArray.to_pylist() got an unexpected keyword argument 'maps_as_pydicts'
```
true
3,023,311,163
https://api.github.com/repos/huggingface/datasets/issues/7539
https://github.com/huggingface/datasets/pull/7539
7,539
Fix IterableDataset state_dict shard_example_idx counting
closed
2
2025-04-27T20:41:18
2025-05-06T14:24:25
2025-05-06T14:24:24
Harry-Yang0518
[]
# Fix IterableDataset's state_dict shard_example_idx reporting ## Description This PR fixes issue #7475 where the `shard_example_idx` value in `IterableDataset`'s `state_dict()` always equals the number of samples in a shard, even if only a few examples have been consumed. The issue is in the `_iter_arrow` method of the `ArrowExamplesIterable` class where it updates the `shard_example_idx` state by the full length of the batch (`len(pa_table)`) even when we're only partway through processing the examples. ## Changes Modified the `_iter_arrow` method of `ArrowExamplesIterable` to: 1. Track the actual number of examples processed 2. Only increment the `shard_example_idx` by the number of examples actually yielded 3. Handle partial batches correctly ## How to Test I've included a simple test case that demonstrates the fix: ```python from datasets import Dataset # Create a test dataset ds = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=1) # Iterate through part of the dataset for idx, example in enumerate(ds): print(example) if idx == 2: # Stop after 3 examples (0, 1, 2) state_dict = ds.state_dict() print("Checkpoint state_dict:", state_dict) break # Before the fix, the output would show shard_example_idx: 6 # After the fix, it shows shard_example_idx: 3, correctly reflecting the 3 processed examples ``` ## Implementation Details 1. Added logic to track the number of examples actually seen in the current shard 2. Modified the state update to only count examples actually yielded 3. Improved handling of partial batches and skipped examples This fix ensures that checkpointing and resuming works correctly with exactly the expected number of examples, rather than skipping ahead to the end of the batch.
true
3,023,280,056
https://api.github.com/repos/huggingface/datasets/issues/7538
https://github.com/huggingface/datasets/issues/7538
7,538
`IterableDataset` drops samples when resuming from a checkpoint
closed
1
2025-04-27T19:34:49
2025-05-06T14:04:05
2025-05-06T14:03:42
mariosasko
[ "bug" ]
When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted.

In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one (after formatting). However, the child increments the `shard_example_idx` counter (in its `iter_arrow`) before returning the batch for the whole batch size, which leads to a portion of samples being skipped if the iteration (of the parent iterable) is stopped mid-batch.

Perhaps one way to avoid this would be by signalling the child iterable which samples (within the chunk) are processed by the parent and which are not, so that it can adjust the `shard_example_idx` counter accordingly. This would also mean the chunk needs to be sliced when resuming, but this is straightforward to implement.

The following is a minimal reproducer of the bug:

```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node

ds = Dataset.from_dict({"n": list(range(24))})
ds = ds.to_iterable_dataset(num_shards=4)

world_size = 4
rank = 0
ds_rank = split_dataset_by_node(ds, rank, world_size)

it = iter(ds_rank)
examples = []
for idx, example in enumerate(it):
    examples.append(example)
    if idx == 2:
        state_dict = ds_rank.state_dict()
        break

ds_rank.load_state_dict(state_dict)
it_resumed = iter(ds_rank)

examples_resumed = examples[:]
for example in it:
    examples.append(example)
for example in it_resumed:
    examples_resumed.append(example)

print("ORIGINAL ITER EXAMPLES:", examples)
print("RESUMED ITER EXAMPLES:", examples_resumed)
```
false
3,018,792,966
https://api.github.com/repos/huggingface/datasets/issues/7537
https://github.com/huggingface/datasets/issues/7537
7,537
`datasets.map(..., num_proc=4)` multi-processing fails
open
1
2025-04-25T01:53:47
2025-05-06T13:12:08
null
faaany
[]
The following code fails in python 3.11+ ```python tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) ``` Error log: ```bash Traceback (most recent call last): File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap self.run() File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.12/dist-packages/multiprocess/pool.py", line 114, in worker task = get() ^^^^^ File "/usr/local/lib/python3.12/dist-packages/multiprocess/queues.py", line 371, in get return _ForkingPickler.loads(res) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 327, in loads return load(file, ignore, **kwds) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 313, in load return Unpickler(file, ignore=ignore, **kwds).load() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 525, in load obj = StockUnpickler.load(self) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 659, in _create_code if len(args) == 16: return CodeType(*args) ^^^^^^^^^^^^^^^ TypeError: code() argument 13 must be str, not int ``` After upgrading dill to the latest 0.4.0 with "pip install --upgrade dill", it can pass. So it seems that there is a compatibility issue between dill 0.3.4 and python 3.11+, because python 3.10 works fine. Is the dill deterministic issue mentioned in https://github.com/huggingface/datasets/blob/main/setup.py#L117) still valid? Any plan to unpin?
false
3,018,425,549
https://api.github.com/repos/huggingface/datasets/issues/7536
https://github.com/huggingface/datasets/issues/7536
7,536
[Errno 13] Permission denied: on `.incomplete` file
closed
4
2025-04-24T20:52:45
2025-05-06T13:05:01
2025-05-06T13:05:01
ryan-clancy
[]
### Describe the bug

When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.

It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can sometimes be created with `000` permissions, leading to the permission denied error (the user running the code is still the owner of the file). Deleting that particular file and re-running the code with no changes will usually succeed.

Is there some race condition happening between the [umask](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L416), which is process-global, and the [file creation](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L404)?

```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.12/site-packages/datasets/load.py:2084: in load_dataset
    builder_instance.download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:925: in download_and_prepare
    self._download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:1649: in _download_and_prepare
    super()._download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:979: in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
.venv/lib/python3.12/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:120: in _split_generators
    downloaded_files = dl_manager.download(files)
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:159: in download
    downloaded_path_or_paths = map_nested(
.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:514: in map_nested
    _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:382: in _single_map_nested
    return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:206: in _download_batched
    return thread_map(
.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:69: in thread_map
    return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:51: in _executor_map
    return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
.venv/lib/python3.12/site-packages/tqdm/std.py:1181: in __iter__
    for obj in iterable:
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:619: in result_iterator
    yield _result_or_cancel(fs.pop())
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:317: in _result_or_cancel
    return fut.result(timeout)
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:449: in result
    return self.__get_result()
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:401: in __get_result
    raise self._exception
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/thread.py:59: in run
    result = self.fn(*self.args, **self.kwargs)
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:229: in _download_single
    out = cached_path(url_or_filename, download_config=download_config)
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:206: in cached_path
    output_path = get_from_cache(
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:412: in get_from_cache
    fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:331: in fsspec_get
    fs.get_file(path, temp_file.name, callback=callback)
.venv/lib/python3.12/site-packages/fsspec/asyn.py:118: in wrapper
    return sync(self.loop, func, *args, **kwargs)
.venv/lib/python3.12/site-packages/fsspec/asyn.py:103: in sync
    raise return_result
.venv/lib/python3.12/site-packages/fsspec/asyn.py:56: in _runner
    result[0] = await coro
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <s3fs.core.S3FileSystem object at 0x7f27c18b2e70>
rpath = '<my-bucket>/<my-prefix>/img_1.jpg'
lpath = '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete'
callback = <datasets.utils.file_utils.TqdmCallback object at 0x7f27c00cdbe0>
version_id = None, kwargs = {}
_open_file = <function S3FileSystem._get_file.<locals>._open_file at 0x7f27628d1120>
body = <StreamingBody at 0x7f276344fa80 for ClientResponse at 0x7f27c015fce0>
content_length = 521923, failed_reads = 0, bytes_read = 0

    async def _get_file(
        self, rpath, lpath, callback=_DEFAULT_CALLBACK, version_id=None, **kwargs
    ):
        if os.path.isdir(lpath):
            return
        bucket, key, vers = self.split_path(rpath)

        async def _open_file(range: int):
            kw = self.req_kw.copy()
            if range:
                kw["Range"] = f"bytes={range}-"
            resp = await self._call_s3(
                "get_object",
                Bucket=bucket,
                Key=key,
                **version_id_kw(version_id or vers),
                **kw,
            )
            return resp["Body"], resp.get("ContentLength", None)

        body, content_length = await _open_file(range=0)
        callback.set_size(content_length)

        failed_reads = 0
        bytes_read = 0

        try:
>           with open(lpath, "wb") as f0:
E           PermissionError: [Errno 13] Permission denied: '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete'

.venv/lib/python3.12/site-packages/s3fs/core.py:1355: PermissionError
```

### Steps to reproduce the bug

I believe this is a race condition and cannot reliably re-produce it, but it happens fairly frequently in our GitHub Actions tests and can also be re-produced (with lesser frequency) on cloud VMs.

### Expected behavior

The dataset loads properly with no permission denied error.

### Environment info

- `datasets` version: 3.5.0
- Platform: Linux-5.10.0-34-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.12.10
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
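For what it's worth, the suspected interaction is easy to reproduce in isolation. The snippet below is a minimal, self-contained sketch (not the actual `datasets`/`file_utils.py` code; file names and loop counts are made up) showing how reading a process-global umask in one thread can race with file creation in another:

```python
import os
import stat
import tempfile
import threading

# The usual trick to *read* the umask is os.umask(new); os.umask(old), which
# briefly installs `new` as the process-wide mask. Any file another thread
# creates during that window inherits it -- with new=0o666 that yields a file
# with 000 permissions.

def poll_umask(stop_event):
    while not stop_event.is_set():
        current = os.umask(0o666)  # temporarily masks out rw for everyone
        os.umask(current)          # restore immediately

def create_files(tmpdir, modes):
    for i in range(500):
        path = os.path.join(tmpdir, f"download_{i}.incomplete")  # hypothetical file names
        with open(path, "wb") as f:
            f.write(b"x")
        modes.append(stat.S_IMODE(os.stat(path).st_mode))

if __name__ == "__main__":
    stop_event = threading.Event()
    poller = threading.Thread(target=poll_umask, args=(stop_event,))
    poller.start()
    modes = []
    with tempfile.TemporaryDirectory() as tmpdir:
        create_files(tmpdir, modes)
    stop_event.set()
    poller.join()
    # With a default 022 umask every mode should be 0o644; any other value
    # (e.g. 0o000) means the race fired at least once.
    print(sorted({oct(m) for m in modes}))
```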
false
3,018,289,872
https://api.github.com/repos/huggingface/datasets/issues/7535
https://github.com/huggingface/datasets/pull/7535
7,535
Change dill version in requirements
open
1
2025-04-24T19:44:28
2025-05-19T14:51:29
null
JGrel
[]
Change dill version to >=0.3.9,<0.4.5 and check for errors
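Presumably this boils down to a one-line change to the dependency specifier, along the lines of the sketch below (the exact file and surrounding entries are assumptions, not the actual diff):

```python
# setup.py (hypothetical excerpt)
REQUIRED_PKGS = [
    # ...
    "dill>=0.3.9,<0.4.5",
    # ...
]
```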
true
3,017,259,407
https://api.github.com/repos/huggingface/datasets/issues/7534
https://github.com/huggingface/datasets/issues/7534
7,534
TensorFlow RaggedTensor Support (batch-level)
open
4
2025-04-24T13:14:52
2025-06-30T17:03:39
null
Lundez
[ "enhancement" ]
### Feature request

Hi,

Currently `datasets` does not support RaggedTensor output at the batch level. When building an Object Detection dataset (with TensorFlow) I need to enable RaggedTensors, as that's how bboxes & classes are expected from the Keras model's point of view.

Currently an error is thrown saying that "Nested Data is not supported". It'd be very helpful if this was fixed! :)

### Motivation

Enabling Object Detection pipelines for TensorFlow.

### Your contribution

With guidance I'd happily help make the PR. The current implementation with the DataCollator and the later enforcing of `np.array` is the problematic part (at the end of `np_get_batch` in `tf_utils.py`), as `numpy` doesn't support raggedness.
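For context, here is a small sketch of the kind of batch-level output being asked for, assuming variable-length bounding boxes and class ids per image (the example data is made up; this is not current `datasets` behaviour):

```python
import tensorflow as tf

# Two examples with a different number of objects each -- the batch has to be
# ragged along the "number of boxes" dimension.
batch = [
    {"bboxes": [[0.1, 0.2, 0.4, 0.5]], "classes": [3]},
    {"bboxes": [[0.0, 0.0, 0.3, 0.3], [0.5, 0.5, 0.9, 0.9]], "classes": [1, 7]},
]

bboxes = tf.ragged.constant([ex["bboxes"] for ex in batch], ragged_rank=1)
classes = tf.ragged.constant([ex["classes"] for ex in batch])

print(bboxes.shape)   # (2, None, 4)
print(classes.shape)  # (2, None)
```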
false
3,015,075,086
https://api.github.com/repos/huggingface/datasets/issues/7533
https://github.com/huggingface/datasets/pull/7533
7,533
Add custom fingerprint support to `from_generator`
open
3
2025-04-23T19:31:35
2025-07-10T09:29:35
null
simonreise
[]
This PR adds a `dataset_id_suffix` parameter to the `Dataset.from_generator` function.

`Dataset.from_generator` passes all of its arguments to `BuilderConfig.create_config_id`, including the generator function itself. `BuilderConfig.create_config_id` tries to hash all the args, which can take a large amount of time or even cause a `MemoryError` if the dataset processed in the generator function is large enough.

This PR allows the user to pass a custom fingerprint (`dataset_id_suffix`) to be used as a suffix in the dataset name instead of the one generated by hashing the args. This PR is a possible solution to #7513.
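Intended usage would look roughly like the sketch below (the `dataset_id_suffix` parameter is the one proposed in this PR and is not part of a released `datasets` API; the generator and suffix value are made up):

```python
from datasets import Dataset

def gen():
    # stands in for a generator over a very large corpus whose arguments are
    # expensive (or impossible) to hash
    for i in range(1_000):
        yield {"text": f"example {i}"}

# Skip hashing the generator and its arguments by supplying an explicit
# suffix for the config id / cache fingerprint.
ds = Dataset.from_generator(gen, dataset_id_suffix="my-corpus-v1")
```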
true
3,009,546,204
https://api.github.com/repos/huggingface/datasets/issues/7532
https://github.com/huggingface/datasets/pull/7532
7,532
Document the HF_DATASETS_CACHE environment variable in the datasets cache documentation
closed
3
2025-04-22T00:23:13
2025-05-06T15:54:38
2025-05-06T15:54:38
Harry-Yang0518
[]
This pull request updates the Datasets documentation to include the `HF_DATASETS_CACHE` environment variable. While the current documentation only mentions `HF_HOME` for overriding the default cache directory, `HF_DATASETS_CACHE` is also a supported and useful option for specifying a custom cache location for datasets stored in Arrow format. This addition is based on the discussion in (https://github.com/huggingface/datasets/issues/7457), where users noted the absence of this variable in the documentation despite its functionality. The update adds a new section to `cache.mdx` that explains how to use `HF_DATASETS_CACHE` with an example. This change aims to improve clarity and help users better manage their cache directories when working in shared environments or with limited local storage. Closes #7457.
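For reference, the documented usage amounts to pointing the variable at a custom directory before `datasets` is imported (the path below is just an example location):

```python
import os

# Must be set before `datasets` is imported (or exported in the shell),
# since the cache location is read at import time.
os.environ["HF_DATASETS_CACHE"] = "/mnt/shared/hf_datasets_cache"

from datasets import load_dataset

ds = load_dataset("rajpurkar/squad", split="train")
print(ds.cache_files)  # Arrow files now live under /mnt/shared/hf_datasets_cache
```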
true
3,008,914,887
https://api.github.com/repos/huggingface/datasets/issues/7531
https://github.com/huggingface/datasets/issues/7531
7,531
Deepspeed reward training hangs at end of training with Dataset.from_list
open
2
2025-04-21T17:29:20
2025-06-29T06:20:45
null
Matt00n
[]
There seems to be a weird interaction between Deepspeed, the `Dataset.from_list` method and trl's `RewardTrainer`. On a multi-GPU setup (10 A100s), training always hangs at the very end of training until it times out. The training itself works fine until the end of training, and running the same script with Deepspeed on a single GPU works without hanging. The issue persisted across a wide range of Deepspeed configs and training arguments. The issue went away when storing the exact same dataset as a JSON and using `dataset = load_dataset("json", ...)`.

Here is my training script:

```python
import pickle
import os
import random
import warnings

import torch
from datasets import load_dataset, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer, ModelConfig

####################################### Reward model #################################################

# Explicitly set arguments
model_name_or_path = "Qwen/Qwen2.5-1.5B"
output_dir = "Qwen2-0.5B-Reward-LoRA"
per_device_train_batch_size = 2
num_train_epochs = 5
gradient_checkpointing = True
learning_rate = 1.0e-4
logging_steps = 25
eval_strategy = "steps"
eval_steps = 50
max_length = 2048
torch_dtype = "auto"
trust_remote_code = False

model_args = ModelConfig(
    model_name_or_path=model_name_or_path,
    model_revision=None,
    trust_remote_code=trust_remote_code,
    torch_dtype=torch_dtype,
    lora_task_type="SEQ_CLS",  # Make sure task type is seq_cls
)

training_args = RewardConfig(
    output_dir=output_dir,
    per_device_train_batch_size=per_device_train_batch_size,
    num_train_epochs=num_train_epochs,
    gradient_checkpointing=gradient_checkpointing,
    learning_rate=learning_rate,
    logging_steps=logging_steps,
    eval_strategy=eval_strategy,
    eval_steps=eval_steps,
    max_length=max_length,
    gradient_checkpointing_kwargs=dict(use_reentrant=False),
    center_rewards_coefficient=0.01,
    fp16=False,
    bf16=True,
    save_strategy="no",
    dataloader_num_workers=0,
    # deepspeed="./configs/deepspeed_config.json",
)

################
# Model & Tokenizer
################
model_kwargs = dict(
    revision=model_args.model_revision,
    use_cache=False if training_args.gradient_checkpointing else True,
    torch_dtype=model_args.torch_dtype,
)
tokenizer = AutoTokenizer.from_pretrained(
    model_args.model_name_or_path, use_fast=True
)
model = AutoModelForSequenceClassification.from_pretrained(
    model_args.model_name_or_path, num_labels=1, trust_remote_code=model_args.trust_remote_code, **model_kwargs
)
# Align padding tokens between tokenizer and model
model.config.pad_token_id = tokenizer.pad_token_id

# If post-training a base model, use ChatML as the default template
if tokenizer.chat_template is None:
    model, tokenizer = setup_chat_format(model, tokenizer)

if model_args.use_peft and model_args.lora_task_type != "SEQ_CLS":
    warnings.warn(
        "You are using a `task_type` that is different than `SEQ_CLS` for PEFT. This will lead to silent bugs"
        " Make sure to pass --lora_task_type SEQ_CLS when using this script with PEFT.",
        UserWarning,
    )

##############
# Load dataset
##############
with open('./prefs.pkl', 'rb') as fh:
    loaded_data = pickle.load(fh)

random.shuffle(loaded_data)

dataset = []
for a_wins, a, b in loaded_data:
    if a_wins == 0:
        a, b = b, a
    dataset.append({'chosen': a, 'rejected': b})

dataset = Dataset.from_list(dataset)

# Split the dataset into training and evaluation sets
train_eval_split = dataset.train_test_split(test_size=0.15, shuffle=True, seed=42)

# Access the training and evaluation datasets
train_dataset = train_eval_split['train']
eval_dataset = train_eval_split['test']

##########
# Training
##########
trainer = RewardTrainer(
    model=model,
    processing_class=tokenizer,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```

Replacing `dataset = Dataset.from_list(dataset)` with

```python
with open('./prefs.json', 'w') as fh:
    json.dump(dataset, fh)

dataset = load_dataset("json", data_files="./prefs.json", split='train')
```

resolves the issue.
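One observable difference between the two code paths, in case it helps with debugging (a sketch, not a confirmed diagnosis of the hang): `Dataset.from_list` is backed by a purely in-memory Arrow table, while `load_dataset("json", ...)` materialises the data in the cache and memory-maps it, which is what each Deepspeed rank then reads.

```python
from datasets import Dataset, load_dataset

in_memory = Dataset.from_list([{"chosen": "a", "rejected": "b"}])
print(in_memory.cache_files)  # [] -> no on-disk backing, in-memory Arrow table

on_disk = load_dataset("json", data_files="./prefs.json", split="train")
print(on_disk.cache_files)    # [{'filename': '.../cache/.../*.arrow'}] -> memory-mapped from the cache
```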
false
3,007,452,499
https://api.github.com/repos/huggingface/datasets/issues/7530
https://github.com/huggingface/datasets/issues/7530
7,530
How to solve "Spaces stuck in Building" problems
closed
3
2025-04-21T03:08:38
2025-04-22T07:49:52
2025-04-22T07:49:52
ghost
[]
### Describe the bug

Public Spaces may get stuck in "Building" after restarting, with an error log like the following:

build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401 Unauthorized

### Steps to reproduce the bug

Restarting the Space or doing a factory rebuild does not avoid it.

### Expected behavior

Fix this problem.

### Environment info

It can still happen with no `requirements.txt`; Python / Gradio Spaces.
false
3,007,118,969
https://api.github.com/repos/huggingface/datasets/issues/7529
https://github.com/huggingface/datasets/issues/7529
7,529
audio folder builder cannot detect custom split name
open
0
2025-04-20T16:53:21
2025-04-20T16:53:21
null
phineas-pta
[]
### Describe the bug

When using the audio folder builder (`load_dataset("audiofolder", data_dir="/path/to/folder")`), it cannot detect custom split names other than train/validation/test.

### Steps to reproduce the bug

I have the following folder structure:

```
my_dataset/
β”œβ”€β”€ train/
β”‚   β”œβ”€β”€ lorem.wav
β”‚   β”œβ”€β”€ …
β”‚   └── metadata.csv
β”œβ”€β”€ test/
β”‚   β”œβ”€β”€ ipsum.wav
β”‚   β”œβ”€β”€ …
β”‚   └── metadata.csv
β”œβ”€β”€ validation/
β”‚   β”œβ”€β”€ dolor.wav
β”‚   β”œβ”€β”€ …
β”‚   └── metadata.csv
└── custom/
    β”œβ”€β”€ sit.wav
    β”œβ”€β”€ …
    └── metadata.csv
```

and load it with `ds = load_dataset("audiofolder", data_dir="/path/to/my_dataset")`.

### Expected behavior

I get a `ds` with only the 3 splits train/validation/test; the `custom` split is missing. If I rename the train/validation/test folders, those splits also disappear when I re-create `ds`.

### Environment info

- `datasets` version: 3.5.0
- Platform: Windows-11-10.0.26100-SP0
- Python version: 3.12.8
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
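A possible workaround until this is addressed (a sketch, not a confirmed fix) is to spell the splits out explicitly with `data_files` instead of relying on split auto-detection, so the non-standard `custom` split is picked up too:

```python
from datasets import load_dataset

# Glob patterns follow the folder layout described above.
data_files = {
    "train": "my_dataset/train/**",
    "validation": "my_dataset/validation/**",
    "test": "my_dataset/test/**",
    "custom": "my_dataset/custom/**",
}
ds = load_dataset("audiofolder", data_files=data_files)
print(ds)  # expected: a DatasetDict with all four splits, including "custom"
```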
false
3,006,433,485
https://api.github.com/repos/huggingface/datasets/issues/7528
https://github.com/huggingface/datasets/issues/7528
7,528
Data Studio Error: Convert JSONL incorrectly
open
1
2025-04-19T13:21:44
2025-05-06T13:18:38
null
zxccade
[]
### Describe the bug

Hi there,

I uploaded a dataset here: https://huggingface.co/datasets/V-STaR-Bench/V-STaR, but I found that Data Studio incorrectly converts the "bboxes" values for the whole dataset. As a result, anyone who downloads the dataset via the API gets the wrong "bboxes" values in the data file.

Could you help me address the issue? Many thanks.

### Steps to reproduce the bug

The JSONL file [V_STaR_test_release.jsonl](https://huggingface.co/datasets/V-STaR-Bench/V-STaR/blob/main/V_STaR_test_release.jsonl) has the correct "bboxes" values for each sample. But in Data Studio, the "bboxes" values have changed, and loading the dataset via the API also returns the wrong values.

### Expected behavior

Fix the bug so that the dataset downloads correctly.

### Environment info

- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-427.22.1.el9_4.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.16
- `huggingface_hub` version: 0.29.3
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2023.10.0
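To narrow down where the values change, one check that may help is comparing the raw JSONL with what the loader returns (the split name below is an assumption; adjust it to the actual dataset configuration):

```python
import json
from datasets import load_dataset

# First record straight from the uploaded JSONL
with open("V_STaR_test_release.jsonl") as f:
    raw_first = json.loads(f.readline())

# Same record through the Hub / datasets loader ("test" split name is assumed)
ds = load_dataset("V-STaR-Bench/V-STaR", split="test")

print(raw_first["bboxes"])
print(ds[0]["bboxes"])
```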
false
3,005,242,422
https://api.github.com/repos/huggingface/datasets/issues/7527
https://github.com/huggingface/datasets/issues/7527
7,527
Auto-merge option for `convert-to-parquet`
closed
4
2025-04-18T16:03:22
2025-07-18T19:09:03
2025-07-18T19:09:03
klamike
[ "enhancement" ]
### Feature request

Add a command-line option, e.g. `--auto-merge-pull-request` that enables automatic merging of the commits created by the `convert-to-parquet` tool.

### Motivation

Large datasets may result in dozens of PRs due to the splitting mechanism. Each of these has to be manually accepted via the website.

### Your contribution

Happy to look into submitting a PR if this is of interest to maintainers.
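Under the hood the new flag could boil down to something like the sketch below, which merges the open PRs on the dataset repo with `huggingface_hub` (the repo id is a placeholder, and the actual wiring into `datasets-cli convert_to_parquet` would differ):

```python
from huggingface_hub import HfApi

api = HfApi()
repo_id = "user/my-large-dataset"  # placeholder

# Merge every open pull request on the dataset repo -- in practice this would
# be restricted to the PRs that the conversion run itself just opened.
for discussion in api.get_repo_discussions(repo_id=repo_id, repo_type="dataset"):
    if discussion.is_pull_request and discussion.status == "open":
        api.merge_pull_request(repo_id=repo_id, discussion_num=discussion.num, repo_type="dataset")
```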
false
3,005,107,536
https://api.github.com/repos/huggingface/datasets/issues/7526
https://github.com/huggingface/datasets/issues/7526
7,526
Faster downloads/uploads with Xet storage
open
0
2025-04-18T14:46:42
2025-05-12T12:09:09
null
lhoestq
[]
![Image](https://github.com/user-attachments/assets/6e247f4a-d436-4428-a682-fe18ebdc73a9)

## Xet is out !

Over the past few weeks, Hugging Face’s [Xet Team](https://huggingface.co/xet-team) took a major step forward by [migrating the first Model and Dataset repositories off LFS and to Xet storage](https://huggingface.co/posts/jsulz/911431940353906).

See more information on the HF blog: https://huggingface.co/blog/xet-on-the-hub

You can already enable Xet on your Hugging Face account to benefit from faster downloads and uploads :)

We finalized an official integration with the `huggingface_hub` library that means you get the benefits of Xet without any significant changes to your current workflow.

## Previous versions of `datasets`

For older versions of `datasets` you might see this warning in `push_to_hub()`:

```
Uploading files as bytes or binary IO objects is not supported by Xet Storage.
```

This means the `huggingface_hub` + Xet integration isn't enabled for your version of `datasets`. You can fix this by updating to `datasets>=3.6.0` and `huggingface_hub>=0.31.0`:

```
pip install -U datasets huggingface_hub
```

## The future

Stay tuned for more Xet optimizations, especially on [Xet-optimized Parquet](https://huggingface.co/blog/improve_parquet_dedupe)
false
3,003,032,248
https://api.github.com/repos/huggingface/datasets/issues/7525
https://github.com/huggingface/datasets/pull/7525
7,525
Fix indexing in split commit messages
closed
1
2025-04-17T17:06:26
2025-04-28T14:26:27
2025-04-28T14:26:27
klamike
[]
When a large commit is split up, it seems the commit index in the message is zero-based while the total number is one-based. I came across this running `convert-to-parquet` and was wondering why there was no `6-of-6` commit. This PR fixes that by adding one to the commit index, so both are one-based. Current behavior: <img width="463" alt="Screenshot 2025-04-17 at 1 00 17β€―PM" src="https://github.com/user-attachments/assets/7f3d389e-cb92-405d-a3c2-f2b1cdf0cb79" />
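Illustratively, the fix amounts to a one-based index in the message formatting (the real message template lives in the library internals and may differ from this sketch):

```python
total_commits = 6
for i in range(total_commits):
    # before: f"... (part {i}-of-{total_commits})"  -> 0-of-6 ... 5-of-6
    # after: one-based index, so the last message reads 6-of-6
    print(f"Upload part {i + 1}-of-{total_commits}")
```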
true
3,002,067,826
https://api.github.com/repos/huggingface/datasets/issues/7524
https://github.com/huggingface/datasets/pull/7524
7,524
correct use with polars example
closed
0
2025-04-17T10:19:19
2025-04-28T13:48:34
2025-04-28T13:48:33
SiQube
[]
null
true
2,999,616,692
https://api.github.com/repos/huggingface/datasets/issues/7523
https://github.com/huggingface/datasets/pull/7523
7,523
mention av in video docs
closed
1
2025-04-16T13:11:12
2025-04-16T13:13:45
2025-04-16T13:11:42
lhoestq
[]
null
true
2,998,169,017
https://api.github.com/repos/huggingface/datasets/issues/7522
https://github.com/huggingface/datasets/pull/7522
7,522
Preserve formatting in concatenated IterableDataset
closed
1
2025-04-16T02:37:33
2025-05-19T15:07:38
2025-05-19T15:07:37
francescorubbo
[]
Fixes #7515
true