| Column | Type | Values |
| --- | --- | --- |
| id | int64 | 599M – 3.18B |
| number | int64 | 1 – 7.65k |
| title | string | length 1 – 290 |
| state | string | 2 values |
| body | string | length 0 – 228k |
| is_pull_request | bool | 1 class |
| created_at | string (date) | 2020-04-14 10:18:02 – 2025-06-26 12:23:48 |
| updated_at | string (date) | 2020-04-27 16:04:17 – 2025-06-26 14:02:38 |
| closed_at | string | length 20 – 20 |
| user_login | string | length 3 – 26 |
| author_association | string | 4 values |
| pr_url | string | length 46 – 49 |
| pr_merged_at | string | length 20 – 20 |
| comments_count | int64 | 0 – 70 |
| reactions_total | int64 | 0 – 61 |
| reactions_plus1 | int64 | 0 – 39 |
| reactions_heart | int64 | 0 – 22 |
| draft | bool | 2 classes |
| locked | bool | 1 class |
| labels | list | length 0 – 4 |
| html_url | string | length 46 – 51 |
| is_pr_url | bool | 2 classes |
| comments | list | length 0 – 30 |
3,178,952,517
7,647
loading mozilla-foundation--common_voice_11_0 fails
open
### Describe the bug Hello everyone, i am trying to load `mozilla-foundation--common_voice_11_0` and it fails. Reproducer ``` import datasets datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True) ``` and it fails with ``` File ~/opt/envs/.../lib/python3.10/site-packages/datasets/utils/file_utils.py:827, in _add_retries_to_file_obj_read_method.<locals>.read_with_retries(*args, **kwargs) 825 for retry in range(1, max_retries + 1): 826 try: --> 827 out = read(*args, **kwargs) 828 break 829 except ( 830 _AiohttpClientError, 831 asyncio.TimeoutError, 832 requests.exceptions.ConnectionError, 833 requests.exceptions.Timeout, 834 ) as err: File /usr/lib/python3.10/codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final) 319 def decode(self, input, final=False): 320 # decode input (taking the buffer into account) 321 data = self.buffer + input --> 322 (result, consumed) = self._buffer_decode(data, self.errors, final) 323 # keep undecoded input until the next call 324 self.buffer = data[consumed:] UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte ``` When i remove streaming then everything is good but i need `streaming=True` ### Steps to reproduce the bug ``` import datasets datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True) ``` ### Expected behavior Expected that it will download dataset ### Environment info datasets==3.6.0 python3.10 on all platforms linux/win/mac
true
2025-06-26T12:23:48Z
2025-06-26T12:24:14Z
null
pavel-esir
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7647
false
[]
3,178,036,854
7,646
Update data_files.py #7066
open
Fixes #7066 This PR introduces automatic **subset-level grouping** for folder-based dataset builders by: 1. Adding a utility function `group_files_by_subset()` that clusters files by root name (ignoring digits and shard suffixes). 2. Integrating this logic into `FolderBasedBuilder._split_generators()` to yield one split per subset. 3. Adding unit tests for the grouping function. 4. Updating the documentation to describe this new behavior under `docs/source/repository_structure.mdx`. --- ### Motivation Datasets with files like: ``` train0.jsonl train1.jsonl animals.jsonl metadata.jsonl ``` will now be **automatically grouped** as: - `"train"` subset → `train0.jsonl`, `train1.jsonl` - `"animals"` subset → `animals.jsonl` - `"metadata"` subset → `metadata.jsonl` This enables structured multi-subset loading even when the dataset doesn't follow traditional `train/validation/test` split conventions. --- ### Files Changed - `src/datasets/data_files.py`: added `group_files_by_subset()` utility - `src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py`: grouped files before yielding splits - `tests/test_data_files.py`: added unit test `test_group_files_by_subset` - `docs/source/repository_structure.mdx`: documented subset grouping for maintainers and users --- ### Benefits - More flexible and robust dataset split logic - Enables logical grouping of user-uploaded files without nested folder structure - Backward-compatible with all existing folder-based configs --- Ready for review ✅
true
2025-06-26T07:01:37Z
2025-06-26T14:02:38Z
null
ArjunJagdale
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7646
null
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7646
true
[ "It adds automatic grouping of files into subsets based on their root name (e.g., `train0.jsonl`, `train1.jsonl` → `\"train\"`), as discussed above. The logic is integrated into `FolderBasedBuilder` and is fully tested + documented.\r\n\r\nLet me know if any changes are needed — happy to iterate!", "Hi ! I believ...
3,176,810,164
7,645
`ClassLabel` docs: Correct value for unknown labels
open
This small change fixes the documentation to be consistent with what actually happens in `encode_example`. https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129
true
2025-06-25T20:01:35Z
2025-06-25T20:01:35Z
null
l-uuz
NONE
https://github.com/huggingface/datasets/pull/7645
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7645
true
[]
3,176,363,492
7,644
fix sequence ci
closed
fix error from https://github.com/huggingface/datasets/pull/7643
true
2025-06-25T17:07:55Z
2025-06-25T17:10:30Z
2025-06-25T17:08:01Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7644
2025-06-25T17:08:01Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7644
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7644). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,176,354,431
7,643
Backward compat sequence instance
closed
useful to still get `isinstance(Sequence(Value("int64")), Sequence)` for downstream libs like evaluate
true
2025-06-25T17:05:09Z
2025-06-25T17:07:40Z
2025-06-25T17:05:44Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7643
2025-06-25T17:05:43Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7643
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7643). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,176,025,890
7,642
fix length for ci
closed
true
2025-06-25T15:10:38Z
2025-06-25T15:11:53Z
2025-06-25T15:11:51Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7642
2025-06-25T15:11:51Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7642
true
[]
3,175,953,405
7,641
update docs and docstrings
closed
true
2025-06-25T14:48:58Z
2025-06-25T14:51:46Z
2025-06-25T14:49:33Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7641
2025-06-25T14:49:33Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7641
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7641). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,175,914,924
7,640
better features repr
closed
following the addition of List in #7634 before: ```python In [3]: ds.features Out[3]: {'json': {'id': Value(dtype='string', id=None), 'metadata:transcript': [{'end': Value(dtype='float64', id=None), 'start': Value(dtype='float64', id=None), 'transcript': Value(dtype='string', id=None), 'words': [{'end': Value(dtype='float64', id=None), 'score': Value(dtype='float64', id=None), 'start': Value(dtype='float64', id=None), 'word': Value(dtype='string', id=None)}]}], 'metadata:vad': [{'end': Value(dtype='float64', id=None), 'start': Value(dtype='float64', id=None)}]}, 'mp4': Value(dtype='binary', id=None), 'npz': {'boxes_and_keypoints:box': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'boxes_and_keypoints:is_valid_box': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'boxes_and_keypoints:keypoints': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'movement:EmotionArousalToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:EmotionValenceToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:FAUToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:FAUValue': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:alignment_head_rotation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:alignment_translation': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'movement:emotion_arousal': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:emotion_scores': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:emotion_valence': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:expression': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:frame_latent': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:gaze_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:head_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:hypernet_features': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:is_valid': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'smplh:body_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'smplh:global_orient': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'smplh:is_valid': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'smplh:left_hand_pose': 
Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'smplh:right_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'smplh:translation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None)}, 'wav': Audio(sampling_rate=None, mono=True, decode=True, id=None), '__key__': Value(dtype='string', id=None), '__url__': Value(dtype='string', id=None)} ``` after: ```python In [3]: ds.features Out[3]: {'json': {'id': Value('string'), 'metadata:transcript': List({'end': Value('float64'), 'start': Value('float64'), 'transcript': Value('string'), 'words': List({'end': Value('float64'), 'score': Value('float64'), 'start': Value('float64'), 'word': Value('string')})}), 'metadata:vad': List({'end': Value('float64'), 'start': Value('float64')})}, 'mp4': Value('binary'), 'npz': {'boxes_and_keypoints:box': List(List(Value('float32'))), 'boxes_and_keypoints:is_valid_box': List(Value('bool')), 'boxes_and_keypoints:keypoints': List(List(List(Value('float32')))), 'movement:EmotionArousalToken': List(List(Value('float32'))), 'movement:EmotionValenceToken': List(List(Value('float32'))), 'movement:FAUToken': List(List(Value('float32'))), 'movement:FAUValue': List(List(Value('float32'))), 'movement:alignment_head_rotation': List(List(Value('float32'))), 'movement:alignment_translation': List(List(List(Value('float32')))), 'movement:emotion_arousal': List(List(Value('float32'))), 'movement:emotion_scores': List(List(Value('float32'))), 'movement:emotion_valence': List(List(Value('float32'))), 'movement:expression': List(List(Value('float32'))), 'movement:frame_latent': List(List(Value('float32'))), 'movement:gaze_encodings': List(List(Value('float32'))), 'movement:head_encodings': List(List(Value('float32'))), 'movement:hypernet_features': List(List(Value('float32'))), 'movement:is_valid': List(List(Value('float32'))), 'smplh:body_pose': List(List(List(Value('float32')))), 'smplh:global_orient': List(List(Value('float32'))), 'smplh:is_valid': List(Value('bool')), 'smplh:left_hand_pose': List(List(List(Value('float32')))), 'smplh:right_hand_pose': List(List(List(Value('float32')))), 'smplh:translation': List(List(Value('float32')))}, 'wav': Audio(sampling_rate=None, decode=True, stream_index=None), '__key__': Value('string'), '__url__': Value('string')} ```
true
2025-06-25T14:37:32Z
2025-06-25T14:46:47Z
2025-06-25T14:46:45Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7640
2025-06-25T14:46:45Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7640
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7640). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,175,616,169
7,639
fix save_infos
closed
true
2025-06-25T13:16:26Z
2025-06-25T13:19:33Z
2025-06-25T13:16:33Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7639
2025-06-25T13:16:33Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7639
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,172,645,391
7,638
Add ignore_decode_errors option to Image feature for robust decoding #7612
open
This PR implements support for robust image decoding in the `Image` feature, as discussed in issue #7612. ## 🔧 What was added - A new boolean field: `ignore_decode_errors` (default: `False`) - If set to `True`, any exceptions during decoding will be caught, and `None` will be returned instead of raising an error ```python features = Features({ "image": Image(decode=True, ignore_decode_errors=True), }) ```` This enables robust iteration over potentially corrupted datasets — especially useful when streaming datasets like WebDataset or image-heavy public sets where sample corruption is common. ## 🧪 Behavior * If `ignore_decode_errors=False` (default), decoding behaves exactly as before * If `True`, decoding errors are caught, and a warning is emitted: ``` [Image.decode_example] Skipped corrupted image: ... ``` ## 🧵 Linked issue Closes #7612 Let me know if you'd like a follow-up test PR. Happy to write one!
true
2025-06-24T16:47:51Z
2025-06-24T16:48:03Z
null
ArjunJagdale
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7638
null
1
1
1
0
false
false
[]
https://github.com/huggingface/datasets/pull/7638
true
[ "cc @lhoestq" ]
3,171,883,522
7,637
Introduce subset_name as an alias of config_name
open
### Feature request Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata). ### Motivation The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically called config_name in the datasets library. This inconsistency has caused confusion for many users, especially those unfamiliar with the internal terminology. I have repeatedly received questions from users trying to understand what "config" means, and why it doesn’t match what they see as "subset" on the Hub. Renaming everything to `subset_name` might be too disruptive, but introducing subset_name as a clear alias for config_name could significantly improve user experience without breaking backward compatibility. This change would: - Align terminology across the Hub UI and datasets codebase - Reduce user confusion, especially for newcomers - Make documentation and examples more intuitive
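A hypothetical usage sketch of the proposed alias (illustrative only: `subset_name` does not exist yet, and the dataset/config names are placeholders):

```python
from datasets import load_dataset

# Today: the second argument / `name=` selects what the Hub UI calls a "Subset"
ds = load_dataset("some_org/some_dataset", name="en")          # placeholder dataset/config

# Proposed: `subset_name` as a clear alias for the same thing
ds = load_dataset("some_org/some_dataset", subset_name="en")   # not implemented yet
```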
true
2025-06-24T12:49:01Z
2025-06-24T12:53:03Z
null
albertvillanova
MEMBER
null
null
1
1
0
1
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7637
false
[ "I second this! When you come from the Hub, the intuitive question is \"how do I set the subset name\", and it's not easily answered from the docs: `subset_name` would answer this directly." ]
3,170,878,167
7,636
"open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable"
open
When I run the following code, an error occurs: "TypeError: argument of type 'module' is not iterable" ```python print("open" in globals()["__builtins__"]) ``` Traceback (most recent call last): File "./main.py", line 2, in <module> print("open" in globals()["__builtins__"]) ^^^^^^^^^^^^^^^^^^^^^^ TypeError: argument of type 'module' is not iterable But this code runs fine in datasets, I don't understand why [src/datasets/utils/patching.py#L96](https://github.com/huggingface/datasets/blob/3.6.0/src/datasets/utils/patching.py#L96)
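For context, a minimal sketch of why the two contexts differ (assuming standard CPython behavior: `__builtins__` is the `builtins` module in the `__main__` script, but a plain dict inside imported modules such as `datasets.utils.patching`), plus a check that works in both:

```python
# Sketch only: normalize __builtins__ to a dict before doing membership tests.
builtins_ns = globals()["__builtins__"]
if not isinstance(builtins_ns, dict):
    builtins_ns = vars(builtins_ns)  # __builtins__ is the builtins module in __main__
print("open" in builtins_ns)  # works both in a script and inside an imported module
```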
true
2025-06-24T08:09:39Z
2025-06-26T04:08:23Z
null
kuanyan9527
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7636
false
[]
3,170,486,408
7,635
Fix: Preserve float columns in JSON loader when values are integer-like (e.g. 0.0, 1.0)
open
This PR fixes a bug in the JSON loader where columns containing float values like `[0.0, 1.0, 2.0]` were being implicitly coerced to `int`, due to pandas or Arrow type inference. This caused issues downstream in statistics computation (e.g., dataset-viewer) where such columns were incorrectly labeled as `"int"` instead of `"float"`. ### 🔍 What was happening: When the JSON loader falls back to `pandas_read_json()` (after `pa.read_json()` fails), pandas/Arrow can coerce float values to integers if all values are integer-like (e.g., `0.0 == 0`). ### ✅ What this PR does: - Adds a check in the fallback path of `_generate_tables()` - Ensures that columns made entirely of floats are preserved as `"float64"` even if they are integer-like (e.g. `0.0`, `1.0`) - This prevents loss of float semantics when creating the Arrow table ### 🧪 Reproducible Example: ```json [{"col": 0.0}, {"col": 1.0}, {"col": 2.0}] ```` Previously loaded as: * `int` Now correctly loaded as: * `float` Fixes #6937
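A rough sketch of the idea (not the PR's actual diff; it only illustrates re-asserting `float64` for columns whose raw JSON values were all floats before building the Arrow table):

```python
import io
import json

import pandas as pd
import pyarrow as pa

raw = '[{"col": 0.0}, {"col": 1.0}, {"col": 2.0}]'
records = json.loads(raw)
df = pd.read_json(io.StringIO(raw))

# If every raw value in a column is a Python float, keep the column as float64
# even when the values are integer-like (0.0, 1.0, ...).
for name in df.columns:
    if all(isinstance(rec.get(name), float) for rec in records):
        df[name] = df[name].astype("float64")

table = pa.Table.from_pandas(df)
print(table.schema)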
true
2025-06-24T06:16:48Z
2025-06-24T06:16:48Z
null
ArjunJagdale
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7635
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7635
true
[]
3,169,389,653
7,634
Replace Sequence by List
closed
Sequence is just a utility that we need to keep for backward compatibility, and `[ ]` was used instead but doesn't allow passing the length of the list. This PR removes most mentions of Sequence and usages of `[ ]` and defines a proper List type instead. Before: `Sequence(Value("int64"))` or `[Value("int64")]`; now: `List(Value("int64"))`. This PR preserves full backward compatibility, and the 4.0.0 release is a good occasion for it.
true
2025-06-23T20:35:48Z
2025-06-25T13:59:13Z
2025-06-25T13:59:11Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7634
2025-06-25T13:59:11Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7634
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,168,399,637
7,633
Proposal: Small Tamil Discourse Coherence Dataset.
open
I’m a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages. - Size: 50 samples - Format: CSV with columns (text1, text2, label) - Use case: Training NLP models for coherence I’ll use GitHub’s web editor and Google Colab. Please confirm if this fits.
true
2025-06-23T14:24:40Z
2025-06-23T14:24:40Z
null
bikkiNitSrinagar
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7633
false
[]
3,168,283,589
7,632
Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets
open
### Feature request Currently, when using dataset.cast_column("image", Image(decode=True)), the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples are common. reference : https://discuss.huggingface.co/t/handle-errors-when-loading-images-404-corrupted-etc/50318/5 https://discuss.huggingface.co/t/handling-non-existing-url-in-image-dataset-while-cast-column/69185 Proposed Feature Introduce a mechanism (e.g., a continue_on_error=True flag or global error handling mode) in Image(decode=True) that: Skips invalid images and sets them as None, or Logs the error but allows the rest of the dataset to be processed without interruption. Example Usage from datasets import load_dataset, Image dataset = load_dataset("my_dataset") dataset = dataset.cast_column("image", Image(decode=True, continue_on_error=True)) Benefits Ensures robust large-scale image dataset processing. Improves developer productivity by avoiding custom retry/error-handling code. Aligns with best practices in dataset preprocessing pipelines that tolerate minor data corruption. Potential Implementation Options Internally wrap the decoding in a try/except block. Return None or a placeholder on failure. Optionally allow custom error callbacks or logging. ### Motivation Robustness: Large-scale image datasets often contain a small fraction of corrupt files or unreachable URLs. Halting on the first error forces users to write custom workarounds or preprocess externally. Simplicity: A built-in flag removes boilerplate try/except logic around every decode step. Performance: Skipping invalid samples inline is more efficient than a two-pass approach (filter then decode). ### Your contribution 1. API Change Extend datasets.features.Image(decode=True) to accept continue_on_error: bool = False. 2. Behavior If continue_on_error=False (default), maintain current behavior: any decode error raises an exception. If continue_on_error=True, wrap decode logic in try/except: On success: store the decoded image. On failure: log a warning (e.g., via logging.warning) and set the field to None (or a sentinel value). 3. Optional Enhancements Allow a callback hook: Image(decode=True, continue_on_error=True, on_error=lambda idx, url, exc: ...) Emit metrics or counts of skipped images.
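Until something like `continue_on_error` exists, one possible workaround sketch (assumptions: the dataset name is a placeholder, and validity is checked with Pillow before re-enabling decoding):

```python
import io

from datasets import Image, load_dataset
from PIL import Image as PILImage

ds = load_dataset("my_dataset", split="train")        # placeholder dataset
ds = ds.cast_column("image", Image(decode=False))     # keep raw {"path", "bytes"} dicts

def is_valid(example):
    data = example["image"]
    try:
        img = PILImage.open(io.BytesIO(data["bytes"])) if data.get("bytes") else PILImage.open(data["path"])
        img.verify()                                   # cheap integrity check, no full decode
        return True
    except Exception:
        return False

ds = ds.filter(is_valid)                               # drop corrupted/unreadable images
ds = ds.cast_column("image", Image(decode=True))       # decode only the surviving samples
```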
true
2025-06-23T13:49:24Z
2025-06-23T16:26:53Z
null
ganiket19
NONE
null
null
0
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7632
false
[]
3,165,127,657
7,631
Pass user-agent from DownloadConfig into fsspec storage_options
open
Fixes part of issue #6046 ### Problem The `user-agent` defined in `DownloadConfig` was not passed down to fsspec-based filesystems like `HfFileSystem`, which prevents proper identification/tracking of client requests. ### Solution Added support for injecting the `user-agent` into `storage_options["headers"]` within `_prepare_single_hop_path_and_storage_options()` based on the `protocol`. Now, when using `hf://`, `http://`, or `https://`, the custom user-agent is passed automatically. ### Code Location Modified: - `src/datasets/utils/file_utils.py` Used `get_datasets_user_agent(...)` to ensure proper formatting and fallback logic.
true
2025-06-21T14:22:25Z
2025-06-21T14:25:28Z
null
ArjunJagdale
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7631
null
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7631
true
[ "- This PR assumes that `HfFileSystem` in `huggingface_hub` supports receiving `headers` in `storage_options`. If not, a follow-up PR can be opened to add this support to `HfFileSystem.__init__`.\r\n- No test was added for this since it’s a config passthrough. If needed, I’d be happy to add one." ]
3,164,650,900
7,630
[bug] resume from ckpt skips samples if .map is applied
open
### Describe the bug resume from ckpt skips samples if .map is applied Maybe related: https://github.com/huggingface/datasets/issues/7538 ### Steps to reproduce the bug ```python from datasets import Dataset from datasets.distributed import split_dataset_by_node # Create dataset with map transformation def create_dataset(): ds = Dataset.from_dict({"id": list(range(100))}) ds = ds.to_iterable_dataset(num_shards=4) ds = ds.map(lambda x: x) #comment it out to get desired behavior ds = split_dataset_by_node(ds, rank=0, world_size=2) return ds ds = create_dataset() # Iterate and save checkpoint after 10 samples it = iter(ds) for idx, sample in enumerate(it): if idx == 9: # Checkpoint after 10 samples checkpoint = ds.state_dict() print(f"Checkpoint saved at sample: {sample['id']}") break # Continue with original iterator original_next_samples = [] for idx, sample in enumerate(it): original_next_samples.append(sample["id"]) if idx >= 4: break # Resume from checkpoint ds_new = create_dataset() ds_new.load_state_dict(checkpoint) # Get samples from resumed iterator it_new = iter(ds_new) resumed_next_samples = [] for idx, sample in enumerate(it_new): resumed_next_samples.append(sample["id"]) if idx >= 4: break print(f"\nExpected next samples: {original_next_samples}") print(f"Actual next samples: {resumed_next_samples}") print( f"\n❌ BUG: {resumed_next_samples[0] - original_next_samples[0]} samples were skipped!" ) ``` With map ``` Checkpoint saved at sample: 9 Expected next samples: [10, 11, 12, 13, 14] Actual next samples: [50, 51, 52, 53, 54] ❌ BUG: 40 samples were skipped! ``` ### Expected behavior without map ``` Expected next samples: [10, 11, 12, 13, 14] Actual next samples: [10, 11, 12, 13, 14] ❌ BUG: 0 samples were skipped! ``` ### Environment info datasets == 3.6.0
true
2025-06-21T01:50:03Z
2025-06-22T06:38:25Z
null
felipemello1
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7630
false
[ "#selfassign\n\nHi! I'd like to work on this issue." ]
3,161,169,782
7,629
Add test for `as_iterable_dataset()` method in DatasetBuilder
open
This PR adds a test for the new `as_iterable_dataset()` method introduced in PR #7628. The test: - Loads a builder using `load_dataset_builder("c4", "en")` - Runs `download_and_prepare()` - Streams examples using `builder.as_iterable_dataset(split="train[:100]")` - Verifies streamed examples contain the "text" field This ensures that the builder correctly streams data from cached Arrow files.
true
2025-06-19T19:23:55Z
2025-06-19T19:23:55Z
null
ArjunJagdale
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7629
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7629
true
[]
3,161,156,461
7,628
Add `as_iterable_dataset()` method to DatasetBuilder for streaming from cached Arrow files
open
This PR implements `builder.as_iterable_dataset(split=...)` as discussed in #5481. It allows users to load an `IterableDataset` directly from cached Arrow files (using ArrowReader and ArrowExamplesIterable), without loading the full dataset into memory. This is useful for large-scale training scenarios where memory is constrained. A test has also been added in `test_builder.py`. Related to: #5481
true
2025-06-19T19:15:41Z
2025-06-19T19:15:41Z
null
ArjunJagdale
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7628
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7628
true
[]
3,160,544,390
7,627
Creating a HF Dataset from lakeFS with S3 storage takes too much time!
closed
Hi, I’m new to HF datasets and I tried to create a dataset based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_. Here I’m using ±30,000 PIL images from MNIST data, however it takes around 12 min to execute, which is a lot! From what I understand, it is loading the images into the cache and then building the dataset. Please find the execution screenshot below. Is there a way to optimize this, or am I doing something wrong? Thanks! ![Image](https://github.com/user-attachments/assets/c79257c8-f023-42a9-9e6f-0898b3ea93fe)
true
2025-06-19T14:28:41Z
2025-06-23T12:39:10Z
2025-06-23T12:39:10Z
Thunderhead-exe
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7627
false
[ "### > Update\n\nThe bottleneck, from what I understand, was making one network request per file\n\nFor 30k images, this meant 30k separate GET requests to the MinIO server through the S3 API, and that was killing the performance\n\nUsing webDataset to transform the large number of files to few .tar files and passi...
3,159,322,138
7,626
feat(map): reuse unchanged columns when input_columns specified to reduce disk usage (#6013)
open
## Summary This PR addresses [#6013](https://github.com/huggingface/datasets/issues/6013) by reusing unchanged columns from the original dataset in the `map()` method when `input_columns` is specified. ## What’s Implemented - Injected logic at the end of `Dataset.map()` to: - Identify untouched columns not in `input_columns` or `remove_columns` - Select those columns from the original dataset - Concatenate them with the transformed result using `pyarrow.concat_tables` ## Example Behavior ```python ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]}) ds2 = ds.map(lambda x: {"c": x["a"] + 10}, input_columns=["a"], remove_columns=["a"]) print(ds2.column_names) # Output: ['b', 'c'] ```` Column `b` is reused from the original dataset. ## Notes * This keeps disk usage and caching minimal by avoiding full dataset duplication. * Only triggered when `input_columns` is set. --- cc @lhoestq @mariosasko for review 🙂
true
2025-06-19T07:41:45Z
2025-06-26T06:43:16Z
null
ArjunJagdale
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7626
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7626
true
[]
3,159,016,001
7,625
feat: Add h5folder dataset loader for HDF5 support
open
### Related Issue Closes #3113 ### What does this PR do? This PR introduces a new dataset loader module called **`h5folder`** to support loading datasets stored in **HDF5 (.h5)** format. It allows users to do: ```python from datasets import load_dataset dataset = load_dataset("h5folder", data_dir="path/to/") ```` ### 🧩 Design Overview * Implemented inside `datasets/packaged_modules/h5folder/h5folder.py` * Based on the `GeneratorBasedBuilder` API * Uses `h5py` to read HDF5 files and yield examples * Expects datasets such as `id`, `data`, and `label` inside `data.h5` * Converts numpy arrays to Python types before yielding ### 🧪 Example `.h5` Structure (for local testing) ```python import h5py import numpy as np with h5py.File("data.h5", "w") as f: f.create_dataset("id", data=np.arange(100)) f.create_dataset("data", data=np.random.randn(100, 10)) f.create_dataset("label", data=np.random.randint(0, 2, size=100)) ``` ### ✅ Testing - The loader logic follows the structure of existing modules like `imagefolder` - Will rely on Hugging Face CI to validate integration - Manually testing planned once merged or during feedback ### 📁 Files Added * `datasets/src/datasets/packaged_modules/h5folder/h5folder.py` ### 📌 Component(s) Affected * `area/datasets` * `area/load` ### 📦 Release Note Classification * `rn/feature` – Adds support for loading `.h5` datasets via `load_dataset("h5folder", ...)` --- Let me know if any changes or improvements are needed — happy to iterate. Thanks for reviewing!
true
2025-06-19T05:39:10Z
2025-06-26T05:44:26Z
null
ArjunJagdale
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7625
null
3
2
0
2
false
false
[]
https://github.com/huggingface/datasets/pull/7625
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7625). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I guess test failed cause import os, import h5py, and import datasets lines are not alp...
3,156,136,624
7,624
#Dataset Make "image" column appear first in dataset preview UI
closed
Hi! #Dataset I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub. However, at the moment, the `"image"` column is not the first—in fact, it appears last, which is not ideal for the presentation I’d like to achieve. I have a couple of questions: Is there a way to force the dataset card to display the `"image"` column first? Is there currently any way to control or influence the column order in the dataset preview UI? Does the order of keys in the .jsonl file or the features argument affect the display order? Thanks again for your time and help! :blush:
true
2025-06-18T09:25:19Z
2025-06-20T07:46:43Z
2025-06-20T07:46:43Z
jcerveto
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7624
false
[ "Hi ! It should follow the same order as the order of the keys in the metadata file", "Hi! Thank you for your answer. \n\nAs you said it, I I forced every key in every JSON to have an order using `collections. OrderedDict` in Python. Now, it works!\n\nTY" ]
3,154,519,684
7,623
fix: raise error in FolderBasedBuilder when data_dir and data_files are missing
closed
### Related Issues/PRs Fixes #6152 --- ### What changes are proposed in this pull request? This PR adds a dedicated validation check in the `_info()` method of the `FolderBasedBuilder` class to ensure that users provide either `data_dir` or `data_files` when loading folder-based datasets (such as `audiofolder`, `imagefolder`, etc.). --- ### Why this change? Previously, when calling: ```python load_dataset("audiofolder") ```` without specifying `data_dir` or `data_files`, the loader would silently fallback to the **current working directory**, leading to: * Long loading times * Unexpected behavior (e.g., scanning unrelated files) This behavior was discussed in issue #6152. As suggested by maintainers, the fix has now been implemented directly inside the `FolderBasedBuilder._info()` method — keeping the logic localized to the specific builder instead of a generic loader function. --- ### How is this PR tested? * ✅ Manually tested by calling `load_dataset("audiofolder")` with no `data_dir` or `data_files` → a `ValueError` is now raised early. * ✅ Existing functionality (with valid input) remains unaffected. --- ### Does this PR require documentation update? * [x] No --- ### Release Notes #### Is this a user-facing change? * [x] Yes > Folder-based datasets now raise an explicit error if neither `data_dir` nor `data_files` are specified, preventing unintended fallback to the current working directory. --- #### What component(s) does this PR affect? * [x] `area/datasets` * [x] `area/load` --- <a name="release-note-category"></a> #### How should the PR be classified? * [x] `rn/bug-fix` - A user-facing bug fix --- #### Should this be included in the next patch release? * [x] Yes
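A minimal sketch of the kind of guard this describes (illustrative only; the actual change lives inside `FolderBasedBuilder._info()`):

```python
def _check_data_source(data_dir, data_files):
    # Fail fast instead of silently scanning the current working directory.
    if data_dir is None and data_files is None:
        raise ValueError(
            "Loading a folder-based dataset (e.g. audiofolder, imagefolder) requires "
            "either `data_dir` or `data_files` to be specified."
        )
```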
true
2025-06-17T19:16:34Z
2025-06-18T14:18:41Z
2025-06-18T14:18:41Z
ArjunJagdale
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7623
2025-06-18T14:18:41Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7623
true
[ "@lhoestq Moved the logic to FolderBasedBuilder._info() as discussed in previous PR (#7618). Let me know if anything else is needed — happy to update!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7623). All of your documentation changes will be reflected on that endpoin...
3,154,398,557
7,622
Guard against duplicate builder_kwargs/config_kwargs in load_dataset_…
open
…builder (#4910) ### What does this PR do? Fixes an edge case in `load_dataset_builder` by raising a `TypeError` if the same key exists in both `builder_kwargs` and `config_kwargs`. ### Implementation details - Added a guard clause in `load_dataset_builder` to detect duplicate keys between `builder_kwargs` and `config_kwargs` - Wrote a unit test in `tests/test_load_duplicate_keys.py` to verify the exception is raised correctly ### Fixes Closes #4910 ### Reviewers @zach-huggingface @SunMarc Would appreciate your review if you have time - thanks!
true
2025-06-17T18:28:35Z
2025-06-17T18:38:56Z
null
Shohail-Ismail
NONE
https://github.com/huggingface/datasets/pull/7622
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7622
true
[]
3,153,780,963
7,621
minor docs data aug
closed
true
2025-06-17T14:46:57Z
2025-06-17T14:50:28Z
2025-06-17T14:47:11Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7621
2025-06-17T14:47:11Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7621
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7621). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,153,565,183
7,620
Fixes in docs
closed
Before the 4.0 release (I also made minor improvements to `features` so they don't show `id=None` in their `__repr__()`).
true
2025-06-17T13:41:54Z
2025-06-17T13:58:26Z
2025-06-17T13:58:24Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7620
2025-06-17T13:58:24Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7620
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7620). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,153,058,517
7,619
`from_list` fails while `from_generator` works for large datasets
open
### Describe the bug I am constructing a large time series dataset and observed that first constructing a list of entries and then using `Dataset.from_list` led to a crash as the number of items became large. However, this is not a problem when using `Dataset.from_generator`. ### Steps to reproduce the bug #### Snippet A (crashes) ```py from tqdm.auto import tqdm import numpy as np import datasets def data_generator(): for i in tqdm(range(10_000_000)): length = np.random.randint(2048) series = np.random.rand(length) yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")} data_list = list(data_generator()) ds = datasets.Dataset.from_list(data_list) ``` The last line crashes with ``` ArrowInvalid: Value 2147483761 too large to fit in C integer type ``` #### Snippet B (works) ```py from tqdm.auto import tqdm import numpy as np import datasets def data_generator(): for i in tqdm(range(10_000_000)): length = np.random.randint(2048) series = np.random.rand(length) yield {"target": series, "item_id": str(i), "start": np.datetime64("2000", "ms")} ds = datasets.Dataset.from_generator(data_generator) ``` ### Expected behavior I expected both the approaches to work or to fail similarly. ### Environment info ``` - `datasets` version: 3.6.0 - Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35 - Python version: 3.11.11 - `huggingface_hub` version: 0.32.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0 ```
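One possible workaround sketch while this is open (assumption: the crash comes from a single oversized Arrow chunk whose 32-bit list offsets overflow, so building smaller datasets and concatenating them keeps each chunk below the limit; `data_list` is the list built in Snippet A):

```python
import datasets

def chunked(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i : i + size]

parts = [datasets.Dataset.from_list(chunk) for chunk in chunked(data_list, 100_000)]
ds = datasets.concatenate_datasets(parts)
```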
true
2025-06-17T10:58:55Z
2025-06-18T09:29:24Z
null
abdulfatir
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7619
false
[ "@lhoestq any thoughts on this? " ]
3,148,912,897
7,618
fix: raise error when folder-based datasets are loaded without data_dir or data_files
open
### Related Issues/PRs <!-- Uncomment 'Resolve' if this PR can close the linked items. --> <!-- Resolve --> #6152 --- ### What changes are proposed in this pull request? This PR adds an early validation step for folder-based datasets (like `audiofolder`) to prevent silent fallback behavior. **Before this fix**: - When `data_dir` or `data_files` were not provided, the loader defaulted to the current working directory. - This caused unexpected behavior like: - Long loading times - Scanning unintended local files **Now**: - If both `data_dir` and `data_files` are missing, a `ValueError` is raised early with a helpful message. --- ### How is this PR tested? - [x] Manual test via `load_dataset("audiofolder")` with missing `data_dir` - [ ] Existing unit tests (should not break any) - [ ] New tests (if needed, maintainers can guide) --- ### Does this PR require documentation update? - [x] No. You can skip the rest of this section. --- ### Release Notes #### Is this a user-facing change? - [x] Yes. Give a description of this change to be included in the release notes for users. > Adds early error handling for folder-based datasets when neither `data_dir` nor `data_files` is specified, avoiding unintended resolution to the current directory. #### What component(s), interfaces, languages, and integrations does this PR affect? Components: - [x] `area/datasets` - [x] `area/load` --- <a name="release-note-category"></a> #### How should the PR be classified in the release notes? Choose one: - [x] `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes --- #### Should this PR be included in the next patch release? - [x] Yes (this PR will be cherry-picked and included in the next patch release)
true
2025-06-16T07:43:59Z
2025-06-16T12:13:26Z
null
ArjunJagdale
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7618
null
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7618
true
[ "Great ! Since this logic is specific to one builder class maybe this check can be in the class definition ? I think you can put it in FolderBasedBuilder's `_info()` method." ]
3,148,102,085
7,617
Unwanted column padding in nested lists of dicts
closed
```python from datasets import Dataset dataset = Dataset.from_dict({ "messages": [ [ {"a": "...",}, {"b": "...",}, ], ] }) print(dataset[0]) ``` What I get: ``` {'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]} ``` What I want: ``` {'messages': [{'a': '...'}, {'b': '...'}]} ``` Is there an easy way to automatically remove these auto-filled null/none values? If not, I probably need a recursive none exclusion function, don't I? Datasets 3.6.0
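A minimal recursive cleanup sketch (assumption: `None` only ever appears as Arrow padding in these nested dicts, never as a value you want to keep):

```python
def remove_none_values(obj):
    if isinstance(obj, dict):
        return {k: remove_none_values(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [remove_none_values(v) for v in obj]
    return obj

example = {"messages": [{"a": "...", "b": None}, {"a": None, "b": "..."}]}
print(remove_none_values(example))  # {'messages': [{'a': '...'}, {'b': '...'}]}
```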
true
2025-06-15T22:06:17Z
2025-06-16T13:43:31Z
2025-06-16T13:43:31Z
qgallouedec
MEMBER
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7617
false
[ "Answer from @lhoestq:\n\n> No\n> This is because Arrow and Parquet a columnar format: they require a fixed type for each column. So if you have nested dicts, each item should have the same subfields\n\nThe way around I found is the handle it after sampling with this function:\n\n```python\ndef remove_padding(examp...
3,144,506,665
7,616
Torchcodec decoding
closed
Closes #7607 ## New signatures ### Audio ```python Audio(sampling_rate: Optional[int] = None, mono: bool = True, decode: bool = True, stream_index: Optional[int] = None) Audio.encode_example(self, value: Union[str, bytes, bytearray, dict, "AudioDecoder"]) -> dict Audio.decode_example(self, value: dict, token_per_repo_id: Optional[dict[str, Union[str, bool, None]]] = None) -> "AudioDecoder": ``` ### Video ```python Video(decode: bool = True, stream_index: Optional[int] = None, dimension_order: Literal['NCHW', 'NHWC'] = 'NCHW', num_ffmpeg_threads: int = 1, device: Optional[Union[str, "torch.device"]] = 'cpu', seek_mode: Literal['exact', 'approximate'] = 'exact') Video.encode_example(self, value: Union[str, bytes, bytearray, Example, np.ndarray, "VideoDecoder"]) -> Example: Video.decode_example(self, value: Union[str, Example], token_per_repo_id: Optional[dict[str, Union[bool, str]]] = None, ) -> "VideoDecoder": ``` ## Notes Audio features constructor takes in 1 new optional param stream_index which is passed to the AudioDecoder constructor to select the stream index of a file. Audio feature can now take in torchcodec.decoders.AudioDecoder as input to encode_example() Audio feature decode_example() returns torchcodec.decoders.AudioDecoder Video feature constructor takes in 5 new optional params stream_index, dimension_order, num_ffmpeg_threads, device, seek_mode all of which are passed to VideoDecoder constructor Video feature decode_example() returns torchcodec.decoders.VideoDecoder Video feature can now take in torchcodec.decoders.VideoDecoder as input to encode_example() All test cases have been updated to reflect these changes All documentation has also been updated to reflect these changes. Both VideoDecoder and AudioDecoder when formatted with (np_formatter, tf_formatter, etc) will ignore the type and return themselves. Formatting test cases were updated accordingly to reflect this. (Pretty simple to make this not the case if we want though) ## Errors This test case from `tests/packaged_modules/test_audiofolder.py` ```python @require_librosa @require_sndfile @pytest.mark.parametrize("streaming", [False, True]) def test_data_files_with_metadata_and_archives(streaming, cache_dir, data_files_with_zip_archives): audiofolder = AudioFolder(data_files=data_files_with_zip_archives, cache_dir=cache_dir) audiofolder.download_and_prepare() datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset() for split, data_files in data_files_with_zip_archives.items(): num_of_archives = len(data_files) # the metadata file is inside the archive expected_num_of_audios = 2 * num_of_archives assert split in datasets dataset = list(datasets[split]) assert len(dataset) == expected_num_of_audios # make sure each sample has its own audio (all arrays are different) and metadata assert ( sum(np.array_equal(dataset[0]["audio"].get_all_samples().data.numpy(), example["audio"].get_all_samples().data.numpy()) for example in dataset[1:]) == 0 ) assert len({example["text"] for example in dataset}) == expected_num_of_audios assert all(example["text"] is not None for example in dataset) ``` Fails now because AudioDecoder needs to access the files after the lines below are run, but there seems to be some context issues. The file the decoder is trying to read is closed before the decoder gets the chance to decode it. ```python audiofolder.download_and_prepare() datasets = audiofolder.as_streaming_dataset() if streaming else audiofolder.as_dataset() ```
true
2025-06-13T19:06:07Z
2025-06-19T18:25:49Z
2025-06-19T18:25:49Z
TyTodd
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7616
2025-06-19T18:25:48Z
5
1
0
1
false
false
[]
https://github.com/huggingface/datasets/pull/7616
true
[ "@lhoestq any updates on when this will be merged? Let me know if theres anything you need from my end.", "Btw I plan to release `datasets` 4.0 after your PR, this will be a major milestone :)", "@lhoestq just pushed the new changes.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs...
3,143,443,498
7,615
remove unused code
closed
true
2025-06-13T12:37:30Z
2025-06-13T12:39:59Z
2025-06-13T12:37:40Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7615
2025-06-13T12:37:40Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7615
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7615). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,143,381,638
7,614
Lazy column
closed
Same as https://github.com/huggingface/datasets/pull/7564 but for `Dataset` (cc @TopCoder2K FYI). E.g. `ds[col]` now returns a lazy Column instead of a list. This way, calling `ds[col][idx]` only loads the required data in memory (bonus: it also supports subfield access with `ds[col][subcol][idx]`). The breaking change will land in the next major release, which also includes removal of dataset scripts support. Close https://github.com/huggingface/datasets/issues/4180
true
2025-06-13T12:12:57Z
2025-06-17T13:08:51Z
2025-06-17T13:08:49Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7614
2025-06-17T13:08:49Z
1
1
1
0
false
false
[]
https://github.com/huggingface/datasets/pull/7614
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7614). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,142,819,991
7,613
fix parallel push_to_hub in dataset_dict
closed
true
2025-06-13T09:02:24Z
2025-06-13T12:30:23Z
2025-06-13T12:30:22Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7613
2025-06-13T12:30:22Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7613
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7613). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,141,905,049
7,612
Provide an option of robust dataset iterator with error handling
open
### Feature request Adding an option to skip corrupted data samples. Currently the datasets behavior is to throw an error if a data sample is corrupted, letting the user be aware of and handle the data corruption. When I tried to try-catch the error at user level, the iterator will raise StopIteration when I called next() again. The way I try to do error handling is: (This doesn't work, unfortunately) ``` # Load the dataset with streaming enabled dataset = load_dataset( "pixparse/cc12m-wds", split="train", streaming=True ) # Get an iterator from the dataset iterator = iter(dataset) while True: try: # Try to get the next example example = next(iterator) # Try to access and process the image image = example["jpg"] pil_image = Image.fromarray(np.array(image)) pil_image.verify() # Verify it's a valid image file except StopIteration: # Code path 1 print("\nStopIteration was raised! Reach the end of dataset") raise StopIteration except Exception as e: # Code path 2 errors += 1 print("Error! Skip this sample") continue else: successful += 1 ``` This is because the `IterableDataset` already throws an error (reaches Code path 2). And if I keep calling next(), it will hit Code path 1. This is because the inner iterator of `IterableDataset` ([code](https://github.com/huggingface/datasets/blob/89bd1f971402acb62805ef110bc1059c38b1c8c6/src/datasets/iterable_dataset.py#L2242)) has been stopped, so calling next() on it will raise StopIteration. So I can not skip the corrupted data sample in this way. Would also love to hear any suggestions about creating a robust dataloader. Thanks for your help in advance! ### Motivation ## Public dataset corruption might be common A lot of users use public datasets, and a public dataset might contain some corrupted data, especially datasets with images / video etc. I totally understand it's the dataset owner's and user's responsibility to ensure data integrity / run data cleaning or preprocessing, but it would be easier for developers who use the dataset. ## Use cases For example, a robust dataloader would be easy for users who want to try quick tests on different datasets and choose one dataset which fits their needs. So users could use IterableDataloader with `stream=True` to use the dataset easily without downloading and removing corrupted data samples from the dataset. ### Your contribution The error handling might not be trivial and might need more careful design.
true
2025-06-13T00:40:48Z
2025-06-24T16:52:30Z
null
wwwjn
NONE
null
null
2
1
1
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7612
false
[ "Hi ! Maybe we can add a parameter to the Image() type to make it to return `None` instead of raising an error in case of corruption ? Would that help ?", "Hi! 👋🏼 I just opened PR [#7638](https://github.com/huggingface/datasets/pull/7638) to address this issue.\n\n### 🔧 What it does:\nIt adds an `ignore_decode...
3,141,383,940
7,611
Code example for dataset.add_column() does not reflect correct way to use function
open
https://github.com/huggingface/datasets/blame/38d4d0e11e22fdbc4acf373d2421d25abeb43439/src/datasets/arrow_dataset.py#L5925C10-L5925C10 The example seems to suggest that dataset.add_column() can add a column in place; however, this is wrong -- it cannot. It returns a new dataset with the column added to it.
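For reference, a minimal example of the correct usage (sketch only; the column values are placeholders):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
# add_column() is not in-place: it returns a new Dataset, so the result must be reassigned.
ds = ds.add_column("b", ["x", "y", "z"])
print(ds.column_names)  # ['a', 'b']
```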
true
2025-06-12T19:42:29Z
2025-06-12T19:42:29Z
null
shaily99
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7611
false
[]
3,141,281,560
7,610
i cant confirm email
open
### Describe the bug This is difficult: I can't confirm my email because I don't receive any email! I can't post on the forum because I can't confirm my email! I can't contact a help desk because... it doesn't exist on the web page. paragraph 44 ### Steps to reproduce the bug rthjrtrt ### Expected behavior ewtgfwetgf ### Environment info sdgfswdegfwe
true
2025-06-12T18:58:49Z
2025-06-12T18:58:49Z
null
lykamspam
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7610
false
[]
3,140,373,128
7,609
Update `_dill.py` to use `co_linetable` for Python 3.10+ in place of `co_lnotab`
closed
Not 100% sure about this one, but it seems to be the recommended approach. ``` /fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead. ``` Tests pass locally, and the warning is gone with this change. https://peps.python.org/pep-0626/#backwards-compatibility
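A small illustration of the attributes involved (assumes CPython 3.10+, where code objects expose `co_linetable` and `co_lines()` and `co_lnotab` is deprecated):

```python
def f(x):
    return x + 1

code = f.__code__
print(hasattr(code, "co_linetable"))  # True on Python 3.10+
print(list(code.co_lines()))          # (start_offset, end_offset, line_number) tuples
# Accessing code.co_lnotab still works but emits a DeprecationWarning on recent versions.
```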
true
2025-06-12T13:47:01Z
2025-06-16T12:14:10Z
2025-06-16T12:14:08Z
qgallouedec
MEMBER
https://github.com/huggingface/datasets/pull/7609
2025-06-16T12:14:08Z
4
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7609
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7609). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "not 100% sure either, I tried removing unnecessary checks - let me know if they sound g...
3,137,564,259
7,608
Tests typing and fixes for push_to_hub
closed
todo: - [x] fix TestPushToHub.test_push_dataset_dict_to_hub_iterable_num_proc
true
2025-06-11T17:13:52Z
2025-06-12T21:15:23Z
2025-06-12T21:15:21Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7608
2025-06-12T21:15:21Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7608
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7608). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,135,722,560
7,607
Video and audio decoding with torchcodec
closed
### Feature request Pytorch is migrating video processing to torchcodec and it's pretty cool. It would be nice to migrate both the audio and video features to use torchcodec instead of torchaudio/video. ### Motivation My use case is I'm working on a multimodal AV model, and what's nice about torchcodec is I can extract the audio tensors directly from MP4 files. Also, I can easily resample video data to whatever fps I like on the fly. I haven't found an easy/efficient way to do this with torchvision. ### Your contribution I’m modifying the Video dataclass to use torchcodec in place of the current backend, starting from a stable commit for a project I’m working on. If it ends up working well, I’m happy to open a PR on main.
true
2025-06-11T07:02:30Z
2025-06-19T18:25:49Z
2025-06-19T18:25:49Z
TyTodd
CONTRIBUTOR
null
null
16
1
1
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7607
false
[ "Good idea ! let me know if you have any question or if I can help", "@lhoestq Almost finished, but I'm having trouble understanding this test case.\nThis is how it looks originally. The `map` function is called, and then `with_format` is called. According to the test case example[\"video\"] is supposed to be a V...
3,133,848,546
7,606
Add `num_proc=` to `.push_to_hub()` (Dataset and IterableDataset)
closed
true
2025-06-10T14:35:10Z
2025-06-11T16:47:28Z
2025-06-11T16:47:25Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7606
2025-06-11T16:47:25Z
1
6
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7606
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7606). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,131,636,882
7,605
Make `push_to_hub` atomic (#7600)
closed
true
2025-06-09T22:29:38Z
2025-06-23T19:32:08Z
2025-06-23T19:32:08Z
sharvil
NONE
https://github.com/huggingface/datasets/pull/7605
null
4
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7605
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7605). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi ! unfortunately we can't allow atomic commits for commits with hundreds of files add...
3,130,837,169
7,604
Docs and more methods for IterableDataset: push_to_hub, to_parquet...
closed
to_csv, to_json, to_sql, to_pandas, to_polars, to_dict, to_list
true
2025-06-09T16:44:40Z
2025-06-10T13:15:23Z
2025-06-10T13:15:21Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7604
2025-06-10T13:15:21Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7604
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7604). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,130,394,563
7,603
No TF in win tests
closed
true
2025-06-09T13:56:34Z
2025-06-09T15:33:31Z
2025-06-09T15:33:30Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7603
2025-06-09T15:33:30Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7603
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7603). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,128,758,924
7,602
Enhance error handling and input validation across multiple modules
open
This PR improves the robustness and user experience by: 1. **Audio Module**: - Added clear error messages when required fields ('path' or 'bytes') are missing in audio encoding 2. **DatasetDict**: - Enhanced key access error messages to show available splits when an invalid key is accessed 3. **NonMutableDict**: - Added input validation for the update() method to ensure proper mapping types 4. **Arrow Reader**: - Improved error messages for small dataset percentage splits with suggestions for alternatives 5. **FaissIndex**: - Strengthened input validation with descriptive error messages - Added proper type checking and shape validation for search queries These changes make the code more maintainable and user-friendly by providing actionable feedback when issues arise.
true
2025-06-08T23:01:06Z
2025-06-08T23:01:06Z
null
mohiuddin-khan-shiam
NONE
https://github.com/huggingface/datasets/pull/7602
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7602
true
[]
3,127,296,182
7,600
`push_to_hub` is not concurrency safe (dataset schema corruption)
closed
### Describe the bug Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable. Consider this scenario: - we have an Arrow dataset - there are `N` configs of the dataset - there are `N` independent processes operating on each of the individual configs (e.g. adding a column, `new_col`) - each process calls `push_to_hub` on their particular config when they're done processing - all calls to `push_to_hub` succeed - the `README.md` now has some configs with `new_col` added and some with `new_col` missing Any attempt to load a config (using `load_dataset`) where `new_col` is missing will fail because of a schema mismatch between `README.md` and the Arrow files. Fixing the dataset requires updating `README.md` by hand with the correct schema for the affected config. In effect, `push_to_hub` is doing a `git push --force` (I found this behavior quite surprising). We have hit this issue every time we run processing jobs over our datasets and have to fix corrupted schemas by hand. Reading through the code, it seems that specifying a [`parent_commit`](https://github.com/huggingface/huggingface_hub/blob/v0.32.4/src/huggingface_hub/hf_api.py#L4587) hash around here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5794 would get us to a normal, non-forced git push, and avoid schema corruption. I'm not familiar enough with the code to know how to determine the commit hash from which the in-memory dataset card was loaded. ### Steps to reproduce the bug See above. ### Expected behavior Concurrent edits to disjoint configs of a dataset should never corrupt the dataset schema. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35 - Python version: 3.10.14 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.2 - `fsspec` version: 2023.9.0
true
2025-06-07T17:28:56Z
2025-06-23T19:36:37Z
2025-06-23T19:36:37Z
sharvil
NONE
null
null
3
5
5
0
null
false
[]
https://github.com/huggingface/datasets/issues/7600
false
[ "@lhoestq can you please take a look? I've submitted a PR that fixes this issue. Thanks.", "Thanks for the ping ! As I said in https://github.com/huggingface/datasets/pull/7605 there is maybe a more general approach using retries :)", "Dropping this due to inactivity; we've implemented push_to_hub outside of HF...
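A rough sketch of the `parent_commit` idea suggested in issue 7600 above, written against the public `huggingface_hub` API. It illustrates the reporter's proposal (fail instead of force-overwriting concurrent edits), not the fix that was eventually adopted; the repo id and file are placeholders.

```python
from huggingface_hub import HfApi, CommitOperationAdd

api = HfApi()
repo_id = "your-username/your-dataset"  # placeholder

# Remember the revision the in-memory dataset card was loaded from.
base_sha = api.dataset_info(repo_id).sha

# ... update README.md / data files locally ...

# With parent_commit set, the commit is rejected if the branch moved
# in the meantime, instead of silently overwriting another process's push.
api.create_commit(
    repo_id=repo_id,
    repo_type="dataset",
    operations=[CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="README.md")],
    commit_message="Update config schema",
    parent_commit=base_sha,
)
```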
3,125,620,119
7,599
My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl
closed
### Describe the bug Hi everyone, I uploaded my dataset https://huggingface.co/datasets/PRAIG/SMB a few months ago while I was waiting for a conference acceptance response. Without modifying anything in the dataset repository, the Dataset viewer is now not rendering the metadata.jsonl annotations, nor is it being downloaded when using load_dataset. Can you please help? Thank you in advance. ### Steps to reproduce the bug from datasets import load_dataset ds = load_dataset("PRAIG/SMB") ds = ds["train"] ### Expected behavior It is expected to have all the metadata available in the jsonl file. Fields like: "score_id", "original_width", "original_height", "regions"... among others. ### Environment info datasets==3.6.0, python 3.13.3 (but the problem is already visible on the Hugging Face dataset page)
true
2025-06-06T18:59:00Z
2025-06-16T15:18:00Z
2025-06-16T15:18:00Z
JuanCarlosMartinezSevilla
NONE
null
null
3
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7599
false
[ "Maybe its been a recent update, but i can manage to load the metadata.jsonl separately from the images with:\n\n```\nmetadata = load_dataset(\"PRAIG/SMB\", split=\"train\", data_files=[\"*.jsonl\"])\nimages = load_dataset(\"PRAIG/SMB\", split=\"train\")\n```\nDo you know it this is an expected behaviour? This make...
3,125,184,457
7,598
fix string_to_dict usage for windows
closed
true
2025-06-06T15:54:29Z
2025-06-06T16:12:22Z
2025-06-06T16:12:21Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7598
2025-06-06T16:12:21Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7598
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7598). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,123,962,709
7,597
Download datasets from a private hub in 2025
closed
### Feature request In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature. The obvious workaround is to clone the repo first and then load it from local storage, but this adds an extra step. It'd be great to have the same experience regardless of where the hub is hosted. This issue was raised before here: https://github.com/huggingface/datasets/issues/3679 @juliensimon ### Motivation none ### Your contribution none
true
2025-06-06T07:55:19Z
2025-06-13T13:46:00Z
2025-06-13T13:46:00Z
DanielSchuhmacher
NONE
null
null
2
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7597
false
[ "Hi ! First, and in the general case, Hugging Face does offer to host private datasets, and with a subscription you can even choose the region in which the repositories are hosted (US, EU)\n\nThen if you happen to have a private deployment, you can set the HF_ENDPOINT environment variable (same as in https://github...
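The reply to issue 7597 above points to the `HF_ENDPOINT` environment variable for private hub deployments. A minimal sketch, assuming a hypothetical private endpoint URL and repo id; the variable must be set before `datasets` is imported.

```python
import os

# Point the Hub client at a private deployment (hypothetical URL).
os.environ["HF_ENDPOINT"] = "https://hub.internal.example.com"

from datasets import load_dataset

ds = load_dataset("my-org/private-dataset", token=os.environ.get("HF_TOKEN"))
```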
3,122,595,042
7,596
Add albumentations to use dataset
closed
1. Fixed broken link to the list of transforms in torchvision. 2. Extended the section about video and image augmentations with an example from Albumentations.
true
2025-06-05T20:39:46Z
2025-06-17T18:38:08Z
2025-06-17T14:44:30Z
ternaus
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7596
2025-06-17T14:44:30Z
3
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7596
true
[ "@lhoestq ping", "@lhoestq ping", "@lhoestq Thanks. Cleaned up torchvision." ]
3,121,689,436
7,595
Add `IterableDataset.push_to_hub()`
closed
Basic implementation, which writes one shard per input dataset shard. This is to be improved later. Close https://github.com/huggingface/datasets/issues/5665 PS: for image/audio datasets structured as actual image/audio files (not parquet), you can sometimes speed it up with `ds.decode(num_threads=...).push_to_hub(...)`
true
2025-06-05T15:29:32Z
2025-06-06T16:12:37Z
2025-06-06T16:12:36Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7595
2025-06-06T16:12:36Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7595
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7595). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
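A usage sketch for the `IterableDataset.push_to_hub()` added in PR 7595 above, including the `ds.decode(num_threads=...)` tip mentioned in the PR description. The source and target repo ids are placeholders, and this assumes a release that contains the PR.

```python
from datasets import load_dataset

# Stream an image dataset and re-upload it without materializing it on disk.
ds = load_dataset("my-org/some-image-dataset", split="train", streaming=True)
ds = ds.decode(num_threads=8)  # speeds up decoding of image/audio files, per the PR note
ds.push_to_hub("your-username/copy-of-dataset")
```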
3,120,799,626
7,594
Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format)
open
### Feature request Hi, I would like the option to ignore keys/columns when loading a dataset from files (e.g. jsonl). ### Motivation I am working on a dataset which is built on jsonl. It seems the dataset is unclean and a column has different types in each row. I can't clean this or remove the column (It is not my data and it is too big for me to clean and save on my own hardware). I would like the option to just ignore this column when using `load_dataset`, since i don't need it. I tried to look if this is already possible but couldn't find a solution. if there is I would love some help. If it is not currently possible, I would love this feature ### Your contribution I don't think I can help this time, unfortunately.
true
2025-06-05T11:12:45Z
2025-06-05T12:58:12Z
null
avishaiElmakies
NONE
null
null
4
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7594
false
[ "Good point, I'd be in favor of having the `columns` argument in `JsonConfig` (and the others) to align with `ParquetConfig` to let users choose which columns to load and ignore the rest", "Is it possible to ignore columns when using parquet? ", "Yes, you can pass `columns=...` to load_dataset to select which c...
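Per the maintainer reply to issue 7594 above, `columns=...` is already supported for Parquet, and the proposal is to align the JSON loader with it. A sketch of the existing Parquet behaviour, with placeholder file paths and column names:

```python
from datasets import load_dataset

# Only the listed columns are read; the problematic column is never materialized.
ds = load_dataset(
    "parquet",
    data_files={"train": "data/train-*.parquet"},  # placeholder paths
    columns=["id", "text"],                        # placeholder column names
    split="train",
)
```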
3,118,812,368
7,593
Fix broken link to albumentations
closed
A few months back I rewrote all docs at [https://albumentations.ai/docs](https://albumentations.ai/docs), and some pages changed their links. In this PR I fixed the link to the most recent Albumentations doc about bounding boxes and its format. Fixed a few typos in the doc as well.
true
2025-06-04T19:00:13Z
2025-06-05T16:37:02Z
2025-06-05T16:36:32Z
ternaus
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7593
2025-06-05T16:36:32Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7593
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7593). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq ping" ]
3,118,203,880
7,592
Remove scripts altogether
closed
TODO: - [x] replace script-based fixtures with no-script fixtures - [x] windaube
true
2025-06-04T15:14:11Z
2025-06-09T16:45:29Z
2025-06-09T16:45:27Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7592
2025-06-09T16:45:27Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7592
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7592). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,117,816,388
7,591
Add num_proc parameter to push_to_hub
open
### Feature request A number of processes parameter to the dataset.push_to_hub method ### Motivation Shards are currently uploaded serially which makes it slow for many shards, uploading can be done in parallel and much faster
true
2025-06-04T13:19:15Z
2025-06-04T13:19:23Z
null
SwayStar123
NONE
null
null
0
1
1
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7591
false
[]
3,101,654,892
7,590
`Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema.
open
### Description When loading a dataset with a field declared as a list of structs using `Sequence(Features(...))`, `load_dataset` incorrectly infers the field as a plain `struct<...>` instead of a `list<struct<...>>`. This leads to the following error: ``` ArrowNotImplementedError: Unsupported cast from list<item: struct<id: string, data: string>> to struct using function cast_struct ``` This occurs even when the `features` schema is explicitly provided and the dataset format supports nested structures natively (e.g., JSON, JSONL). --- ### Minimal Reproduction [Colab Link.](https://colab.research.google.com/drive/1FZPQy6TP3jVd4B3mYKyfQaWNuOAvljUq?usp=sharing) #### Dataset ```python data = [ { "list": [ {"id": "example1", "data": "text"}, ] }, ] ``` #### Schema ```python from datasets import Features, Sequence, Value item = Features({ "id": Value("string"), "data": Value("string"), }) features = Features({ "list": Sequence(item), }) ``` --- ### Tested File Formats The same schema was tested across different formats: | Format | Method | Result | | --------- | --------------------------- | ------------------- | | JSONL | `load_dataset("json", ...)` | Arrow cast error | | JSON | `load_dataset("json", ...)` | Arrow cast error | | In-memory | `Dataset.from_list(...)` | Works as expected | The issue seems not to be in the schema or the data, but in how `load_dataset()` handles the `Sequence(Features(...))` pattern when parsing from files (specifically JSON and JSONL). --- ### Expected Behavior If `features` is explicitly defined as: ```python Features({"list": Sequence(Features({...}))}) ``` Then the data should load correctly across all backends — including from JSON and JSONL — without any Arrow casting errors. This works correctly when loading from memory via `Dataset.from_list`. --- ### Environment * `datasets`: 3.6.0 * `pyarrow`: 20.0.0 * Python: 3.12.10 * OS: Ubuntu 24.04.2 LTS * Notebook: \[Colab test notebook available] ---
true
2025-05-29T22:53:36Z
2025-06-04T13:13:08Z
null
AHS-uni
NONE
null
null
3
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7590
false
[ "Hi @lhoestq \n\nCould you help confirm whether this qualifies as a bug?\n\nIt looks like the issue stems from how `Sequence(Features(...))` is interpreted as a plain struct during schema inference, which leads to a mismatch when casting with PyArrow (especially with nested structs inside lists). From the descripti...
3,101,119,704
7,589
feat: use content defined chunking
open
WIP: - [x] set the parameters in `io.parquet.ParquetDatasetReader` - [x] set the parameters in `arrow_writer.ParquetWriter` It requires a new pyarrow pin ">=21.0.0" which is not yet released.
true
2025-05-29T18:19:41Z
2025-06-17T15:04:07Z
null
kszucs
COLLABORATOR
https://github.com/huggingface/datasets/pull/7589
null
2
1
1
0
true
false
[]
https://github.com/huggingface/datasets/pull/7589
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7589). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Need to set `DEFAULT_MAX_BATCH_SIZE = 1024 * 1024`" ]
3,094,012,025
7,588
ValueError: Invalid pattern: '**' can only be an entire path component [Colab]
closed
### Describe the bug I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that I've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate). Now I changed a few hyperparameters to increase the number of tokens for the model, increase the Transformer layers, and so on; however, when I try to load the dataset, this error keeps coming up. I have tried everything and re-written the code a hundred times, and it keeps coming up. ### Steps to reproduce the bug Imports: ```bash !pip install datasets huggingface_hub fsspec ``` Python code: ```python from datasets import load_dataset HF_DATASET_NAME = "kambale/luganda-english-parallel-corpus" # Load the dataset try: if not HF_DATASET_NAME or HF_DATASET_NAME == "YOUR_HF_DATASET_NAME": raise ValueError( "Please provide a valid Hugging Face dataset name." ) dataset = load_dataset(HF_DATASET_NAME) # Omitted code as the error happens on the line above except ValueError as ve: print(f"Configuration Error: {ve}") raise except Exception as e: print(f"An error occurred while loading the dataset '{HF_DATASET_NAME}': {e}") raise e ``` I have also tried going through this [issue](https://github.com/huggingface/datasets/issues/6737) and nothing helps. ### Expected behavior Loading the dataset successfully and performing splits (train, test, validation). ### Environment info From the imports, I do not install specific versions of these libraries, so the latest available version is installed * `datasets` version: latest * `Platform`: Google Colab * `Hardware`: NVIDIA A100 GPU * `Python` version: latest * `huggingface_hub` version: latest * `fsspec` version: latest
true
2025-05-27T13:46:05Z
2025-05-30T13:22:52Z
2025-05-30T01:26:30Z
wkambale
NONE
null
null
5
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7588
false
[ "Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub ver...
3,091,834,987
7,587
load_dataset splits typing
closed
close https://github.com/huggingface/datasets/issues/7583
true
2025-05-26T18:28:40Z
2025-05-26T18:31:10Z
2025-05-26T18:29:57Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7587
2025-05-26T18:29:57Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7587
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,091,320,431
7,586
help is appreciated
open
### Feature request https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main ### Motivation AI model development and audio ### Your contribution AI model development and audio
true
2025-05-26T14:00:42Z
2025-05-26T18:21:57Z
null
rajasekarnp1
NONE
null
null
1
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7586
false
[ "how is this related to this repository ?" ]
3,091,227,921
7,585
Avoid multiple default config names
closed
Fix duplicating default config names. Currently, when calling `push_to_hub(set_default=True` with 2 different config names, both are set as default. Moreover, this will generate an error next time we try to push another default config name, raised by `MetadataConfigs.get_default_config_name`: https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/arrow_dataset.py#L5757 https://github.com/huggingface/datasets/blob/da1db8a5b89fc0badaa0f571b36e122e52ae8c61/src/datasets/utils/metadata.py#L186-L188
true
2025-05-26T13:27:59Z
2025-06-05T12:41:54Z
2025-06-05T12:41:52Z
albertvillanova
MEMBER
https://github.com/huggingface/datasets/pull/7585
2025-06-05T12:41:52Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7585
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7585). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,090,255,023
7,584
Add LMDB format support
open
### Feature request Add LMDB format support for large memory-mapping files ### Motivation Add LMDB format support for large memory-mapping files ### Your contribution I'm trying to add it
true
2025-05-26T07:10:13Z
2025-05-26T18:23:37Z
null
trotsky1997
NONE
null
null
1
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7584
false
[ "Hi ! Can you explain what's your use case ? Is it about converting LMDB to Dataset objects (i.e. converting to Arrow) ?" ]
3,088,987,757
7,583
load_dataset type stubs reject List[str] for split parameter, but runtime supports it
closed
### Describe the bug The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type checkers like Pylance to raise `reportArgumentType` errors when passing a list of strings, even though it works as intended at runtime. ### Steps to reproduce the bug 1. Use load_dataset with multiple splits e.g.: ``` from datasets import load_dataset ds_train, ds_val, ds_test = load_dataset( "Silly-Machine/TuPyE-Dataset", "binary", split=["train[:75%]", "train[75%:]", "test"] ) ``` 2. Observe that code executes correctly at runtime and Pylance raises `Argument of type "List[str]" cannot be assigned to parameter "split" of type "str | Split | None"` ### Expected behavior The type stubs for [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) should accept `Union[str, Split, List[str], None]` or more specific overloads for the split parameter to correctly represent runtime behavior. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.7 - `huggingface_hub` version: 0.32.0 - PyArrow version: 20.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0
true
2025-05-25T02:33:18Z
2025-05-26T18:29:58Z
2025-05-26T18:29:58Z
hierr
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7583
false
[]
3,083,515,643
7,582
fix: Add embed_storage in Pdf feature
closed
Add missing `embed_storage` method in Pdf feature (Same as in Audio and Image)
true
2025-05-22T14:06:29Z
2025-05-22T14:17:38Z
2025-05-22T14:17:36Z
AndreaFrancis
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7582
2025-05-22T14:17:36Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7582
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7582). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,083,080,413
7,581
Add missing property on `RepeatExamplesIterable`
closed
Fixes #7561
true
2025-05-22T11:41:07Z
2025-06-05T12:41:30Z
2025-06-05T12:41:29Z
SilvanCodes
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7581
2025-06-05T12:41:29Z
0
1
0
1
false
false
[]
https://github.com/huggingface/datasets/pull/7581
true
[]
3,082,993,027
7,580
Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False.
open
### Describe the bug When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call. This behavior leads to unnecessary bandwidth usage and longer download times, especially for large datasets, even if the user only intends to use a single split. ### Steps to reproduce the bug dataset_name = "skbose/indian-english-nptel-v0" dataset = load_dataset(dataset_name, token=hf_token, split="test") ### Expected behavior Optimize the download logic so that only the required split is downloaded when streaming=False and a specific split is provided. ### Environment info Dataset: skbose/indian-english-nptel-v0 Platform: M1 Apple Silicon Python version: 3.12.9 datasets>=3.5.0
true
2025-05-22T11:08:16Z
2025-05-26T18:40:31Z
null
s3pi
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7580
false
[ "Hi ! There was a PR open to improve this: https://github.com/huggingface/datasets/pull/6832 \nbut it hasn't been continued so far.\n\nIt would be a cool improvement though !" ]
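Until the download logic discussed in issue 7580 above is split-aware, streaming is a common workaround that avoids materializing every split locally. This is a sketch of the workaround, not a fix for the reported behaviour; it assumes the dataset is accessible with the caller's token.

```python
from datasets import load_dataset

# Streaming only reads the files backing the requested split, on the fly.
ds = load_dataset("skbose/indian-english-nptel-v0", split="test", streaming=True)
for example in ds.take(5):
    print(example)
```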
3,081,849,022
7,579
Fix typos in PDF and Video documentation
closed
true
2025-05-22T02:27:40Z
2025-05-22T12:53:49Z
2025-05-22T12:53:47Z
AndreaFrancis
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7579
2025-05-22T12:53:47Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7579
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7579). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,080,833,740
7,577
arrow_schema is not compatible with list
closed
### Describe the bug ``` import datasets f = datasets.Features({'x': list[datasets.Value(dtype='int32')]}) f.arrow_schema Traceback (most recent call last): File "datasets/features/features.py", line 1826, in arrow_schema return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)}) ^^^^^^^^^ File "datasets/features/features.py", line 1815, in type return get_nested_type(self) ^^^^^^^^^^^^^^^^^^^^^ File "datasets/features/features.py", line 1252, in get_nested_type return pa.struct( ^^^^^^^^^^ File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type TypeError: DataType expected, got <class 'list'> ``` The following works ``` f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))}) ``` ### Expected behavior according to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765 python list should be a valid type specification for features ### Environment info - `datasets` version: 3.5.1 - Platform: macOS-15.5-arm64-arm-64bit - Python version: 3.12.9 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
true
2025-05-21T16:37:01Z
2025-05-26T18:49:51Z
2025-05-26T18:32:55Z
jonathanshen-upwork
NONE
null
null
3
1
1
0
null
false
[]
https://github.com/huggingface/datasets/issues/7577
false
[ "Thanks for reporting, I'll look into it", "Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = dataset...
3,080,450,538
7,576
Fix regex library warnings
closed
# PR Summary This small PR resolves the regex library warnings that appear starting with Python 3.11: ```python DeprecationWarning: 'count' is passed as positional argument ```
true
2025-05-21T14:31:58Z
2025-06-05T13:35:16Z
2025-06-05T12:37:55Z
emmanuel-ferdman
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7576
2025-06-05T12:37:55Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7576
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7576). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,080,228,718
7,575
[MINOR:TYPO] Update save_to_disk docstring
closed
r/hub/filesystem in save_to_disk
true
2025-05-21T13:22:24Z
2025-06-05T12:39:13Z
2025-06-05T12:39:13Z
cakiki
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7575
2025-06-05T12:39:13Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7575
true
[]
3,079,641,072
7,574
Missing multilingual directions in IWSLT2017 dataset's processing script
open
### Describe the bug Hi, Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub for loading the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` using it. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the list of all the configs present in `IWSLT/iwslt2017`. This should not be the case since as mentioned in their original paper (please see https://aclanthology.org/2017.iwslt-1.1.pdf), the authors specify that "_this year we proposed the multilingual translation between any pair of languages from {Dutch, English, German, Italian, Romanian}..._" and because these datasets are indeed present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`. Best Regards, Anand ### Steps to reproduce the bug Check the output of `get_dataset_config_names("IWSLT/iwslt2017", trust_remote_code=True)`: only 24 language pairs are present and the following 6 config names are absent: `iwslt2017-de-it`, `iwslt2017-de-ro`, `iwslt2017-de-nl`, `iwslt2017-it-de`, `iwslt2017-nl-de`, and `iwslt2017-ro-de`. ### Expected behavior The aforementioned 6 language pairs should also be present and hence, all these 6 language pairs' IWSLT2017 datasets must also be available for further use. I would suggest removing `de` from the `BI_LANGUAGES` list and moving it over to the `MULTI_LANGUAGES` list instead in `iwslt2017.py` to account for all the 6 missing language pairs (the same `de-en` dataset is present in both `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip` and `data/2017-01-trnted/texts/de/en/de-en.zip` but the `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` datasets are only present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`: so, its unclear why the following comment: _`# XXX: Artificially removed DE from here, as it also exists within bilingual data`_ has been added as `L71` in `iwslt2017.py`). The `README.md` file in `IWSLT/iwslt2017`must then be re-created using `datasets-cli test path/to/iwslt2017.py --save_info --all_configs` to pass all split size verification checks for the 6 new language pairs which were previously non-existent. ### Environment info - `datasets` version: 3.5.0 - Platform: Linux-6.8.0-56-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.30.1 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
true
2025-05-21T09:53:17Z
2025-05-26T18:36:38Z
null
andy-joy-25
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7574
false
[ "I have opened 2 PRs on the Hub: `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/7` and `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/8` to resolve this issue", "cool ! I pinged the owners of the dataset on HF to merge your PRs :)" ]
3,076,415,382
7,573
No Samsum dataset
closed
### Describe the bug The https://huggingface.co/datasets/Samsung/samsum dataset is not found (error 404). Originated from https://github.com/meta-llama/llama-cookbook/issues/948 ### Steps to reproduce the bug Go to the website https://huggingface.co/datasets/Samsung/samsum and see the error; downloading it with Python also throws ``` Couldn't find 'Samsung/samsum' on the Hugging Face Hub either: FileNotFoundError: Samsung/samsum@f00baf5a7d4abfec6820415493bcb52c587788e6/samsum.py (repository not found) ``` ### Expected behavior Dataset exists ### Environment info ``` - `datasets` version: 3.2.0 - Platform: macOS-15.4.1-arm64-arm-64bit - Python version: 3.12.2 - `huggingface_hub` version: 0.26.5 - PyArrow version: 16.1.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0 ```
true
2025-05-20T09:54:35Z
2025-06-18T12:52:23Z
2025-06-18T12:52:23Z
IgorKasianenko
NONE
null
null
3
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7573
false
[ "According to the following https://huggingface.co/posts/seawolf2357/424129432408590, as of now the dataset seems to be inaccessible.\n\n@IgorKasianenko, would https://huggingface.co/datasets/knkarthick/samsum suffice for your purpose?\n", "Thanks @SP1029 for the update!\nThat will work for now, using it as repla...
3,074,529,251
7,572
Fixed typos
closed
More info: [comment](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781).
true
2025-05-19T17:16:59Z
2025-06-05T12:25:42Z
2025-06-05T12:25:41Z
TopCoder2K
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7572
2025-06-05T12:25:41Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7572
true
[ "@lhoestq, mentioning in case you haven't seen this PR. The contribution is very small and easy to check :)" ]
3,074,116,942
7,571
fix string_to_dict test
closed
true
2025-05-19T14:49:23Z
2025-05-19T14:52:24Z
2025-05-19T14:49:28Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7571
2025-05-19T14:49:28Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7571
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7571). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,065,966,529
7,570
Dataset lib seems to be broken after fsspec lib update
closed
### Describe the bug I am facing an issue since today where HF's dataset is acting weird and in some instances failure to recognise a valid dataset entirely, I think it is happening due to recent change in `fsspec` lib as using this command fixed it for me in one-time: `!pip install -U datasets huggingface_hub fsspec` ### Steps to reproduce the bug from datasets import load_dataset def download_hf(): dataset_name = input("Enter the dataset name: ") subset_name = input("Enter subset name: ") ds = load_dataset(dataset_name, name=subset_name) for split in ds: ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False) download_hf() ### Expected behavior ``` Downloading readme: 100%  1.55k/1.55k [00:00<00:00, 121kB/s] Downloading data files: 100%  1/1 [00:00<00:00,  2.06it/s] Downloading data: 0%| | 0.00/54.2k [00:00<?, ?B/s] Downloading data: 100%|██████████| 54.2k/54.2k [00:00<00:00, 121kB/s] Extracting data files: 100%  1/1 [00:00<00:00, 35.17it/s] Generating test split:   140/0 [00:00<00:00, 2628.62 examples/s] --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) [<ipython-input-2-12ab305b0e77>](https://localhost:8080/#) in <cell line: 0>() 8 ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False) 9 ---> 10 download_hf() 2 frames [/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1171 is_local = not is_remote_filesystem(self._fs) 1172 if not is_local: -> 1173 raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") 1174 if not os.path.exists(self._output_dir): 1175 raise FileNotFoundError( NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. ``` OR ``` Traceback (most recent call last): File "e:\Fuck\download-data\mcq_dataset.py", line 10, in <module> download_hf() File "e:\Fuck\download-data\mcq_dataset.py", line 6, in download_hf ds = load_dataset(dataset_name, name=subset_name) File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2606, in load_dataset builder_instance = load_dataset_builder( File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2277, in load_dataset_builder dataset_module = dataset_module_factory( File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1917, in dataset_module_factory raise e1 from None File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1867, in dataset_module_factory raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e datasets.exceptions.DatasetNotFoundError: Dataset 'dataset repo_id' doesn't exist on the Hub or cannot be accessed. ``` ### Environment info colab and 3.10 local system
true
2025-05-15T11:45:06Z
2025-06-13T00:44:27Z
2025-06-13T00:44:27Z
sleepingcat4
NONE
null
null
3
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7570
false
[ "Hi, can you try updating `datasets` ? Colab still installs `datasets` 2.x by default, instead of 3.x\n\nIt would be cool to also report this to google colab, they have a GitHub repo for this IIRC", "@lhoestq I have updated it to `datasets==3.6.0` and now there's an entirely different issue on colab while locally...
3,061,234,054
7,569
Dataset creation is broken if nesting a dict inside a dict inside a list
open
### Describe the bug Hey, I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details. Best, Tim ### Steps to reproduce the bug Runing this code: ```python from datasets import Dataset, Features, Sequence, Value def generator(): yield { "a": [{"b": {"c": 0}}], } features = Features( { "a": Sequence( feature={ "b": { "c": Value("int32"), }, }, length=1, ) } ) dataset = Dataset.from_generator(generator, features=features) ``` leads to ``` Generating train split: 1 examples [00:00, 540.85 examples/s] Traceback (most recent call last): File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1635, in _prepare_split_single num_examples, num_bytes = writer.finalize() ^^^^^^^^^^^^^^^^^ File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 657, in finalize self.write_examples_on_file() File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 510, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 629, in write_batch pa_table = pa.Table.from_arrays(arrays, schema=schema) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/table.pxi", line 4851, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 1608, in pyarrow.lib._sanitize_arrays File "pyarrow/array.pxi", line 399, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 1004, in pyarrow.lib.Array.cast File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/pyarrow/compute.py", line 405, in cast return call_function("cast", [arr], options, memory_pool) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/_compute.pyx", line 598, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 393, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Unsupported cast from fixed_size_list<item: struct<c: int32>>[1] to struct using function cast_struct The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/user/test/tools/hf_test2.py", line 23, in <module> dataset = Dataset.from_generator(generator, features=features) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1114, in from_generator ).read() ^^^^^^ File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/io/generator.py", line 49, in read self.builder.download_and_prepare( File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 925, in download_and_prepare self._download_and_prepare( File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1649, in _download_and_prepare super()._download_and_prepare( File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1001, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1487, in _prepare_split for job_id, done, content in self._prepare_split_single( File 
"/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1644, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset Process finished with exit code 1 ``` ### Expected behavior I expected this code not to lead to an error. I have done some digging and figured out that the problem seems to be the `get_nested_type` function in `features.py`, which, for whatever reason, flips Sequences and dicts whenever it encounters a dict inside of a sequence. This seems to be necessary, as disabling that flip leads to another error. However, by keeping that flip enabled for the highest level and disabling it for all subsequent levels, I was able to work around this problem. Specifically, by patching `get_nested_type` as follows, it works on the given example (emphasis on the `level` parameter I added): ```python def get_nested_type(schema: FeatureType, level=0) -> pa.DataType: """ get_nested_type() converts a datasets.FeatureType into a pyarrow.DataType, and acts as the inverse of generate_from_arrow_type(). It performs double-duty as the implementation of Features.type and handles the conversion of datasets.Feature->pa.struct """ # Nested structures: we allow dict, list/tuples, sequences if isinstance(schema, Features): return pa.struct( {key: get_nested_type(schema[key], level = level + 1) for key in schema} ) # Features is subclass of dict, and dict order is deterministic since Python 3.6 elif isinstance(schema, dict): return pa.struct( {key: get_nested_type(schema[key], level = level + 1) for key in schema} ) # however don't sort on struct types since the order matters elif isinstance(schema, (list, tuple)): if len(schema) != 1: raise ValueError("When defining list feature, you should just provide one example of the inner type") value_type = get_nested_type(schema[0], level = level + 1) return pa.list_(value_type) elif isinstance(schema, LargeList): value_type = get_nested_type(schema.feature, level = level + 1) return pa.large_list(value_type) elif isinstance(schema, Sequence): value_type = get_nested_type(schema.feature, level = level + 1) # We allow to reverse list of dict => dict of list for compatibility with tfds if isinstance(schema.feature, dict) and level == 1: data_type = pa.struct({f.name: pa.list_(f.type, schema.length) for f in value_type}) else: data_type = pa.list_(value_type, schema.length) return data_type # Other objects are callable which returns their data type (ClassLabel, Array2D, Translation, Arrow datatype creation methods) return schema() ``` I have honestly no idea what I am doing here, so this might produce other issues for different inputs. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35 - Python version: 3.11.11 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0 Also tested it with 3.5.0, same result.
true
2025-05-13T21:06:45Z
2025-05-20T19:25:15Z
null
TimSchneider42
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7569
false
[ "Hi ! That's because Séquence is a type that comes from tensorflow datasets and inverts lists and focus when doing Séquence(dict).\n\nInstead you should use a list. In your case\n```python\nfeatures = Features({\n \"a\": [{\"b\": {\"c\": Value(\"string\")}}]\n})\n```", "Hi,\n\nThanks for the swift reply! Could...
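The maintainer reply to issue 7569 above points to the plain-list feature syntax that avoids the `Sequence(dict)` list/dict inversion. A self-contained sketch of that fix applied to the reporter's example; the int32 value type mirrors the original snippet.

```python
from datasets import Dataset, Features, Value

def generator():
    yield {"a": [{"b": {"c": 0}}]}

# A plain Python list keeps "list of structs" semantics;
# Sequence(dict) would be flipped into a dict of lists for tfds compatibility.
features = Features({"a": [{"b": {"c": Value("int32")}}]})

dataset = Dataset.from_generator(generator, features=features)
print(dataset.features)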
3,060,515,257
7,568
`IterableDatasetDict.map()` call removes `column_names` (in fact info.features)
open
When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relies on `info.features`, it ends up broken (`None`). **Reproduction** 1. Define an IterableDatasetDict with a non-None features schema. 2. my_iterable_dataset_dict contains "text" column. 3. Call: ```Python new_dict = my_iterable_dataset_dict.map( function=my_fn, with_indices=False, batched=True, batch_size=16, ) ``` 4. Observe ```Python new_dict["train"].info.features # {'text': Value(dtype='string', id=None)} new_dict["train"].column_names # ['text'] ``` 5. Call: ```Python new_dict = my_iterable_dataset_dict.map( function=my_fn, with_indices=False, batched=True, batch_size=16, remove_columns=["foo"] ) ``` 6. Observe: ```Python new_dict["train"].info.features # → None new_dict["train"].column_names # → None ``` 5. Internally, in dataset_dict.py this loop omits features ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/dataset_dict.py#L2047C5-L2056C14)): ```Python for split, dataset in self.items(): dataset_dict[split] = dataset.map( function=function, with_indices=with_indices, input_columns=input_columns, batched=batched, batch_size=batch_size, drop_last_batch=drop_last_batch, remove_columns=remove_columns, fn_kwargs=fn_kwargs, # features omitted → defaults to None ) ``` 7. Then inside IterableDataset.map() ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2619C1-L2622C37)) correct `info.features` is replaced by features which is None: ```Python info = self.info.copy() info.features = features # features is None here return IterableDataset(..., info=info, ...) ``` **Suggestion** It looks like this replacement was added intentionally but maybe should be done only if `features` is `not None`. **Workarround:** `SFTTrainer` calls `dataset.map()` several times and then fails on `NoneType` when iterating `dataset.column_names`. I decided to write this patch - works form me. ```python def patch_iterable_dataset_map(): _orig_map = IterableDataset.map def _patched_map(self, *args, **kwargs): if "features" not in kwargs or kwargs["features"] is None: kwargs["features"] = self.info.features return _orig_map(self, *args, **kwargs) IterableDataset.map = _patched_map ```
true
2025-05-13T15:45:42Z
2025-05-19T12:09:48Z
null
mombip
NONE
null
null
5
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7568
false
[ "Hi ! IterableDataset doesn't know what's the output of the function you pass to map(), so it's not possible to know in advance the features of the output dataset.\n\nThere is a workaround though: either do `ds = ds.map(..., features=features)`, or you can do `ds = ds._resolve_features()` which iterates on the firs...
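The maintainer reply to issue 7568 above offers two workarounds for the lost `column_names`. A short sketch of the first one, assuming a trivial batched map function over a public dataset; note `_resolve_features()` is a private helper mentioned in that reply.

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train", streaming=True)

def add_len(batch):
    batch["n_chars"] = [len(t) for t in batch["text"]]
    return batch

# Option 1: let the dataset re-infer its schema from the first example.
mapped = ds.map(add_len, batched=True, remove_columns=["label"])._resolve_features()
print(mapped.column_names)  # no longer None

# Option 2 (not shown): pass the output schema explicitly via features=... in map().
```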
3,058,308,538
7,567
interleave_datasets seed with multiple workers
open
### Describe the bug Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers. Should the seed be modulated with the worker id? ### Steps to reproduce the bug See above ### Expected behavior See above ### Environment info - `datasets` version: 3.5.1 - Platform: macOS-15.4.1-arm64-arm-64bit - Python version: 3.12.9 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
true
2025-05-12T22:38:27Z
2025-05-15T20:39:37Z
null
jonathanasdf
NONE
null
null
6
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7567
false
[ "Hi ! It's already the case IIRC: the effective seed looks like `seed + worker_id`. Do you have a reproducible example ?", "here is an example with shuffle\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard):\n worker_info = torch.utils.data.get_worker_i...
3,055,279,344
7,566
terminate called without an active exception; Aborted (core dumped)
open
### Describe the bug I use it as in the tutorial here: https://huggingface.co/docs/datasets/stream, and it ends up with abort. ### Steps to reproduce the bug 1. `pip install datasets` 2. ``` $ cat main.py #!/usr/bin/env python3 from datasets import load_dataset dataset = load_dataset('HuggingFaceFW/fineweb', split='train', streaming=True) print(next(iter(dataset))) ``` 3. `chmod +x main.py` ``` $ ./main.py README.md: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 43.1k/43.1k [00:00<00:00, 7.04MB/s] Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:05<00:00, 4859.26it/s] Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:00<00:00, 54773.56it/s] {'text': "How AP reported in all formats from tornado-stricken regionsMarch 8, 2012\nWhen the first serious bout of tornadoes of 2012 blew through middle America in the middle of the night, they touched down in places hours from any AP bureau. Our closest video journalist was Chicago-based Robert Ray, who dropped his plans to travel to Georgia for Super Tuesday, booked several flights to the cities closest to the strikes and headed for the airport. He’d decide once there which flight to take.\nHe never got on board a plane. Instead, he ended up driving toward Harrisburg, Ill., where initial reports suggested a town was destroyed. That decision turned out to be a lucky break for the AP. Twice.\nRay was among the first journalists to arrive and he confirmed those reports -- in all formats. He shot powerful video, put victims on the phone with AP Radio and played back sound to an editor who transcribed the interviews and put the material on text wires. He then walked around the devastation with the Central Regional Desk on the line, talking to victims with the phone held so close that editors could transcribe his interviews in real time.\nRay also made a dramatic image of a young girl who found a man’s prosthetic leg in the rubble, propped it up next to her destroyed home and spray-painted an impromptu sign: “Found leg. Seriously.”\nThe following day, he was back on the road and headed for Georgia and a Super Tuesday date with Newt Gingrich’s campaign. The drive would take him through a stretch of the South that forecasters expected would suffer another wave of tornadoes.\nTo prevent running into THAT storm, Ray used his iPhone to monitor Doppler radar, zooming in on extreme cells and using Google maps to direct himself to safe routes. And then the journalist took over again.\n“When weather like that occurs, a reporter must seize the opportunity to get the news out and allow people to see, hear and read the power of nature so that they can take proper shelter,” Ray says.\nSo Ray now started to use his phone to follow the storms. He attached a small GoPro camera to his steering wheel in case a tornado dropped down in front of the car somewhere, and took video of heavy rain and hail with his iPhone. 
Soon, he spotted a tornado and the chase was on. He followed an unmarked emergency vehicle to Cleveland, Tenn., where he was first on the scene of the storm's aftermath.\nAgain, the tornadoes had struck in locations that were hours from the nearest AP bureau. Damage and debris, as well as a wickedly violent storm that made travel dangerous, slowed our efforts to get to the news. That wasn’t a problem in Tennessee, where our customers were well served by an all-formats report that included this text story.\n“CLEVELAND, Tenn. (AP) _ Fierce wind, hail and rain lashed Tennessee for the second time in three days, and at least 15 people were hospitalized Friday in the Chattanooga area.”\nThe byline? Robert Ray.\nFor being adept with technology, chasing after news as it literally dropped from the sky and setting a standard for all-formats reporting that put the AP ahead on the most competitive news story of the day, Ray wins this week’s $300 Best of the States prize.\n© 2013 The Associated Press. All rights reserved. Terms and conditions apply. See AP.org for details.", 'id': '<urn:uuid:d66bc6fe-8477-4adf-b430-f6a558ccc8ff>', 'dump': 'CC-MAIN-2013-20', 'url': 'http://%20jwashington@ap.org/Content/Press-Release/2012/How-AP-reported-in-all-formats-from-tornado-stricken-regions', 'date': '2013-05-18T05:48:54Z', 'file_path': 's3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz', 'language': 'en', 'language_score': 0.9721424579620361, 'token_count': 717} terminate called without an active exception Aborted (core dumped) ``` ### Expected behavior I'm not a proficient Python user, so it might be my own error, but even in that case, the error message should be better. ### Environment info `Successfully installed datasets-3.6.0 dill-0.3.8 hf-xet-1.1.0 huggingface-hub-0.31.1 multiprocess-0.70.16 requests-2.32.3 xxhash-3.5.0` ``` $ cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=22.04 DISTRIB_CODENAME=jammy DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS" ```
true
2025-05-11T23:05:54Z
2025-06-23T17:56:02Z
null
alexey-milovidov
NONE
null
null
4
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7566
false
[ "@alexey-milovidov I followed the code snippet, but am able to successfully execute without any error. Could you please verify if the error persists or there is any additional details.", "@alexey-milovidov else if the problem does not exist please feel free to close this issue.", "```\nmilovidov@milovidov-pc:~/...
3,051,731,207
7,565
add check if repo exists for dataset uploading
open
Currently, I'm reuploading datasets for [`MTEB`](https://github.com/embeddings-benchmark/mteb/). Some of them have many splits (more than 20), and I'm encountering the error: `Too many requests for https://huggingface.co/datasets/repo/create`. It seems that this issue occurs because the dataset tries to recreate itself every time a split is uploaded. To resolve this, I've added a check to ensure that if the dataset already exists, it won't attempt to recreate it.
true
2025-05-09T10:27:00Z
2025-06-09T14:39:23Z
null
Samoed
NONE
https://github.com/huggingface/datasets/pull/7565
null
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7565
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7565). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq Can you review, please? I don't think that errors in CI are related to my chan...
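A standalone sketch of the guard described in PR 7565 above — only create the dataset repo when it does not already exist — written against the public `huggingface_hub` API rather than the actual patch; the repo id is a placeholder.

```python
from huggingface_hub import HfApi

api = HfApi()
repo_id = "your-username/your-dataset"  # placeholder

# Only hit the /create endpoint when the repo is genuinely missing,
# avoiding rate limits when pushing many splits or configs.
if not api.repo_exists(repo_id, repo_type="dataset"):
    api.create_repo(repo_id, repo_type="dataset", private=True)
```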
3,049,275,226
7,564
Implementation of iteration over values of a column in an IterableDataset object
closed
Refers to [this issue](https://github.com/huggingface/datasets/issues/7381).
true
2025-05-08T14:59:22Z
2025-05-19T12:15:02Z
2025-05-19T12:15:02Z
TopCoder2K
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7564
2025-05-19T12:15:02Z
5
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7564
true
[ "A couple of questions:\r\n1. I've noticed two strange things: 1) \"Around 80% of the final dataset is made of the `en_dataset`\" in https://huggingface.co/docs/datasets/stream, 2) \"Click on \"Pull request\" to send your to the project maintainers\" in https://github.com/huggingface/datasets/blob/main/CONTRIBUTING...
3,046,351,253
7,563
set dev version
closed
true
2025-05-07T15:18:29Z
2025-05-07T15:21:05Z
2025-05-07T15:18:36Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7563
2025-05-07T15:18:36Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7563
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7563). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,046,339,430
7,562
release: 3.6.0
closed
true
2025-05-07T15:15:13Z
2025-05-07T15:17:46Z
2025-05-07T15:15:21Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7562
2025-05-07T15:15:20Z
1
1
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7562
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7562). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,046,302,653
7,561
NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet
closed
### Describe the bug When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than attempting to open a PR. ### Steps to reproduce the bug 1. Create an `IterableDataset`. 2. Call `.repeat(None)` on it. 3. Wrap it in a pytorch `DataLoader` 4. Iterate over it. ### Expected behavior This should work normally. ### Environment info datasets: 3.5.0
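A minimal reproduction of the four steps above, assuming PyTorch is installed and using a toy in-memory dataset (the dataset contents are illustrative):

```python
from datasets import Dataset
from torch.utils.data import DataLoader

# 1-2. Build a small IterableDataset and repeat it indefinitely
ds = Dataset.from_dict({"x": [1, 2, 3]}).to_iterable_dataset()
repeated = ds.repeat(None)  # None means repeat forever

# 3-4. Wrap it in a DataLoader and iterate; with datasets 3.5.0 this is
# where the NotImplementedError about num_shards is reported to surface.
for i, batch in enumerate(DataLoader(repeated)):
    if i >= 3:
        break
```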
true
2025-05-07T15:05:42Z
2025-06-05T12:41:30Z
2025-06-05T12:41:30Z
cyanic-selkie
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7561
false
[]
3,046,265,500
7,560
fix decoding tests
closed
true
2025-05-07T14:56:14Z
2025-05-07T14:59:02Z
2025-05-07T14:56:20Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7560
2025-05-07T14:56:20Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7560
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7560). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,046,177,078
7,559
fix aiohttp import
closed
true
2025-05-07T14:31:32Z
2025-05-07T14:34:34Z
2025-05-07T14:31:38Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7559
2025-05-07T14:31:38Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7559
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7559). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,046,066,628
7,558
fix regression
closed
Reported in https://github.com/huggingface/datasets/pull/7557 (I just reorganized the condition). I wanted to apply this change to the original PR, but GitHub didn't let me apply it directly, so I'm merging this one instead.
true
2025-05-07T13:56:03Z
2025-05-07T13:58:52Z
2025-05-07T13:56:18Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7558
2025-05-07T13:56:18Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7558
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7558). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,045,962,076
7,557
check for empty _formatting
closed
Fixes a regression from #7553 breaking shuffling of iterable datasets <img width="884" alt="Screenshot 2025-05-07 at 9 16 52 AM" src="https://github.com/user-attachments/assets/d2f43c5f-4092-4efe-ac31-a32cbd025fe3" />
true
2025-05-07T13:22:37Z
2025-05-07T13:57:12Z
2025-05-07T13:57:12Z
winglian
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7557
null
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7557
true
[ "Thanks for reporting and for the fix ! I tried to reorganize the condition in your PR but didn't get the right permission so. I ended up merging https://github.com/huggingface/datasets/pull/7558 directly so I can make a release today - I hope you don't mind" ]
3,043,615,210
7,556
Add `--merge-pull-request` option for `convert_to_parquet`
open
Closes #7527 Note that this implementation **will only merge the last PR in the case that they get split up by `push_to_hub`**. See https://github.com/huggingface/datasets/discussions/7555 for more details.
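For context, a rough sketch of what merging the conversion PR could look like via `huggingface_hub`; the `discussion_num` handling is illustrative and the real CLI wiring lives in `datasets-cli convert_to_parquet`:

```python
from huggingface_hub import HfApi

def merge_conversion_pr(repo_id: str, discussion_num: int, token: str | None = None) -> None:
    """Merge the pull request opened by the parquet conversion (sketch)."""
    api = HfApi(token=token)
    # Only the given PR is merged; if push_to_hub split the upload into
    # several PRs, the earlier parts would still need to be merged separately.
    api.merge_pull_request(repo_id, discussion_num, repo_type="dataset")
```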
true
2025-05-06T18:05:05Z
2025-05-07T17:41:16Z
null
klamike
NONE
https://github.com/huggingface/datasets/pull/7556
null
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7556
true
[ "This is ready for a review, happy to make any changes. The main question for maintainers is how this should interact with #7555. If my suggestion there is accepted, this PR can be kept as is. If not, more changes are required to merge all the PR parts." ]
3,043,089,844
7,554
datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script)
closed
### Describe the bug `datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actually process all the splits? But I thought loading scripts were designed to avoid this. ### Steps to reproduce the bug See [this notebook](https://colab.research.google.com/drive/14kcXp_hgcdj-kIzK0bCG6taE-CLZPVvq?usp=sharing) Or: ```python from datasets import load_dataset dataset = load_dataset('jordiae/exebench', split='test_synth', trust_remote_code=True) ``` ### Expected behavior I expected only the `test_synth` split to be downloaded and processed. ### Environment info - `datasets` version: 3.5.1 - Platform: Linux-6.1.123+-x86_64-with-glibc2.35 - Python version: 3.11.12 - `huggingface_hub` version: 0.30.2 - PyArrow version: 18.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2025.3.0
true
2025-05-06T14:43:38Z
2025-05-07T14:53:45Z
2025-05-07T14:53:44Z
sei-eschwartz
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7554
false
[ "Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets ...
3,042,953,907
7,553
Rebatch arrow iterables before formatted iterable
closed
close https://github.com/huggingface/datasets/issues/7538 and https://github.com/huggingface/datasets/issues/7475
true
2025-05-06T13:59:58Z
2025-05-07T13:17:41Z
2025-05-06T14:03:42Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7553
2025-05-06T14:03:41Z
2
1
1
0
false
false
[]
https://github.com/huggingface/datasets/pull/7553
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7553). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq Our CI found an issue with this changeset causing a regression with shuffling ...
3,040,258,084
7,552
Enable xet in push to hub
closed
follows https://github.com/huggingface/huggingface_hub/pull/3035 related to https://github.com/huggingface/datasets/issues/7526
true
2025-05-05T17:02:09Z
2025-05-06T12:42:51Z
2025-05-06T12:42:48Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7552
2025-05-06T12:42:48Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7552
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7552). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,038,114,928
7,551
Issue with offline mode and partial dataset cached
open
### Describe the bug Hi, an issue related to #4760 here: after loading a single file from a dataset, it cannot be accessed in offline mode afterwards. ### Steps to reproduce the bug ```python import os # os.environ["HF_HUB_OFFLINE"] = "1" os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx" import datasets dataset_name = "uonlp/CulturaX" data_files = "fr/fr_part_00038.parquet" ds = datasets.load_dataset(dataset_name, split='train', data_files=data_files) print(f"Dataset loaded : {ds}") ``` Once the file has been cached, I rerun with HF_HUB_OFFLINE activated and get this error: ``` ValueError: Couldn't find cache for uonlp/CulturaX for config 'default-1e725f978350254e' Available configs in the cache: ['default-2935e8cdcc21c613'] ``` ### Expected behavior Should be able to access the previously cached files ### Environment info - `datasets` version: 3.2.0 - Platform: Linux-5.4.0-215-generic-x86_64-with-glibc2.31 - Python version: 3.12.0 - `huggingface_hub` version: 0.27.0 - PyArrow version: 19.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
true
2025-05-04T16:49:37Z
2025-05-13T03:18:43Z
null
nrv
NONE
null
null
4
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7551
false
[ "It seems the problem comes from builder.py / create_config_id()\n\nOn the first call, when the cache is empty we have\n```\nconfig_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n```\nleading to config_id beeing 'default-2935e8...
3,037,017,367
7,550
disable aiohttp depend for python 3.13t free-threading compat
closed
true
2025-05-03T00:28:18Z
2025-05-03T00:28:24Z
2025-05-03T00:28:24Z
Qubitium
NONE
https://github.com/huggingface/datasets/pull/7550
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7550
true
[]
3,036,272,015
7,549
TypeError: Couldn't cast array of type string to null on webdataset format dataset
open
### Describe the bug ```python from datasets import load_dataset dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k") ``` got ``` File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 255, in pyarrow.lib.array File "pyarrow/array.pxi", line 117, in pyarrow.lib._handle_arrow_array_protocol File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 258, in __arrow_array__ out = cast_array_to_feature( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2006, in cast_array_to_feature arrays = [ File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2007, in <listcomp> _c(array.field(name) if name in array_fields else null_array, subfeature) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2066, in cast_array_to_feature casted_array_values = _c(array.values, feature.feature) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2103, in cast_array_to_feature return array_cast( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper return func(array, *args, **kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1949, in array_cast raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}") TypeError: Couldn't cast array of type string to null The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/load.py", line 2084, in load_dataset builder_instance.download_and_prepare( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 925, in download_and_prepare self._download_and_prepare( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare super()._download_and_prepare( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1001, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1487, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset ``` `datasets==3.5.1` whats wrong its inner json structure is like ```yaml features: - name: "image" dtype: "image" - name: "json.id" dtype: "string" - name: "json.width" dtype: "int32" - name: "json.height" dtype: "int32" - name: "json.rating" sequence: dtype: "string" - name: "json.general_tags" sequence: dtype: "string" - name: 
"json.character_tags" sequence: dtype: "string" ``` i'm 100% sure all the jsons satisfies the abovementioned format. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k") ``` ### Expected behavior load the dataset successfully, with the abovementioned json format and webp images ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 3.5.1 - Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.16 - `huggingface_hub` version: 0.30.2 - PyArrow version: 20.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0
true
2025-05-02T15:18:07Z
2025-05-02T15:37:05Z
null
narugo1992
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7549
false
[ "seems to get fixed by explicitly adding `dataset_infos.json` like this\n\n```json\n{\n \"default\": {\n \"description\": \"Image dataset with tags and ratings\",\n \"citation\": \"\",\n \"homepage\": \"\",\n \"license\": \"\",\n \"features\": {\n \"image\": {\n \"dtype\": \"image\",\n ...
3,035,568,851
7,548
Python 3.13t (free threads) Compat
open
### Describe the bug Cannot install `datasets` under `python 3.13t` due to dependency on `aiohttp` and aiohttp cannot be built for free-threading python. The `free threading` support issue in `aiothttp` is active since August 2024! Ouch. https://github.com/aio-libs/aiohttp/issues/8796#issue-2475941784 `pip install dataset` ```bash (vm313t) root@gpu-base:~/GPTQModel# pip install datasets WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)")': /simple/datasets/ Collecting datasets Using cached datasets-3.5.1-py3-none-any.whl.metadata (19 kB) Requirement already satisfied: filelock in /root/vm313t/lib/python3.13t/site-packages (from datasets) (3.18.0) Requirement already satisfied: numpy>=1.17 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (2.2.5) Collecting pyarrow>=15.0.0 (from datasets) Using cached pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl.metadata (3.3 kB) Collecting dill<0.3.9,>=0.3.0 (from datasets) Using cached dill-0.3.8-py3-none-any.whl.metadata (10 kB) Collecting pandas (from datasets) Using cached pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (89 kB) Requirement already satisfied: requests>=2.32.2 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (2.32.3) Requirement already satisfied: tqdm>=4.66.3 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (4.67.1) Collecting xxhash (from datasets) Using cached xxhash-3.5.0-cp313-cp313t-linux_x86_64.whl Collecting multiprocess<0.70.17 (from datasets) Using cached multiprocess-0.70.16-py312-none-any.whl.metadata (7.2 kB) Collecting fsspec<=2025.3.0,>=2023.1.0 (from fsspec[http]<=2025.3.0,>=2023.1.0->datasets) Using cached fsspec-2025.3.0-py3-none-any.whl.metadata (11 kB) Collecting aiohttp (from datasets) Using cached aiohttp-3.11.18.tar.gz (7.7 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... 
done Requirement already satisfied: huggingface-hub>=0.24.0 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (0.30.2) Requirement already satisfied: packaging in /root/vm313t/lib/python3.13t/site-packages (from datasets) (25.0) Requirement already satisfied: pyyaml>=5.1 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (6.0.2) Collecting aiohappyeyeballs>=2.3.0 (from aiohttp->datasets) Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB) Collecting aiosignal>=1.1.2 (from aiohttp->datasets) Using cached aiosignal-1.3.2-py2.py3-none-any.whl.metadata (3.8 kB) Collecting attrs>=17.3.0 (from aiohttp->datasets) Using cached attrs-25.3.0-py3-none-any.whl.metadata (10 kB) Collecting frozenlist>=1.1.1 (from aiohttp->datasets) Using cached frozenlist-1.6.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (16 kB) Collecting multidict<7.0,>=4.5 (from aiohttp->datasets) Using cached multidict-6.4.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.3 kB) Collecting propcache>=0.2.0 (from aiohttp->datasets) Using cached propcache-0.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (10 kB) Collecting yarl<2.0,>=1.17.0 (from aiohttp->datasets) Using cached yarl-1.20.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (72 kB) Requirement already satisfied: idna>=2.0 in /root/vm313t/lib/python3.13t/site-packages (from yarl<2.0,>=1.17.0->aiohttp->datasets) (3.10) Requirement already satisfied: typing-extensions>=3.7.4.3 in /root/vm313t/lib/python3.13t/site-packages (from huggingface-hub>=0.24.0->datasets) (4.13.2) Requirement already satisfied: charset-normalizer<4,>=2 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (3.4.1) Requirement already satisfied: urllib3<3,>=1.21.1 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (2.4.0) Requirement already satisfied: certifi>=2017.4.17 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (2025.4.26) Collecting python-dateutil>=2.8.2 (from pandas->datasets) Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB) Collecting pytz>=2020.1 (from pandas->datasets) Using cached pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB) Collecting tzdata>=2022.7 (from pandas->datasets) Using cached tzdata-2025.2-py2.py3-none-any.whl.metadata (1.4 kB) Collecting six>=1.5 (from python-dateutil>=2.8.2->pandas->datasets) Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB) Using cached datasets-3.5.1-py3-none-any.whl (491 kB) Using cached dill-0.3.8-py3-none-any.whl (116 kB) Using cached fsspec-2025.3.0-py3-none-any.whl (193 kB) Using cached multiprocess-0.70.16-py312-none-any.whl (146 kB) Using cached multidict-6.4.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (220 kB) Using cached yarl-1.20.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (404 kB) Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB) Using cached aiosignal-1.3.2-py2.py3-none-any.whl (7.6 kB) Using cached attrs-25.3.0-py3-none-any.whl (63 kB) Using cached frozenlist-1.6.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (385 kB) Using cached propcache-0.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (282 kB) Using cached pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl (42.2 MB) Using cached 
pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.9 MB) Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB) Using cached pytz-2025.2-py2.py3-none-any.whl (509 kB) Using cached six-1.17.0-py2.py3-none-any.whl (11 kB) Using cached tzdata-2025.2-py2.py3-none-any.whl (347 kB) Building wheels for collected packages: aiohttp Building wheel for aiohttp (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for aiohttp (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [156 lines of output] ********************* * Accelerated build * ********************* /tmp/pip-build-env-wjqi8_7w/overlay/lib/python3.13t/site-packages/setuptools/dist.py:759: SetuptoolsDeprecationWarning: License classifiers are deprecated. !! ******************************************************************************** Please consider removing the following classifiers in favor of a SPDX license expression: License :: OSI Approved :: Apache Software License See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details. ******************************************************************************** !! self._finalize_license_expression() running bdist_wheel running build running build_py creating build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/typedefs.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_parser.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_reqrep.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_ws.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_app.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_websocket.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/resolver.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/tracing.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_writer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/log.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/__init__.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_runner.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/worker.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/connector.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_middlewares.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/tcp_helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_response.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_server.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_request.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_urldispatcher.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/formdata.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/streams.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/multipart.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_routedef.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_ws.py -> 
build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/payload.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client_proto.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_log.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/base_protocol.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/payload_streamer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/http.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_fileresponse.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/test_utils.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/client.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/cookiejar.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/compression_utils.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/hdrs.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/pytest_plugin.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/web_protocol.py -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/abc.py -> build/lib.linux-x86_64-cpython-313t/aiohttp creating build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/__init__.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/writer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/models.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader_c.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader_py.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket running egg_info writing aiohttp.egg-info/PKG-INFO writing dependency_links to aiohttp.egg-info/dependency_links.txt writing requirements to aiohttp.egg-info/requires.txt writing top-level names to aiohttp.egg-info/top_level.txt reading manifest file 'aiohttp.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching 'aiohttp' anywhere in distribution warning: no files found matching '*.pyi' anywhere in distribution warning: no previously-included files matching '*.pyc' found anywhere in distribution warning: no previously-included files matching '*.pyd' found anywhere in distribution warning: no previously-included files matching '*.so' found anywhere in distribution warning: no previously-included files matching '*.lib' found anywhere in distribution warning: no previously-included files matching '*.dll' found anywhere in distribution warning: no previously-included files matching '*.a' found anywhere in distribution warning: no previously-included files matching '*.obj' found anywhere in distribution warning: no previously-included files found matching 'aiohttp/*.html' no previously-included directories found matching 'docs/_build' adding license file 'LICENSE.txt' writing manifest file 'aiohttp.egg-info/SOURCES.txt' copying aiohttp/_cparser.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_find_header.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_headers.pxi -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_http_parser.pyx -> 
build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/_http_writer.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp copying aiohttp/py.typed -> build/lib.linux-x86_64-cpython-313t/aiohttp creating build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_cparser.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_find_header.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_http_parser.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/_http_writer.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/.hash/hdrs.py.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash copying aiohttp/_websocket/mask.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/mask.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket copying aiohttp/_websocket/reader_c.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket creating build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash copying aiohttp/_websocket/.hash/mask.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash copying aiohttp/_websocket/.hash/mask.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash copying aiohttp/_websocket/.hash/reader_c.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash running build_ext building 'aiohttp._websocket.mask' extension creating build/temp.linux-x86_64-cpython-313t/aiohttp/_websocket x86_64-linux-gnu-gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O2 -Wall -g -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -fPIC -I/root/vm313t/include -I/usr/include/python3.13t -c aiohttp/_websocket/mask.c -o build/temp.linux-x86_64-cpython-313t/aiohttp/_websocket/mask.o aiohttp/_websocket/mask.c:1864:80: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’? 1864 | static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw); | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c: In function ‘__pyx_f_7aiohttp_10_websocket_4mask__websocket_mask_cython’: aiohttp/_websocket/mask.c:2905:3: warning: ‘Py_OptimizeFlag’ is deprecated [-Wdeprecated-declarations] 2905 | if (unlikely(__pyx_assertions_enabled())) { | ^~ In file included from /usr/include/python3.13t/Python.h:76, from aiohttp/_websocket/mask.c:16: /usr/include/python3.13t/cpython/pydebug.h:13:37: note: declared here 13 | Py_DEPRECATED(3.12) PyAPI_DATA(int) Py_OptimizeFlag; | ^~~~~~~~~~~~~~~ aiohttp/_websocket/mask.c: At top level: aiohttp/_websocket/mask.c:4846:69: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’? 4846 | static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c:4891:80: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’? 
4891 | static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c: In function ‘__Pyx_CyFunction_CallAsMethod’: aiohttp/_websocket/mask.c:5580:6: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’? 5580 | __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); | ^~~~~~~~~~~~~~~~~~~~ | vectorcallfunc aiohttp/_websocket/mask.c:1954:45: warning: initialization of ‘int’ from ‘vectorcallfunc’ {aka ‘struct _object * (*)(struct _object *, struct _object * const*, long unsigned int, struct _object *)’} makes integer from pointer without a cast [-Wint-conversion] 1954 | #define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall) | ^ aiohttp/_websocket/mask.c:5580:32: note: in expansion of macro ‘__Pyx_CyFunction_func_vectorcall’ 5580 | __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ aiohttp/_websocket/mask.c:5583:16: warning: implicit declaration of function ‘__Pyx_PyVectorcall_FastCallDict’ [-Wimplicit-function-declaration] 5583 | return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ aiohttp/_websocket/mask.c:5583:16: warning: returning ‘int’ from a function with return type ‘PyObject *’ {aka ‘struct _object *’} makes pointer from integer without a cast [-Wint-conversion] 5583 | return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for aiohttp Failed to build aiohttp ERROR: Failed to build installable wheels for some pyproject.toml based projects (aiohttp) ``` ### Steps to reproduce the bug See above ### Expected behavior Install ### Environment info Ubuntu 24.04
true
2025-05-02T09:20:09Z
2025-05-12T15:11:32Z
null
Qubitium
NONE
null
null
7
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7548
false
[ "Update: `datasets` use `aiohttp` for data streaming and from what I understand data streaming is useful for large datasets that do not fit in memory and/or multi-modal datasets like image/audio where you only what the actual binary bits to fed in as needed. \n\nHowever, there are also many cases where aiohttp will...
3,034,830,291
7,547
Avoid global umask for setting file mode.
closed
This PR updates the method for setting the permissions on `cache_path` after calling `shutil.move`. The call to `shutil.move` may not preserve permissions if the source and destination are on different filesystems. Reading and resetting umask can cause race conditions, so directly read what permissions were set for the `temp_file` instead. This fixes https://github.com/huggingface/datasets/issues/7536.
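As a rough illustration of that approach (not the exact diff in this PR), the temp file's own mode can be captured before the move and re-applied afterwards:

```python
import os
import shutil
import stat

def move_with_mode(temp_file: str, cache_path: str) -> None:
    """Move temp_file to cache_path and re-apply its permission bits (sketch)."""
    # Read the mode that was actually set on the temp file instead of
    # deriving it from the process-wide umask, which is racy to read and reset.
    mode = stat.S_IMODE(os.stat(temp_file).st_mode)
    shutil.move(temp_file, cache_path)
    # shutil.move may not preserve permissions when crossing filesystems,
    # so set them explicitly on the destination.
    os.chmod(cache_path, mode)
```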
true
2025-05-01T22:24:24Z
2025-05-06T13:05:00Z
2025-05-06T13:05:00Z
ryan-clancy
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7547
2025-05-06T13:05:00Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7547
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
3,034,018,298
7,546
Large memory use when loading large datasets to a ZFS pool
closed
### Describe the bug When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train models using multiple large datasets. ### Steps to reproduce the bug `uv run --with datasets==3.5.1 python` ```python from datasets import load_dataset load_dataset('MLCommons/peoples_speech', 'clean') load_dataset('mozilla-foundation/common_voice_17_0', 'en') ``` ### Expected behavior I would expect that a lot less than 500GB of RAM would be required to load the dataset, or at least that the RAM usage would be cleared as soon as the dataset is loaded (and thus reside as a memory mapped file) such that other datasets can be loaded. ### Environment info I am currently using the latest datasets==3.5.1 but I have had the same problem with multiple other versions.
true
2025-05-01T14:43:47Z
2025-05-13T13:30:09Z
2025-05-13T13:29:53Z
FredHaa
NONE
null
null
4
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7546
false
[ "Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?", "Well, the fact of the matter is that my RAM is getting filled out by running the given example, as shown in [this video](h...
3,031,617,547
7,545
Networked Pull Through Cache
open
### Feature request Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service. Enable a three-tier cache lookup for datasets: 1. Local on-disk cache 2. Configurable network cache proxy 3. Official Hugging Face Hub ### Motivation - Distributed training & ephemeral jobs: In high-performance or containerized clusters, relying solely on a local disk cache either becomes a streaming bottleneck or incurs a heavy cold-start penalty as each job must re-download datasets. - Traffic & cost reduction: A pull-through network cache lets multiple consumers share a common cache layer, reducing duplicate downloads from the Hub and lowering egress costs. - Better streaming adoption: By offloading repeat dataset pulls to a locally managed cache proxy, streaming workloads can achieve higher throughput and more predictable latency. - Proven pattern: Similar proxy-cache solutions (e.g. Harbor’s Proxy Cache for Docker images) have demonstrated reliability and performance at scale: https://goharbor.io/docs/2.1.0/administration/configure-proxy-cache/ ### Your contribution I’m happy to draft the initial PR for adding HF_DATASET_CACHE_NETWORK_LOCATION support in datasets and sketch out a minimal cache-service prototype. I have limited bandwidth so I would be looking for collaborators if anyone else is interested.
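A hypothetical sketch of the three-tier lookup described above; `HF_DATASET_CACHE_NETWORK_LOCATION` is the proposed (not yet existing) setting, and the two helper functions are stand-ins for real proxy/Hub downloads:

```python
import os
from pathlib import Path
from typing import Optional

def _fetch_from_network_cache(proxy_url: str, repo_id: str, dest: Path) -> Optional[Path]:
    # Stand-in: ask the pull-through cache proxy for the dataset; return None on a miss.
    return None

def _download_from_hub(repo_id: str, dest: Path) -> Path:
    # Stand-in: fall back to downloading from the official Hugging Face Hub.
    dest.mkdir(parents=True, exist_ok=True)
    return dest

def resolve_dataset(repo_id: str, local_cache: Path) -> Path:
    """Three-tier lookup: local disk cache, network cache proxy, then the Hub (sketch)."""
    local_path = local_cache / repo_id.replace("/", "___")
    if local_path.exists():                                   # 1. local on-disk cache
        return local_path
    proxy = os.environ.get("HF_DATASET_CACHE_NETWORK_LOCATION")
    if proxy:                                                 # 2. network cache proxy
        hit = _fetch_from_network_cache(proxy, repo_id, local_path)
        if hit is not None:
            return hit
    return _download_from_hub(repo_id, local_path)            # 3. official Hub
```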
true
2025-04-30T15:16:33Z
2025-04-30T15:16:33Z
null
wrmedford
NONE
null
null
0
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7545
false
[]