| Column | Type | Range / Classes |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–3.53B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–7.82k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| milestone | dict | |
| comments | int64 | 0–70 |
| created_at | string (date) | 2020-04-14 10:18:02 – 2025-10-20 06:38:19 |
| updated_at | string (date) | 2020-04-27 16:04:17 – 2025-10-20 06:41:20 |
| closed_at | string | lengths 3–25 |
| author_association | string | 4 values |
| type | float64 | |
| active_lock_reason | float64 | |
| draft | float64 | 0–1 |
| pull_request | dict | |
| body | string | lengths 0–228k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | float64 | |
| state_reason | string | 4 values |
| sub_issues_summary | dict | |
| issue_dependencies_summary | dict | |
| is_pull_request | bool | 2 classes |
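A minimal usage sketch (an editor's addition; the repo id below is a placeholder, since the source doesn't name the dataset) showing how rows with this schema could be loaded and inspected:

```python
from datasets import load_dataset

# "user/github-issues" is a hypothetical repo id for this issues dump.
ds = load_dataset("user/github-issues", split="train")
row = ds[0]
print(row["number"], row["title"], row["state"], row["is_pull_request"])
```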
https://api.github.com/repos/huggingface/datasets/issues/7722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7722/comments
https://api.github.com/repos/huggingface/datasets/issues/7722/events
https://github.com/huggingface/datasets/issues/7722
3,289,741,064
I_kwDODunzps7EFXcI
7,722
Out of memory even though using load_dataset(..., streaming=True)
{ "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/padmalcom", "id": 3961950, "login": "padmalcom", "node_id": "MDQ6VXNlcjM5NjE5NTA=", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "repos_url": "https://api.github.com/users/padmalcom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "type": "User", "url": "https://api.github.com/users/padmalcom", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-08-04 14:41:55+00:00
2025-08-04 14:41:55+00:00
NaT
NONE
null
null
null
null
### Describe the bug

I am iterating over a large dataset that I load with `streaming=True` to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time until I finally run into an OOM.

### Steps to reproduce the bug

```python
import os

import soundfile as sf
from tqdm import tqdm
from datasets import load_dataset

ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i, sample in enumerate(tqdm(ds)):
    target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav')
    try:
        sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate'])
    except Exception as e:
        print(f"Could not write audio {i} in ds: {e}")
```

### Expected behavior

I'd expect a small memory footprint, with memory being freed after each iteration of the for loop. Instead, memory usage keeps increasing. I tried removing the logic that writes the sound file and just printed the sample, but the issue remains the same.

### Environment info

Python 3.12.11, Ubuntu 24, datasets 4.0.0 and 3.6.0
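A minimal measurement sketch (an editor's addition, not part of the original report; assumes `psutil` is available) that would confirm whether resident memory actually grows across iterations:

```python
import os

import psutil
from datasets import load_dataset

proc = psutil.Process(os.getpid())
ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
for i, sample in enumerate(ds):
    _ = sample["audio"]  # touch the sample, as the original loop does
    if i % 1000 == 0:
        # RSS should stay roughly flat if streaming frees each sample
        print(f"step {i}: rss = {proc.memory_info().rss / 1e6:.0f} MB")
    if i >= 10_000:
        break
```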
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7722/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7721/comments
https://api.github.com/repos/huggingface/datasets/issues/7721/events
https://github.com/huggingface/datasets/issues/7721
3,289,426,104
I_kwDODunzps7EEKi4
7,721
Bad split error message when using percentages
{ "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/padmalcom", "id": 3961950, "login": "padmalcom", "node_id": "MDQ6VXNlcjM5NjE5NTA=", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "repos_url": "https://api.github.com/users/padmalcom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "type": "User", "url": "https://api.github.com/users/padmalcom", "user_view_type": "public" }
[]
open
false
null
[]
null
2
2025-08-04 13:20:25+00:00
2025-08-14 14:42:24+00:00
NaT
NONE
null
null
null
null
### Describe the bug

Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps. When doing so, the library returns this error:

```
raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
```

Edit: The same happens with a split like `train[:90000]`.

### Steps to reproduce the bug

```python
from datasets import load_dataset

for split in range(10):
    split_str = f"train[{split*10}%:{(split+1)*10}%]"
    print(f"Processing split {split_str}...")
    ds = load_dataset("user/dataset", split=split_str, streaming=True)
```

### Expected behavior

I'd expect the library to split my dataset in 10% steps.

### Environment info

Python 3.12.11, Ubuntu 24, datasets 4.0.0
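A hedged workaround sketch (an editor's addition, not from the issue): when percent slices are rejected in streaming mode, similar 10% windows can be approximated with `IterableDataset.skip`/`take`, assuming the total number of examples is known:

```python
from datasets import load_dataset

n_total = 900_000  # hypothetical total size of the 'train' split
step = n_total // 10
ds = load_dataset("user/dataset", split="train", streaming=True)
for k in range(10):
    shard = ds.skip(k * step).take(step)  # roughly train[k*10%:(k+1)*10%]
    print(f"Processing shard {k} with {step} examples...")
```

Note that each `skip` re-reads the stream from the beginning, so this trades extra download/compute for bounded memory.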
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7721/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7720/comments
https://api.github.com/repos/huggingface/datasets/issues/7720/events
https://github.com/huggingface/datasets/issues/7720
3,287,150,513
I_kwDODunzps7D7e-x
7,720
Datasets 4.0 map function causing column not found
{ "avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4", "events_url": "https://api.github.com/users/Darejkal/events{/privacy}", "followers_url": "https://api.github.com/users/Darejkal/followers", "following_url": "https://api.github.com/users/Darejkal/following{/other_user}", "gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Darejkal", "id": 55143337, "login": "Darejkal", "node_id": "MDQ6VXNlcjU1MTQzMzM3", "organizations_url": "https://api.github.com/users/Darejkal/orgs", "received_events_url": "https://api.github.com/users/Darejkal/received_events", "repos_url": "https://api.github.com/users/Darejkal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions", "type": "User", "url": "https://api.github.com/users/Darejkal", "user_view_type": "public" }
[]
open
false
null
[]
null
3
2025-08-03 12:52:34+00:00
2025-08-07 19:23:34+00:00
NaT
NONE
null
null
null
null
### Describe the bug

A column returned after mapping is not found in the new instance of the dataset.

### Steps to reproduce the bug

Code for reproduction. After running `get_total_audio_length`, it errors out because `data` has no `duration` column:

```python
def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

def get_total_audio_length(dataset):
    data = dataset.map(compute_duration, num_proc=NUM_PROC)
    print(data)
    durations = data["duration"]
    total_seconds = sum(durations)
    return total_seconds
```

### Expected behavior

The new `datasets.Dataset` instance should have the new columns attached.

### Environment info

- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2
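A quick isolation sketch (an editor's addition; the tiny in-memory dataset below is hypothetical) to check whether the missing column is specific to the multiprocessing path:

```python
from datasets import Dataset

# Hypothetical stand-in rows with the same shape as the issue's audio data.
ds = Dataset.from_dict(
    {"audio": [{"array": [0.0] * 16000, "sampling_rate": 16000}] * 4}
)

def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

single = ds.map(compute_duration, num_proc=1)
multi = ds.map(compute_duration, num_proc=2)
print("duration" in single.column_names, "duration" in multi.column_names)
```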
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7720/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7719/comments
https://api.github.com/repos/huggingface/datasets/issues/7719/events
https://github.com/huggingface/datasets/issues/7719
3,285,928,491
I_kwDODunzps7D20or
7,719
Specify dataset columns types in typehint
{ "avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4", "events_url": "https://api.github.com/users/Samoed/events{/privacy}", "followers_url": "https://api.github.com/users/Samoed/followers", "following_url": "https://api.github.com/users/Samoed/following{/other_user}", "gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Samoed", "id": 36135455, "login": "Samoed", "node_id": "MDQ6VXNlcjM2MTM1NDU1", "organizations_url": "https://api.github.com/users/Samoed/orgs", "received_events_url": "https://api.github.com/users/Samoed/received_events", "repos_url": "https://api.github.com/users/Samoed/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Samoed/subscriptions", "type": "User", "url": "https://api.github.com/users/Samoed", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
0
2025-08-02 13:22:31+00:00
2025-08-02 13:22:31+00:00
NaT
NONE
null
null
null
null
### Feature request

Make `Dataset` optionally generic so it can carry type annotations, like it was done for `torch.utils.data.DataLoader`: https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131

### Motivation

In MTEB we're using a lot of dataset objects, but they're a bit poor in type hints. E.g. we can specify this for a dataloader:

```python
from typing import TypedDict
from torch.utils.data import DataLoader

class CorpusInput(TypedDict):
    title: list[str]
    body: list[str]

class QueryInput(TypedDict):
    query: list[str]
    instruction: list[str]

def queries_loader() -> DataLoader[QueryInput]: ...

def corpus_loader() -> DataLoader[CorpusInput]: ...
```

But for datasets we can only specify the expected columns in comments:

```python
from datasets import Dataset

QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str`."""
```

### Your contribution

I can create a draft implementation.
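For illustration only (an editor's sketch of the requested direction; `TypedDataset` is hypothetical and not an existing `datasets` API), the generic could mirror the `DataLoader[T]` pattern:

```python
from typing import Generic, TypedDict, TypeVar

T = TypeVar("T")

class TypedDataset(Generic[T]):
    """Hypothetical stand-in for a generic datasets.Dataset."""

    def __getitem__(self, key: int) -> T: ...

class QueryRow(TypedDict):
    query: str
    instruction: str

def load_queries() -> TypedDataset[QueryRow]: ...
```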
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7719/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7718/comments
https://api.github.com/repos/huggingface/datasets/issues/7718/events
https://github.com/huggingface/datasets/pull/7718
3,284,221,177
PR_kwDODunzps6hvJ6R
7,718
add support for pyarrow string view in features
{ "avatar_url": "https://avatars.githubusercontent.com/u/5051569?v=4", "events_url": "https://api.github.com/users/onursatici/events{/privacy}", "followers_url": "https://api.github.com/users/onursatici/followers", "following_url": "https://api.github.com/users/onursatici/following{/other_user}", "gists_url": "https://api.github.com/users/onursatici/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/onursatici", "id": 5051569, "login": "onursatici", "node_id": "MDQ6VXNlcjUwNTE1Njk=", "organizations_url": "https://api.github.com/users/onursatici/orgs", "received_events_url": "https://api.github.com/users/onursatici/received_events", "repos_url": "https://api.github.com/users/onursatici/repos", "site_admin": false, "starred_url": "https://api.github.com/users/onursatici/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/onursatici/subscriptions", "type": "User", "url": "https://api.github.com/users/onursatici", "user_view_type": "public" }
[]
closed
false
null
[]
null
4
2025-08-01 14:58:39+00:00
2025-09-12 13:14:16+00:00
2025-09-12 13:13:24+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7718.diff", "html_url": "https://github.com/huggingface/datasets/pull/7718", "merged_at": "2025-09-12T13:13:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/7718.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7718" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7718/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7718/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7717/comments
https://api.github.com/repos/huggingface/datasets/issues/7717/events
https://github.com/huggingface/datasets/issues/7717
3,282,855,127
I_kwDODunzps7DrGTX
7,717
Cached dataset is not used when explicitly passing the cache_dir parameter
{ "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/padmalcom", "id": 3961950, "login": "padmalcom", "node_id": "MDQ6VXNlcjM5NjE5NTA=", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "repos_url": "https://api.github.com/users/padmalcom/repos", "site_admin": false, "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "type": "User", "url": "https://api.github.com/users/padmalcom", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2025-08-01 07:12:41+00:00
2025-08-05 19:19:36+00:00
NaT
NONE
null
null
null
null
### Describe the bug

Hi, we are pre-downloading a dataset using `snapshot_download()`. When loading this exact dataset with `load_dataset()`, the cached snapshot is not used. In both calls, I provide the `cache_dir` parameter.

### Steps to reproduce the bug

```python
from datasets import load_dataset, concatenate_datasets
from huggingface_hub import snapshot_download

def download_ds(name: str):
    snapshot_download(repo_id=name, repo_type="dataset", cache_dir="G:/Datasets/cache")

def prepare_ds():
    audio_ds = load_dataset("openslr/librispeech_asr", num_proc=4, cache_dir="G:/Datasets/cache")
    print(audio_ds.features)

if __name__ == '__main__':
    download_ds("openslr/librispeech_asr")
    prepare_ds()
```

### Expected behavior

I'd expect the cached version of the dataset to be used. Instead, the same dataset is downloaded again to the default cache directory.

### Environment info

Windows 11, datasets==4.0.0, Python 3.12.11
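A hedged workaround sketch (an editor's addition, not from the issue): `snapshot_download()` populates the `huggingface_hub` cache, which appears to be separate from the `datasets` cache that `load_dataset(cache_dir=...)` consults, so one option is to load directly from the returned snapshot path (assuming the repo's data files are loadable from disk):

```python
from datasets import load_dataset
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="openslr/librispeech_asr",
    repo_type="dataset",
    cache_dir="G:/Datasets/cache",
)
# Point load_dataset at the already-downloaded files instead of the repo id.
audio_ds = load_dataset(local_path, num_proc=4)
```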
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7717/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7717/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7716/comments
https://api.github.com/repos/huggingface/datasets/issues/7716/events
https://github.com/huggingface/datasets/pull/7716
3,281,204,362
PR_kwDODunzps6hk4Mq
7,716
typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-31 17:14:45+00:00
2025-07-31 17:17:15+00:00
2025-07-31 17:14:51+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7716.diff", "html_url": "https://github.com/huggingface/datasets/pull/7716", "merged_at": "2025-07-31T17:14:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/7716.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7716" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7716/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7716/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7715/comments
https://api.github.com/repos/huggingface/datasets/issues/7715/events
https://github.com/huggingface/datasets/pull/7715
3,281,189,955
PR_kwDODunzps6hk1CK
7,715
Docs: Use Image(mode="F") for PNG/JPEG depth maps
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-31 17:09:49+00:00
2025-07-31 17:12:23+00:00
2025-07-31 17:10:10+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7715.diff", "html_url": "https://github.com/huggingface/datasets/pull/7715", "merged_at": "2025-07-31T17:10:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/7715.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7715" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7715/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7715/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7714/comments
https://api.github.com/repos/huggingface/datasets/issues/7714/events
https://github.com/huggingface/datasets/pull/7714
3,281,090,499
PR_kwDODunzps6hkfHj
7,714
fix num_proc=1 ci test
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-31 16:36:32+00:00
2025-07-31 16:39:03+00:00
2025-07-31 16:38:03+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7714.diff", "html_url": "https://github.com/huggingface/datasets/pull/7714", "merged_at": "2025-07-31T16:38:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/7714.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7714" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7714/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7714/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7713/comments
https://api.github.com/repos/huggingface/datasets/issues/7713/events
https://github.com/huggingface/datasets/pull/7713
3,280,813,699
PR_kwDODunzps6hjik2
7,713
Update cli.mdx to refer to the new "hf" CLI
{ "avatar_url": "https://avatars.githubusercontent.com/u/1936278?v=4", "events_url": "https://api.github.com/users/evalstate/events{/privacy}", "followers_url": "https://api.github.com/users/evalstate/followers", "following_url": "https://api.github.com/users/evalstate/following{/other_user}", "gists_url": "https://api.github.com/users/evalstate/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/evalstate", "id": 1936278, "login": "evalstate", "node_id": "MDQ6VXNlcjE5MzYyNzg=", "organizations_url": "https://api.github.com/users/evalstate/orgs", "received_events_url": "https://api.github.com/users/evalstate/received_events", "repos_url": "https://api.github.com/users/evalstate/repos", "site_admin": false, "starred_url": "https://api.github.com/users/evalstate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/evalstate/subscriptions", "type": "User", "url": "https://api.github.com/users/evalstate", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-31 15:06:11+00:00
2025-07-31 16:37:56+00:00
2025-07-31 16:37:55+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7713.diff", "html_url": "https://github.com/huggingface/datasets/pull/7713", "merged_at": "2025-07-31T16:37:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/7713.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7713" }
Update to refer to `hf auth login`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7713/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7713/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7712/comments
https://api.github.com/repos/huggingface/datasets/issues/7712/events
https://github.com/huggingface/datasets/pull/7712
3,280,706,762
PR_kwDODunzps6hjLF5
7,712
Retry intermediate commits too
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-31 14:33:33+00:00
2025-07-31 14:37:43+00:00
2025-07-31 14:36:43+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7712.diff", "html_url": "https://github.com/huggingface/datasets/pull/7712", "merged_at": "2025-07-31T14:36:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/7712.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7712" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7712/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7712/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7711/comments
https://api.github.com/repos/huggingface/datasets/issues/7711/events
https://github.com/huggingface/datasets/pull/7711
3,280,471,353
PR_kwDODunzps6hiXm0
7,711
Update dataset_dict push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-31 13:25:03+00:00
2025-07-31 14:18:55+00:00
2025-07-31 14:18:53+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7711.diff", "html_url": "https://github.com/huggingface/datasets/pull/7711", "merged_at": "2025-07-31T14:18:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/7711.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7711" }
following https://github.com/huggingface/datasets/pull/7708
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7711/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7711/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7710/comments
https://api.github.com/repos/huggingface/datasets/issues/7710/events
https://github.com/huggingface/datasets/pull/7710
3,279,878,230
PR_kwDODunzps6hgXxW
7,710
Concurrent IterableDataset push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-31 10:11:31+00:00
2025-07-31 10:14:00+00:00
2025-07-31 10:12:52+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7710.diff", "html_url": "https://github.com/huggingface/datasets/pull/7710", "merged_at": "2025-07-31T10:12:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/7710.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7710" }
Same as https://github.com/huggingface/datasets/pull/7708 but for `IterableDataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7710/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7710/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7709/comments
https://api.github.com/repos/huggingface/datasets/issues/7709/events
https://github.com/huggingface/datasets/issues/7709
3,276,677,990
I_kwDODunzps7DTiNm
7,709
Release 4.0.0 breaks usage patterns of with_format
{ "avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4", "events_url": "https://api.github.com/users/wittenator/events{/privacy}", "followers_url": "https://api.github.com/users/wittenator/followers", "following_url": "https://api.github.com/users/wittenator/following{/other_user}", "gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wittenator", "id": 9154515, "login": "wittenator", "node_id": "MDQ6VXNlcjkxNTQ1MTU=", "organizations_url": "https://api.github.com/users/wittenator/orgs", "received_events_url": "https://api.github.com/users/wittenator/received_events", "repos_url": "https://api.github.com/users/wittenator/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wittenator/subscriptions", "type": "User", "url": "https://api.github.com/users/wittenator", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-07-30 11:34:53+00:00
2025-08-07 08:27:18+00:00
2025-08-07 08:27:18+00:00
NONE
null
null
null
null
### Describe the bug

Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the column. Now this possibility seems to be gone with the new `Column()` class. As far as I see, this makes working on a whole column (in-memory) more complex, i.e. normalizing an in-memory dataset for which iterating would be too slow. Is this intended behaviour? I couldn't find much documentation on the intended usage of the new `Column` class yet.

### Steps to reproduce the bug

Steps to reproduce:

```python
from datasets import load_dataset

dataset = load_dataset("lhoestq/demo1")
dataset = dataset.with_format("numpy")
print(dataset["star"].ndim)
```

### Expected behavior

Working on whole columns should be possible.

### Environment info

- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-63-generic-x86_64-with-glibc2.36
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
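A possible workaround sketch (an editor's addition, under the assumption that full-table slicing still returns formatted numpy batches in 4.0; `split="train"` is also assumed):

```python
from datasets import load_dataset

dataset = load_dataset("lhoestq/demo1", split="train").with_format("numpy")
star = dataset[:]["star"]  # slice the whole table -> dict of numpy arrays
print(star.ndim)
```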
{ "avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4", "events_url": "https://api.github.com/users/wittenator/events{/privacy}", "followers_url": "https://api.github.com/users/wittenator/followers", "following_url": "https://api.github.com/users/wittenator/following{/other_user}", "gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wittenator", "id": 9154515, "login": "wittenator", "node_id": "MDQ6VXNlcjkxNTQ1MTU=", "organizations_url": "https://api.github.com/users/wittenator/orgs", "received_events_url": "https://api.github.com/users/wittenator/received_events", "repos_url": "https://api.github.com/users/wittenator/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wittenator/subscriptions", "type": "User", "url": "https://api.github.com/users/wittenator", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7709/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7709/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7708/comments
https://api.github.com/repos/huggingface/datasets/issues/7708/events
https://github.com/huggingface/datasets/pull/7708
3,273,614,584
PR_kwDODunzps6hLVip
7,708
Concurrent push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-29 13:14:30+00:00
2025-07-31 10:00:50+00:00
2025-07-31 10:00:49+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7708.diff", "html_url": "https://github.com/huggingface/datasets/pull/7708", "merged_at": "2025-07-31T10:00:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/7708.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7708" }
Retry the step that downloads + updates + uploads the README.md using `create_commit(..., parent_commit=...)` if there was a commit in the meantime. This should enable concurrent `push_to_hub()` since it won't overwrite the README.md metadata anymore.

Note: we fixed an issue server side to make this work:

<details>

DO NOT MERGE FOR NOW since it seems there is one bug that prevents this logic from working: I'm using `parent_commit` to enable concurrent `push_to_hub()` in `datasets` for a retry mechanism, but for some reason I always run into a weird situation. Sometimes `create_commit(..., parent_commit=...)` returns error 500 but the commit did happen on the Hub side without respecting `parent_commit`, e.g. request id

```
huggingface_hub.errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/lhoestq/tmp/commit/main (Request ID: Root=1-6888d8af-2ce517bc60c69cb378b51526;d1b17993-c5d0-4ccd-9926-060c45f9ed61)
```

fix coming in [internal](https://github.com/huggingface-internal/moon-landing/pull/14617)

</details>

close https://github.com/huggingface/datasets/issues/7600
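An illustrative sketch of the retry pattern described above (an editor's addition; `build_readme_update` is a hypothetical helper, and this is not the actual `datasets` implementation):

```python
from huggingface_hub import HfApi

api = HfApi()
for attempt in range(3):
    # Remember which commit the README edit is based on.
    parent = api.repo_info("user/dataset", repo_type="dataset").sha
    operations = build_readme_update(parent)  # hypothetical: download + update README.md
    try:
        api.create_commit(
            repo_id="user/dataset",
            repo_type="dataset",
            operations=operations,
            commit_message="Update README.md metadata",
            parent_commit=parent,  # rejected if someone committed in the meantime
        )
        break
    except Exception:
        continue  # concurrent commit detected: re-read the README and retry
```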
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7708/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7708/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7707/comments
https://api.github.com/repos/huggingface/datasets/issues/7707/events
https://github.com/huggingface/datasets/issues/7707
3,271,867,998
I_kwDODunzps7DBL5e
7,707
load_dataset() in 4.0.0 failed when decoding audio
{ "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4", "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}", "followers_url": "https://api.github.com/users/jiqing-feng/followers", "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}", "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiqing-feng", "id": 107918818, "login": "jiqing-feng", "node_id": "U_kgDOBm614g", "organizations_url": "https://api.github.com/users/jiqing-feng/orgs", "received_events_url": "https://api.github.com/users/jiqing-feng/received_events", "repos_url": "https://api.github.com/users/jiqing-feng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions", "type": "User", "url": "https://api.github.com/users/jiqing-feng", "user_view_type": "public" }
[]
closed
false
null
[]
null
16
2025-07-29 03:25:03+00:00
2025-10-05 06:41:38+00:00
2025-08-01 05:15:45+00:00
NONE
null
null
null
null
### Describe the bug Cannot decode audio data. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") print(dataset[0]["audio"]["array"]) ``` 1st round run, got ``` File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 172, in decode_example raise ImportError("To support decoding audio data, please install 'torchcodec'.") ImportError: To support decoding audio data, please install 'torchcodec'. ``` After `pip install torchcodec` and run, got ``` File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/_metadata.py", line 16, in <module> from torchcodec._core.ops import ( File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 84, in <module> load_torchcodec_shared_libraries() File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 69, in load_torchcodec_shared_libraries raise RuntimeError( RuntimeError: Could not load libtorchcodec. Likely causes: 1. FFmpeg is not properly installed in your environment. We support versions 4, 5, 6 and 7. 2. The PyTorch version (2.8.0a0+5228986c39.nv25.06) is not compatible with this version of TorchCodec. Refer to the version compatibility table: https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec. 3. Another runtime dependency; see exceptions below. The following exceptions were raised as we tried to load libtorchcodec: [start of libtorchcodec loading traceback] FFmpeg version 7: libavutil.so.59: cannot open shared object file: No such file or directory FFmpeg version 6: libavutil.so.58: cannot open shared object file: No such file or directory FFmpeg version 5: libavutil.so.57: cannot open shared object file: No such file or directory FFmpeg version 4: libavutil.so.56: cannot open shared object file: No such file or directory [end of libtorchcodec loading traceback]. 
```

After `apt update && apt install ffmpeg -y`, got

```
Traceback (most recent call last):
  File "/workspace/jiqing/test_datasets.py", line 4, in <module>
    print(dataset[0]["audio"]["array"])
          ~~~~~~~^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
    return self._getitem(key)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2841, in _getitem
    formatted_output = format_table(
                       ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 657, in format_table
    return formatter(pa_table, query_type=query_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 410, in __call__
    return self.format_row(pa_table)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 459, in format_row
    row = self.python_features_decoder.decode_row(row)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 223, in decode_row
    return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 2093, in decode_example
    column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 1405, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 198, in decode_example
    audio = AudioDecoder(bytes, stream_index=self.stream_index, sample_rate=self.sampling_rate)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_audio_decoder.py", line 62, in __init__
    self._decoder = create_decoder(source=source, seek_mode="approximate")
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_decoder_utils.py", line 33, in create_decoder
    return core.create_from_bytes(source, seek_mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 144, in create_from_bytes
    return create_from_tensor(buffer, seek_mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 756, in __call__
    return self._op(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Could not run 'torchcodec_ns::create_from_tensor' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions.
'torchcodec_ns::create_from_tensor' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

Meta: registered at /dev/null:214 [kernel]
BackendSelect: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /__w/torchcodec/torchcodec/pytorch/torchcodec/src/torchcodec/_core/custom_ops.cpp:694 [kernel]
FuncTorchDynamicLayerBackMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /opt/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:349 [backend fallback]
Named: registered at /opt/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradOther: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradCPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradCUDA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:75 [backend fallback]
AutogradXLA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:87 [backend fallback]
AutogradMPS: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:95 [backend fallback]
AutogradXPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradHPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:108 [backend fallback]
AutogradLazy: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:91 [backend fallback]
AutogradMTIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:79 [backend fallback]
AutogradMAIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:83 [backend fallback]
AutogradMeta: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:99 [backend fallback]
Tracer: registered at /opt/pytorch/pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:322 [backend fallback]
AutocastMTIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastMAIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastXPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:542 [backend fallback]
AutocastMPS: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:165 [backend fallback]
FuncTorchBatched: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /opt/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:208 [backend fallback]
PythonTLSSnapshot: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
```

### Expected behavior

The result is

```
[0.00238037 0.0020752 0.00198364 ... 0.00042725 0.00057983 0.0010376 ]
```

on `datasets==3.6.0`

### Environment info

[NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`

```
- `datasets` version: 4.0.0
- Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
```
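Editorial note, not part of the original report: this failure mode is commonly a mismatch between the installed torch build and the torchcodec build, since torchcodec registers its custom ops against a specific torch ABI. A minimal, hedged diagnostic sketch (the file name is a placeholder, and the compatible version pairs are an assumption; the torchcodec README is the authoritative source):

```python
# Sketch: verify the torch / torchcodec pairing before digging deeper.
import torch
import torchcodec

print("torch:", torch.__version__)
print("torchcodec:", torchcodec.__version__)

# Decoding one local file directly isolates torchcodec from `datasets`;
# "sample.wav" is a hypothetical placeholder file.
from torchcodec.decoders import AudioDecoder

with open("sample.wav", "rb") as f:
    samples = AudioDecoder(f.read()).get_all_samples()
print(samples.data.shape)
```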
{ "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4", "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}", "followers_url": "https://api.github.com/users/jiqing-feng/followers", "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}", "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiqing-feng", "id": 107918818, "login": "jiqing-feng", "node_id": "U_kgDOBm614g", "organizations_url": "https://api.github.com/users/jiqing-feng/orgs", "received_events_url": "https://api.github.com/users/jiqing-feng/received_events", "repos_url": "https://api.github.com/users/jiqing-feng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions", "type": "User", "url": "https://api.github.com/users/jiqing-feng", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7707/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7707/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7706/comments
https://api.github.com/repos/huggingface/datasets/issues/7706/events
https://github.com/huggingface/datasets/pull/7706
3,271,129,240
PR_kwDODunzps6hC5uD
7,706
Reimplemented partial split download support (revival of #6832)
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
2
2025-07-28 19:40:40+00:00
2025-09-04 10:55:57+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7706.diff", "html_url": "https://github.com/huggingface/datasets/pull/7706", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7706.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7706" }
Revival of #6832; see the discussion at https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130. Closes https://github.com/huggingface/datasets/issues/4101, and more. --- ### PR under work!
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7706/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7706/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7705/comments
https://api.github.com/repos/huggingface/datasets/issues/7705/events
https://github.com/huggingface/datasets/issues/7705
3,269,070,499
I_kwDODunzps7C2g6j
7,705
Cannot read installed dataset in dataset.load()
{ "avatar_url": "https://avatars.githubusercontent.com/u/52521165?v=4", "events_url": "https://api.github.com/users/HuangChiEn/events{/privacy}", "followers_url": "https://api.github.com/users/HuangChiEn/followers", "following_url": "https://api.github.com/users/HuangChiEn/following{/other_user}", "gists_url": "https://api.github.com/users/HuangChiEn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HuangChiEn", "id": 52521165, "login": "HuangChiEn", "node_id": "MDQ6VXNlcjUyNTIxMTY1", "organizations_url": "https://api.github.com/users/HuangChiEn/orgs", "received_events_url": "https://api.github.com/users/HuangChiEn/received_events", "repos_url": "https://api.github.com/users/HuangChiEn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HuangChiEn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HuangChiEn/subscriptions", "type": "User", "url": "https://api.github.com/users/HuangChiEn", "user_view_type": "public" }
[]
open
false
null
[]
null
3
2025-07-28 09:43:54+00:00
2025-08-05 01:24:32+00:00
NaT
NONE
null
null
null
null
Hi folks, I'm a newbie with the Hugging Face datasets API. As the title says, I'm facing an issue where the `load_dataset` API cannot find the already-installed dataset.

Code snippet:
<img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" />

Data path: "/xxx/joseph/llava_ds/vlm_ds"; it contains all the video clips I want!
<img width="1398" height="261" alt="Image" src="https://github.com/user-attachments/assets/bf213b66-e344-4311-97e7-bc209677ae77" />

I run the py script with
<img width="1042" height="38" alt="Image" src="https://github.com/user-attachments/assets/8b3fcee4-e1a6-41b8-bee1-91567b00d9d2" />

But something bad happened: even though I provide the dataset path via "HF_HUB_CACHE", it still attempts to download the data from the remote side:
<img width="1697" height="813" alt="Image" src="https://github.com/user-attachments/assets/baa6cff1-a724-4710-a8c4-4805459deffb" />

Any suggestion will be appreciated!!
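For readers hitting the same wall, a hedged sketch of what usually resolves this; note that `HF_HUB_CACHE` only relocates the cache directory and does not stop `load_dataset` from resolving the repo name against the Hub. The path below is the placeholder from the screenshots, and the builder choice is an assumption:

```python
import os

# Option 1: force fully offline resolution against the existing cache.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset, load_from_disk

# Option 2: if the folder was written with save_to_disk(), reload it directly.
ds = load_from_disk("/xxx/joseph/llava_ds/vlm_ds")

# Option 3: treat the folder as raw data files, with no Hub lookup at all
# (assumes a datasets version that ships the videofolder builder).
ds = load_dataset("videofolder", data_dir="/xxx/joseph/llava_ds/vlm_ds")
```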
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7705/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7705/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7704/comments
https://api.github.com/repos/huggingface/datasets/issues/7704/events
https://github.com/huggingface/datasets/pull/7704
3,265,730,177
PR_kwDODunzps6gwtb8
7,704
Fix map() example in datasets documentation: define tokenizer before use
{ "avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4", "events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}", "followers_url": "https://api.github.com/users/Sanjaykumar030/followers", "following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}", "gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sanjaykumar030", "id": 183703408, "login": "Sanjaykumar030", "node_id": "U_kgDOCvMXcA", "organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs", "received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events", "repos_url": "https://api.github.com/users/Sanjaykumar030/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions", "type": "User", "url": "https://api.github.com/users/Sanjaykumar030", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-07-26 14:18:17+00:00
2025-08-13 13:23:18+00:00
2025-08-13 13:06:37+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7704.diff", "html_url": "https://github.com/huggingface/datasets/pull/7704", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7704.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7704" }
## Problem The current datasets.Dataset.map() example in the documentation demonstrates batched processing using a tokenizer object without defining or importing it. This causes a NameError when users copy and run the example as-is, breaking the expected seamless experience. ## Correction This PR fixes the issue by explicitly importing and initializing the tokenizer using the Transformers library (AutoTokenizer.from_pretrained("bert-base-uncased")), making the example self-contained and runnable without errors. This will help new users understand the workflow and apply the method correctly. Closes #7703
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7704/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7704/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7703/comments
https://api.github.com/repos/huggingface/datasets/issues/7703/events
https://github.com/huggingface/datasets/issues/7703
3,265,648,942
I_kwDODunzps7Cpdku
7,703
[Docs] map() example uses undefined `tokenizer` — causes NameError
{ "avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4", "events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}", "followers_url": "https://api.github.com/users/Sanjaykumar030/followers", "following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}", "gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sanjaykumar030", "id": 183703408, "login": "Sanjaykumar030", "node_id": "U_kgDOCvMXcA", "organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs", "received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events", "repos_url": "https://api.github.com/users/Sanjaykumar030/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions", "type": "User", "url": "https://api.github.com/users/Sanjaykumar030", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2025-07-26 13:35:11+00:00
2025-07-27 09:44:35+00:00
NaT
CONTRIBUTOR
null
null
null
null
## Description

The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes an error every time the example is copied. Here is the problematic line:

```python
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```

This assumes the user has already set up a tokenizer, which contradicts the goal of having self-contained, copy-paste-friendly examples.

## Problem

Users who copy and run the example as-is will encounter:

```python
NameError: name 'tokenizer' is not defined
```

This breaks the flow for users and violates Hugging Face's documentation principle that examples should "work as expected" when copied directly.

## Proposal

Update the example to include the required tokenizer setup using the Transformers library, like so:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds_tokenized = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```

This will help new users understand the workflow and apply the method correctly.

## Note

This change complements ongoing improvements like #7700, which clarifies multiprocessing in `.map()`. My change focuses on the undefined `tokenizer` that causes the `NameError`.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7703/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7703/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7702/comments
https://api.github.com/repos/huggingface/datasets/issues/7702/events
https://github.com/huggingface/datasets/pull/7702
3,265,328,549
PR_kwDODunzps6gvdYC
7,702
num_proc=0 behave like None, num_proc=1 uses one worker (not main process) and clarify num_proc documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4", "events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}", "followers_url": "https://api.github.com/users/tanuj-rai/followers", "following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}", "gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tanuj-rai", "id": 84439872, "login": "tanuj-rai", "node_id": "MDQ6VXNlcjg0NDM5ODcy", "organizations_url": "https://api.github.com/users/tanuj-rai/orgs", "received_events_url": "https://api.github.com/users/tanuj-rai/received_events", "repos_url": "https://api.github.com/users/tanuj-rai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions", "type": "User", "url": "https://api.github.com/users/tanuj-rai", "user_view_type": "public" }
[]
closed
false
null
[]
null
3
2025-07-26 08:19:39+00:00
2025-07-31 14:52:33+00:00
2025-07-31 14:52:33+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7702.diff", "html_url": "https://github.com/huggingface/datasets/pull/7702", "merged_at": "2025-07-31T14:52:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/7702.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7702" }
Fixes issue #7700.

This PR makes `num_proc=0` behave like `None` in `Dataset.map()`, disabling multiprocessing. It improves UX by aligning with the `DataLoader(num_workers=0)` behavior. The `num_proc` docstring is also updated to clearly explain valid values and behavior.

@SunMarc
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7702/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7702/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7701/comments
https://api.github.com/repos/huggingface/datasets/issues/7701/events
https://github.com/huggingface/datasets/pull/7701
3,265,236,296
PR_kwDODunzps6gvJ83
7,701
Update fsspec max version to current release 2025.7.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/5445560?v=4", "events_url": "https://api.github.com/users/rootAvish/events{/privacy}", "followers_url": "https://api.github.com/users/rootAvish/followers", "following_url": "https://api.github.com/users/rootAvish/following{/other_user}", "gists_url": "https://api.github.com/users/rootAvish/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rootAvish", "id": 5445560, "login": "rootAvish", "node_id": "MDQ6VXNlcjU0NDU1NjA=", "organizations_url": "https://api.github.com/users/rootAvish/orgs", "received_events_url": "https://api.github.com/users/rootAvish/received_events", "repos_url": "https://api.github.com/users/rootAvish/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rootAvish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rootAvish/subscriptions", "type": "User", "url": "https://api.github.com/users/rootAvish", "user_view_type": "public" }
[]
closed
false
null
[]
null
3
2025-07-26 06:47:59+00:00
2025-08-13 17:32:07+00:00
2025-07-28 11:58:11+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7701.diff", "html_url": "https://github.com/huggingface/datasets/pull/7701", "merged_at": "2025-07-28T11:58:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/7701.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7701" }
`datasets` currently asks for a max fsspec version of `2025.3.0`. This change updates it to the current latest release. The change is mainly required to resolve conflicts with other packages in an environment: in my particular case, `aider-chat`, which is part of my environment, installs `2025.5.1`, which is incompatible with `datasets`.
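The diff itself isn't reproduced here; for illustration only, the change amounts to raising the upper bound on the fsspec pin in datasets' setup.py, roughly as below (the lower bound is an assumption, not a quote from the PR):

```python
# setup.py (illustrative excerpt, not the verbatim diff)
REQUIRED_PKGS = [
    # before: "fsspec[http]>=2023.1.0,<=2025.3.0",
    "fsspec[http]>=2023.1.0,<=2025.7.0",
]
```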
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7701/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7701/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7700/comments
https://api.github.com/repos/huggingface/datasets/issues/7700/events
https://github.com/huggingface/datasets/issues/7700
3,263,922,255
I_kwDODunzps7Ci4BP
7,700
[doc] map.num_proc needs clarification
{ "avatar_url": "https://avatars.githubusercontent.com/u/196988264?v=4", "events_url": "https://api.github.com/users/sfc-gh-sbekman/events{/privacy}", "followers_url": "https://api.github.com/users/sfc-gh-sbekman/followers", "following_url": "https://api.github.com/users/sfc-gh-sbekman/following{/other_user}", "gists_url": "https://api.github.com/users/sfc-gh-sbekman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sfc-gh-sbekman", "id": 196988264, "login": "sfc-gh-sbekman", "node_id": "U_kgDOC73NaA", "organizations_url": "https://api.github.com/users/sfc-gh-sbekman/orgs", "received_events_url": "https://api.github.com/users/sfc-gh-sbekman/received_events", "repos_url": "https://api.github.com/users/sfc-gh-sbekman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sfc-gh-sbekman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sfc-gh-sbekman/subscriptions", "type": "User", "url": "https://api.github.com/users/sfc-gh-sbekman", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-07-25 17:35:09+00:00
2025-07-25 17:39:36+00:00
NaT
NONE
null
null
null
null
https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc

```
num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached shards are loaded sequentially.
```

for batch:

```
num_proc (int, optional, defaults to None): The number of processes to use for multiprocessing. If None, no multiprocessing is used. This can significantly speed up batching for large datasets.
```

So what happens with `map.num_proc`: does it behave the same as `batch.num_proc`, i.e. multiprocessing is used unless `num_proc=None`? Let's update the doc to be unambiguous.

**bonus**: we could make all of these behave like `DataLoader.num_workers`, where `num_workers==0` implies no multiprocessing. I think that's the most intuitive, IMHO: with 0 workers the main process has to do all the work, and `None` could be the same as `0`.

context: debugging a failing `map`

Thank you!
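To make the ask concrete, a small sketch of the DataLoader-style convention being proposed (the `num_proc=0` line reflects the proposal, not the behavior at the time of writing):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1_000))})

ds.map(lambda ex: {"y": ex["x"] * 2})              # num_proc=None: main process only
ds.map(lambda ex: {"y": ex["x"] * 2}, num_proc=0)  # proposed: same as None
ds.map(lambda ex: {"y": ex["x"] * 2}, num_proc=4)  # four worker processes
```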
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7700/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7700/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7699/comments
https://api.github.com/repos/huggingface/datasets/issues/7699/events
https://github.com/huggingface/datasets/issues/7699
3,261,053,171
I_kwDODunzps7CX7jz
7,699
Broken link in documentation for "Create a video dataset"
{ "avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4", "events_url": "https://api.github.com/users/cleong110/events{/privacy}", "followers_url": "https://api.github.com/users/cleong110/followers", "following_url": "https://api.github.com/users/cleong110/following{/other_user}", "gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cleong110", "id": 122366389, "login": "cleong110", "node_id": "U_kgDOB0sptQ", "organizations_url": "https://api.github.com/users/cleong110/orgs", "received_events_url": "https://api.github.com/users/cleong110/received_events", "repos_url": "https://api.github.com/users/cleong110/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cleong110/subscriptions", "type": "User", "url": "https://api.github.com/users/cleong110", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2025-07-24 19:46:28+00:00
2025-07-25 15:27:47+00:00
NaT
NONE
null
null
null
null
The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken. https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset <img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" />
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7699/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7699/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7698/comments
https://api.github.com/repos/huggingface/datasets/issues/7698/events
https://github.com/huggingface/datasets/issues/7698
3,255,350,916
I_kwDODunzps7CCLaE
7,698
NotImplementedError when using streaming=True in Google Colab environment
{ "avatar_url": "https://avatars.githubusercontent.com/u/100470741?v=4", "events_url": "https://api.github.com/users/Aniket17200/events{/privacy}", "followers_url": "https://api.github.com/users/Aniket17200/followers", "following_url": "https://api.github.com/users/Aniket17200/following{/other_user}", "gists_url": "https://api.github.com/users/Aniket17200/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aniket17200", "id": 100470741, "login": "Aniket17200", "node_id": "U_kgDOBf0P1Q", "organizations_url": "https://api.github.com/users/Aniket17200/orgs", "received_events_url": "https://api.github.com/users/Aniket17200/received_events", "repos_url": "https://api.github.com/users/Aniket17200/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aniket17200/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aniket17200/subscriptions", "type": "User", "url": "https://api.github.com/users/Aniket17200", "user_view_type": "public" }
[]
open
false
null
[]
null
2
2025-07-23 08:04:53+00:00
2025-07-23 15:06:23+00:00
NaT
NONE
null
null
null
null
### Describe the bug

When attempting to load a large dataset (like tiiuae/falcon-refinedweb or allenai/c4) using streaming=True in a standard Google Colab notebook, the process fails with a NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet. This issue persists even after upgrading datasets and huggingface_hub and restarting the session.

### Steps to reproduce the bug

1. Open a new Google Colab notebook.
2. (Optional but recommended) Run `!pip install --upgrade datasets huggingface_hub` and restart the runtime.
3. Run the following code:

```python
from datasets import load_dataset

try:
    print("Attempting to load a stream...")
    streaming_dataset = load_dataset('tiiuae/falcon-refinedweb', streaming=True)
    print("Success!")
except Exception as e:
    print(e)
```

### Expected behavior

The load_dataset command should return a streaming `IterableDataset` object without raising an error, allowing iteration over the dataset.

Actual behavior: the code fails and prints the following error traceback:

[PASTE THE FULL ERROR TRACEBACK HERE]
(Note: Copy the entire error message you received, from Traceback... to the final error line, and paste it in this section.)

### Environment info

Platform: Google Colab
datasets version: [Run !pip show datasets in Colab and paste the version here]
huggingface_hub version: [Run !pip show huggingface_hub and paste the version here]
Python version: [Run !python --version and paste the version here]
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7698/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7698/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7697/comments
https://api.github.com/repos/huggingface/datasets/issues/7697/events
https://github.com/huggingface/datasets/issues/7697
3,254,526,399
I_kwDODunzps7B_CG_
7,697
-
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2025-07-23 01:30:32+00:00
2025-07-25 15:21:39+00:00
2025-07-25 15:21:39+00:00
NONE
null
null
null
null
-
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7697/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7697/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7696/comments
https://api.github.com/repos/huggingface/datasets/issues/7696/events
https://github.com/huggingface/datasets/issues/7696
3,253,433,350
I_kwDODunzps7B63QG
7,696
load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility
{ "avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4", "events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}", "followers_url": "https://api.github.com/users/Manalelaidouni/followers", "following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}", "gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Manalelaidouni", "id": 25346345, "login": "Manalelaidouni", "node_id": "MDQ6VXNlcjI1MzQ2MzQ1", "organizations_url": "https://api.github.com/users/Manalelaidouni/orgs", "received_events_url": "https://api.github.com/users/Manalelaidouni/received_events", "repos_url": "https://api.github.com/users/Manalelaidouni/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions", "type": "User", "url": "https://api.github.com/users/Manalelaidouni", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-07-22 17:02:17+00:00
2025-07-30 14:22:21+00:00
2025-07-30 14:22:21+00:00
NONE
null
null
null
null
### Describe the bug

In the datasets 4.0.0 release, `load_dataset()` returns different audio samples compared to earlier versions. This breaks integration tests that depend on consistent sample data across different environments (first and second envs specified below).

### Steps to reproduce the bug

```python
from datasets import Audio, load_dataset

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(24000))
sample = ds[0]["audio"]["array"]
print(sample)

# sample in 3.6.0
[0.00231914 0.00245417 0.00187414 ... 0.00061956 0.00101157 0.00076325]

# sample in 4.0.0
array([0.00238037, 0.00220794, 0.00198703, ..., 0.00057983, 0.00085863, 0.00115309], dtype=float32)
```

### Expected behavior

The same dataset should load identical samples across versions to maintain reproducibility.

### Environment info

First env:
- datasets version: 3.6.0
- Platform: Windows-10-10.0.26100-SP0
- Python: 3.11.0

Second env:
- datasets version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python: 3.11.13
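An editorial suggestion for readers with similar tests, not from the reporter: since 4.0.0 swapped the audio decoding backend, bit-exact values across versions are unlikely, so a tolerance-based comparison is a safer assertion. A minimal sketch (the tolerance value is an assumption):

```python
import numpy as np

# first few samples captured under datasets 3.6.0
expected = np.array([0.00231914, 0.00245417, 0.00187414])

# assumption: a small absolute tolerance absorbs decoder/resampler drift
assert np.allclose(sample[:3], expected, atol=1e-3)
```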
{ "avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4", "events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}", "followers_url": "https://api.github.com/users/Manalelaidouni/followers", "following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}", "gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Manalelaidouni", "id": 25346345, "login": "Manalelaidouni", "node_id": "MDQ6VXNlcjI1MzQ2MzQ1", "organizations_url": "https://api.github.com/users/Manalelaidouni/orgs", "received_events_url": "https://api.github.com/users/Manalelaidouni/received_events", "repos_url": "https://api.github.com/users/Manalelaidouni/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions", "type": "User", "url": "https://api.github.com/users/Manalelaidouni", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7696/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7696/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7695/comments
https://api.github.com/repos/huggingface/datasets/issues/7695/events
https://github.com/huggingface/datasets/pull/7695
3,251,904,843
PR_kwDODunzps6gB7jS
7,695
Support downloading specific splits in load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
null
[]
null
4
2025-07-22 09:33:54+00:00
2025-07-28 17:33:30+00:00
2025-07-28 17:15:45+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7695.diff", "html_url": "https://github.com/huggingface/datasets/pull/7695", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7695.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7695" }
This PR builds on #6832 by @mariosasko.

May close #4101, #2538.

Discussion: https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130

---

### Note
This PR is a work in progress; frequent changes will be pushed.
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7695/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7695/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7694/comments
https://api.github.com/repos/huggingface/datasets/issues/7694/events
https://github.com/huggingface/datasets/issues/7694
3,247,600,408
I_kwDODunzps7BknMY
7,694
Dataset.to_json consumes excessive memory, appears to not be a streaming operation
{ "avatar_url": "https://avatars.githubusercontent.com/u/49603999?v=4", "events_url": "https://api.github.com/users/ycq0125/events{/privacy}", "followers_url": "https://api.github.com/users/ycq0125/followers", "following_url": "https://api.github.com/users/ycq0125/following{/other_user}", "gists_url": "https://api.github.com/users/ycq0125/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ycq0125", "id": 49603999, "login": "ycq0125", "node_id": "MDQ6VXNlcjQ5NjAzOTk5", "organizations_url": "https://api.github.com/users/ycq0125/orgs", "received_events_url": "https://api.github.com/users/ycq0125/received_events", "repos_url": "https://api.github.com/users/ycq0125/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ycq0125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ycq0125/subscriptions", "type": "User", "url": "https://api.github.com/users/ycq0125", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2025-07-21 07:51:25+00:00
2025-07-25 14:42:21+00:00
NaT
NONE
null
null
null
null
### Describe the bug

When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant memory operation. This behavior is unexpected, as the JSONL format is line-oriented and ideally suited for streaming writes.

This issue can easily lead to Out-of-Memory (OOM) errors when exporting large datasets, especially in memory-constrained environments like Docker containers.

<img width="1343" height="329" alt="Image" src="https://github.com/user-attachments/assets/518b4263-ad12-422d-9672-28ffe97240ce" />

### Steps to reproduce the bug

```python
import os

from datasets import load_dataset, Dataset
from loguru import logger

# A public dataset to test with
REPO_ID = "adam89/TinyStoriesChinese"
SUBSET = "default"
SPLIT = "train"
NUM_ROWS_TO_LOAD = 10  # Use a reasonably large number to see the memory spike


def run_test():
    """Loads data into memory and then saves it, triggering the memory issue."""
    logger.info("Step 1: Loading data into an in-memory Dataset object...")

    # Create an in-memory Dataset object from a stream
    # This simulates having a processed dataset ready to be saved
    iterable_dataset = load_dataset(REPO_ID, name=SUBSET, split=SPLIT, streaming=True)
    limited_stream = iterable_dataset.take(NUM_ROWS_TO_LOAD)
    in_memory_dataset = Dataset.from_generator(limited_stream.__iter__)
    logger.info(f"Dataset with {len(in_memory_dataset)} rows created in memory.")

    output_path = "./test_output.jsonl"
    logger.info(f"Step 2: Saving the dataset to {output_path} using .to_json()...")
    logger.info("Please monitor memory usage during this step.")

    # This is the step that causes the massive memory allocation
    in_memory_dataset.to_json(output_path, force_ascii=False)

    logger.info("Save operation complete.")
    os.remove(output_path)


if __name__ == "__main__":
    # To see the memory usage clearly, run this script with a memory profiler:
    # python -m memray run your_script_name.py
    # python -m memray tree xxx.bin
    run_test()
```

### Expected behavior

I would expect the .to_json(lines=True) method to be a memory-efficient, streaming operation. The memory usage should remain low and relatively constant, as data is converted and written to the file line-by-line or in small batches. The memory footprint should not be proportional to the total number of rows in the in_memory_dataset.

### Environment info

datasets version: 3.6.0
Python version: 3.9.18
OS: macOS 15.3.1 (arm64)
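A hedged workaround sketch while the underlying behavior is investigated (editorial, not the library's fix): a `Dataset` can be iterated in batches, so a roughly constant-memory JSONL export can be written by hand:

```python
import json

def to_jsonl_streaming(dataset, path, batch_size=1000):
    """Write a Dataset to JSON Lines with roughly constant memory."""
    with open(path, "w", encoding="utf-8") as f:
        for batch in dataset.iter(batch_size=batch_size):
            keys = list(batch.keys())  # batch is a dict of column -> list
            for row_values in zip(*batch.values()):
                f.write(json.dumps(dict(zip(keys, row_values)), ensure_ascii=False) + "\n")

to_jsonl_streaming(in_memory_dataset, "./test_output.jsonl")
```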
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7694/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7694/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7693/comments
https://api.github.com/repos/huggingface/datasets/issues/7693/events
https://github.com/huggingface/datasets/issues/7693
3,246,369,678
I_kwDODunzps7Bf6uO
7,693
Dataset scripts are no longer supported, but found superb.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/114297534?v=4", "events_url": "https://api.github.com/users/edwinzajac/events{/privacy}", "followers_url": "https://api.github.com/users/edwinzajac/followers", "following_url": "https://api.github.com/users/edwinzajac/following{/other_user}", "gists_url": "https://api.github.com/users/edwinzajac/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/edwinzajac", "id": 114297534, "login": "edwinzajac", "node_id": "U_kgDOBtAKvg", "organizations_url": "https://api.github.com/users/edwinzajac/orgs", "received_events_url": "https://api.github.com/users/edwinzajac/received_events", "repos_url": "https://api.github.com/users/edwinzajac/repos", "site_admin": false, "starred_url": "https://api.github.com/users/edwinzajac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edwinzajac/subscriptions", "type": "User", "url": "https://api.github.com/users/edwinzajac", "user_view_type": "public" }
[]
open
false
null
[]
null
18
2025-07-20 13:48:06+00:00
2025-09-04 10:32:12+00:00
NaT
NONE
null
null
null
null
### Describe the bug

Hello, I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines), but the tutorial seems to work only on old datasets versions. I get the following error:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[65], line 1
----> 1 dataset = datasets.load_dataset("superb", name="asr", split="test")
      3 # KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
      4 # as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
      5 for out in tqdm(pipe(KeyDataset(dataset, "file"))):

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
   1387 verification_mode = VerificationMode(
   1388     (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
   1389 )
   1391 # Create a dataset builder
-> 1392 builder_instance = load_dataset_builder(
   1393     path=path,
   1394     name=name,
   1395     data_dir=data_dir,
   1396     data_files=data_files,
   1397     cache_dir=cache_dir,
   1398     features=features,
   1399     download_config=download_config,
   1400     download_mode=download_mode,
   1401     revision=revision,
   1402     token=token,
   1403     storage_options=storage_options,
   1404     **config_kwargs,
   1405 )
   1407 # Return iterable dataset in case of streaming
   1408 if streaming:

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs)
   1130 if features is not None:
   1131     features = _fix_for_backward_compatible_features(features)
-> 1132 dataset_module = dataset_module_factory(
   1133     path,
   1134     revision=revision,
   1135     download_config=download_config,
   1136     download_mode=download_mode,
   1137     data_dir=data_dir,
   1138     data_files=data_files,
   1139     cache_dir=cache_dir,
   1140 )
   1141 # Get dataset builder class
   1142 builder_kwargs = dataset_module.builder_kwargs

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
   1026 if isinstance(e1, FileNotFoundError):
   1027     raise FileNotFoundError(
   1028         f"Couldn't find any data file at {relative_to_absolute_path(path)}. "
   1029         f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
   1030     ) from None
-> 1031 raise e1 from None
   1032 else:
   1033     raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.")

File ~/Desktop/debug/llm_course/.venv/lib/python3.11/site-packages/datasets/load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs)
    981 try:
    982     api.hf_hub_download(
    983         repo_id=path,
    984         filename=filename,
   (...)
    987         proxies=download_config.proxies,
    988     )
--> 989     raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}")
    990 except EntryNotFoundError:
    991     # Use the infos from the parquet export except in some cases:
    992     if data_dir or data_files or (revision and revision != "main"):

RuntimeError: Dataset scripts are no longer supported, but found superb.py
```

NB: I tried to replace "superb" with "anton-l/superb_demo" but I get a 'torchcodec' importing error. Maybe I misunderstood something.

### Steps to reproduce the bug

```python
import datasets
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from tqdm.auto import tqdm

pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h", device=0)
dataset = datasets.load_dataset("superb", name="asr", split="test")

# KeyDataset (only *pt*) will simply return the item in the dict returned by the dataset item
# as we're not interested in the *target* part of the dataset. For sentence pair use KeyPairDataset
for out in tqdm(pipe(KeyDataset(dataset, "file"))):
    print(out)
    # {"text": "NUMBER TEN FRESH NELLY IS WAITING ON YOU GOOD NIGHT HUSBAND"}
    # {"text": ....}
    # ....
```

### Expected behavior

Get the tutorial's expected results.

### Environment info

--- SYSTEM INFO ---
Operating System: Ubuntu 24.10
Kernel: Linux 6.11.0-29-generic
Architecture: x86-64
--- PYTHON ---
Python 3.11.13
--- VENV INFO ----
datasets=4.0.0
transformers=4.53
tqdm=4.67.1
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7693/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7693/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7692
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7692/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7692/comments
https://api.github.com/repos/huggingface/datasets/issues/7692/events
https://github.com/huggingface/datasets/issues/7692
3,246,268,635
I_kwDODunzps7BfiDb
7,692
xopen: invalid start byte for streaming dataset with trust_remote_code=True
{ "avatar_url": "https://avatars.githubusercontent.com/u/5188731?v=4", "events_url": "https://api.github.com/users/sedol1339/events{/privacy}", "followers_url": "https://api.github.com/users/sedol1339/followers", "following_url": "https://api.github.com/users/sedol1339/following{/other_user}", "gists_url": "https://api.github.com/users/sedol1339/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sedol1339", "id": 5188731, "login": "sedol1339", "node_id": "MDQ6VXNlcjUxODg3MzE=", "organizations_url": "https://api.github.com/users/sedol1339/orgs", "received_events_url": "https://api.github.com/users/sedol1339/received_events", "repos_url": "https://api.github.com/users/sedol1339/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sedol1339/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sedol1339/subscriptions", "type": "User", "url": "https://api.github.com/users/sedol1339", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2025-07-20 11:08:20+00:00
2025-07-25 14:38:54+00:00
NaT
NONE
null
null
null
null
### Describe the bug

I am trying to load the YODAS2 dataset with datasets==3.6.0

```python
from datasets import load_dataset

next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```

And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte`

The cause of the error is the following:

```python
from datasets.utils.file_utils import xopen

filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json'
xopen(filepath, 'r').read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```

And the cause of this is the following:

```python
import fsspec

fsspec.open(
    'hf://datasets/espnet/yodas2@c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json',
    mode='r',
    hf={'token': None, 'endpoint': 'https://huggingface.co'},
).open().read()
>>> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte
```

Is it true that streaming=True loading is no longer supported for trust_remote_code=True, even with datasets==3.6.0? This breaks backward compatibility.

### Steps to reproduce the bug

```python
from datasets import load_dataset

next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True)))
```

### Expected behavior

No errors expected

### Environment info

datasets==3.6.0, ubuntu 24.04
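A hedged diagnostic sketch (editorial, not from the reporter): a non-UTF-8 byte this early in the stream often means the payload is compressed or otherwise binary, so reading the magic bytes in binary mode can narrow down the cause without triggering the decode error:

```python
from datasets.utils.file_utils import xopen

filepath = 'https://huggingface.co/datasets/espnet/yodas2/resolve/c9674490249665d658f527e2684848377108d82c/data/ru000/text/00000000.json'

with xopen(filepath, 'rb') as f:  # binary mode: no UTF-8 decoding happens
    head = f.read(4)
print(head.hex())  # '1f8b....' would suggest gzip, '28b52ffd' zstd
```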
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7692/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7692/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7691/comments
https://api.github.com/repos/huggingface/datasets/issues/7691/events
https://github.com/huggingface/datasets/issues/7691
3,245,547,170
I_kwDODunzps7Bcx6i
7,691
Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4", "events_url": "https://api.github.com/users/cleong110/events{/privacy}", "followers_url": "https://api.github.com/users/cleong110/followers", "following_url": "https://api.github.com/users/cleong110/following{/other_user}", "gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cleong110", "id": 122366389, "login": "cleong110", "node_id": "U_kgDOB0sptQ", "organizations_url": "https://api.github.com/users/cleong110/orgs", "received_events_url": "https://api.github.com/users/cleong110/received_events", "repos_url": "https://api.github.com/users/cleong110/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cleong110/subscriptions", "type": "User", "url": "https://api.github.com/users/cleong110", "user_view_type": "public" }
[]
open
false
null
[]
null
5
2025-07-19 18:40:27+00:00
2025-07-25 08:51:10+00:00
NaT
NONE
null
null
null
null
### Describe the bug I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2GB. The instant I hit one of the shards with one of those videos, I get an ArrowCapacityError, even with streaming. I made a config for the dataset that specifically includes just one problem shard, and the error triggers the moment load_dataset() runs, even with streaming=True: ``` ds = load_dataset("bible-nlp/sign-bibles", "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard", streaming=True, split="train") ``` This gives: ``` File "/opt/home/cleong/projects/semantic_and_visual_similarity/sign-bibles-dataset/sign_bibles_dataset/tasks/test_iteration.py", line 13, in iterate_keys ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train") File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/load.py", line 1409, in load_dataset return builder_instance.as_streaming_dataset(split=split) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^ File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/builder.py", line 1225, in as_streaming_dataset splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} ~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^ File "/opt/home/cleong/envs/sign-bibles-dataset/lib/python3.13/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 88, in _split_generators pa.Table.from_pylist(cast_to_python_objects([example], only_1d_for_numpy=True)) ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/table.pxi", line 2046, in pyarrow.lib._Tabular.from_pylist File "pyarrow/table.pxi", line 6431, in pyarrow.lib._from_pylist File "pyarrow/table.pxi", line 4893, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 1607, in pyarrow.lib._sanitize_arrays File "pyarrow/table.pxi", line 1588, in pyarrow.lib._schema_from_arrays File "pyarrow/array.pxi", line 375, in pyarrow.lib.array File "pyarrow/array.pxi", line 45, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 3980158992 ``` ### Steps to reproduce the bug ```python #!/usr/bin/env python import argparse from datasets import get_dataset_config_names, load_dataset from tqdm import tqdm from pyarrow.lib import ArrowCapacityError, ArrowInvalid def iterate_keys(language_subset: str) -> None: """Iterate over all samples in the Sign Bibles dataset and print idx and sample key.""" # https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/loading_methods#datasets.load_dataset ds = load_dataset("bible-nlp/sign-bibles", language_subset, streaming=True, split="train") print(f"\n==> Loaded dataset config '{language_subset}'") idx = 0 estimated_shard_index = 0 samples_per_shard = 5 with tqdm(desc=f"{language_subset} samples") as pbar: iterator = iter(ds) while True: try: if idx % samples_per_shard == 0 and idx > 0: # 5 samples per shard: 0, 1, 2, 3, 4 print(f"Estimated Shard idx (starting at 0, {samples_per_shard}/shard): {estimated_shard_index}") estimated_shard_index += 1 sample = next(iterator) sample_key = sample.get("__key__", "missing-key") print(f"[{language_subset}] idx={idx}, key={sample_key}") idx += 1 pbar.update(1) except StopIteration: 
print(f"Finished iterating through {idx} samples of {language_subset}") break except (ArrowCapacityError, ArrowInvalid) as e: print(f"PyArrow error on idx={idx}, config={language_subset}: {e}") idx += 1 pbar.update(1) continue except KeyError as e: print(f"Missing key error on idx={idx}, config={language_subset}: {e}") idx += 1 pbar.update(1) continue def main(): configs = get_dataset_config_names("bible-nlp/sign-bibles") print(f"Available configs: {configs}") configs = [ "ase_chronological_bible_translation_in_american_sign_language_119_introductions_and_passages_debugging_problem_shard" ] for language_subset in configs: print(f"TESTING CONFIG {language_subset}") iterate_keys(language_subset) # try: # except (ArrowCapacityError, ArrowInvalid) as e: # print(f"PyArrow error at config level for {language_subset}: {e}") # continue # except RuntimeError as e: # print(f"RuntimeError at config level for {language_subset}: {e}") # continue if __name__ == "__main__": parser = argparse.ArgumentParser(description="Iterate through Sign Bibles dataset and print sample keys.") args = parser.parse_args() main() ``` ### Expected behavior I expect, when I load with streaming=True, that there should not be any data loaded or anything like that. https://huggingface.co/docs/datasets/main/en/package_reference/loading_methods#datasets.load_dataset says that with streaming=true, I did expect to have some trouble with large files, but that the streaming mode would not actually try to load them unless requested, e.g. with sample["mp4"] >In the streaming case: > Don’t download or cache anything. Instead, the dataset is lazily loaded and will be streamed on-the-fly when iterating on it. ### Environment info Local setup: Conda environment on Ubuntu, pip list includes the following datasets 4.0.0 pyarrow 20.0.0 Verified on Colab: https://colab.research.google.com/drive/1HdN8stlROWrLSYXUoNeV0vQ9pClhIVM8?usp=sharing, though there it crashes by using up all available RAM
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7691/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7691/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7690/comments
https://api.github.com/repos/huggingface/datasets/issues/7690/events
https://github.com/huggingface/datasets/pull/7690
3,244,380,691
PR_kwDODunzps6fozag
7,690
HDF5 support
{ "avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4", "events_url": "https://api.github.com/users/klamike/events{/privacy}", "followers_url": "https://api.github.com/users/klamike/followers", "following_url": "https://api.github.com/users/klamike/following{/other_user}", "gists_url": "https://api.github.com/users/klamike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/klamike", "id": 17013474, "login": "klamike", "node_id": "MDQ6VXNlcjE3MDEzNDc0", "organizations_url": "https://api.github.com/users/klamike/orgs", "received_events_url": "https://api.github.com/users/klamike/received_events", "repos_url": "https://api.github.com/users/klamike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/klamike/subscriptions", "type": "User", "url": "https://api.github.com/users/klamike", "user_view_type": "public" }
[]
closed
false
null
[]
null
8
2025-07-18 21:09:41+00:00
2025-08-19 15:18:58+00:00
2025-08-19 13:28:53+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7690.diff", "html_url": "https://github.com/huggingface/datasets/pull/7690", "merged_at": "2025-08-19T13:28:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/7690.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7690" }
This PR adds support for tabular HDF5 file(s) by converting rows to Arrow tables. It supports columns with the usual dtypes, including up to 5-dimensional arrays, as well as complex/compound types via `Features(dict)`. All datasets within the HDF5 file should have rows on the first dimension (groups/subgroups are still allowed). Closes #3113. Replaces #7625, which only supports a relatively small subset of HDF5.
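A minimal sketch of the expected layout (the h5py side is standard; the `load_dataset("hdf5", ...)` call is an assumption about the packaged module name this PR introduces):

```python
import h5py
import numpy as np

# Every dataset keeps its rows on the first dimension (here: 100 rows)
with h5py.File("data.h5", "w") as f:
    f.create_dataset("image", data=np.zeros((100, 3, 32, 32), dtype=np.float32))
    meta = f.create_group("meta")  # groups/subgroups are allowed
    meta.create_dataset("label", data=np.arange(100))

# Assumed loader name for the new packaged module:
# from datasets import load_dataset
# ds = load_dataset("hdf5", data_files="data.h5", split="train")
```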
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7690/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7690/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7689/comments
https://api.github.com/repos/huggingface/datasets/issues/7689/events
https://github.com/huggingface/datasets/issues/7689
3,242,580,301
I_kwDODunzps7BRdlN
7,689
BadRequestError for loading dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/45011687?v=4", "events_url": "https://api.github.com/users/WPoelman/events{/privacy}", "followers_url": "https://api.github.com/users/WPoelman/followers", "following_url": "https://api.github.com/users/WPoelman/following{/other_user}", "gists_url": "https://api.github.com/users/WPoelman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/WPoelman", "id": 45011687, "login": "WPoelman", "node_id": "MDQ6VXNlcjQ1MDExNjg3", "organizations_url": "https://api.github.com/users/WPoelman/orgs", "received_events_url": "https://api.github.com/users/WPoelman/received_events", "repos_url": "https://api.github.com/users/WPoelman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/WPoelman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WPoelman/subscriptions", "type": "User", "url": "https://api.github.com/users/WPoelman", "user_view_type": "public" }
[]
closed
false
null
[]
null
17
2025-07-18 09:30:04+00:00
2025-07-18 11:59:51+00:00
2025-07-18 11:52:29+00:00
NONE
null
null
null
null
### Describe the bug Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error: ``` huggingface_hub.errors.BadRequestError: (Request ID: ...) Bad request: * Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand ✖ Invalid input: expected array, received string → at paths ✖ Invalid input: expected boolean, received string → at expand ``` I tried with both `4.0.0` and `3.5.1` since this dataset uses `trust_remote_code`, but I get the same error with both. What can I do to load the dataset? I checked the documentation and GitHub issues here, but couldn't find a solution. ### Steps to reproduce the bug ```python import datasets ds = datasets.load_dataset("Helsinki-NLP/europarl", "en-fr", streaming=True, trust_remote_code=True)["train"] ``` ### Expected behavior That the dataset loads as it did a couple days ago. ### Environment info - `datasets` version: 3.5.1 - Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28 - Python version: 3.11.11 - `huggingface_hub` version: 0.30.2 - PyArrow version: 20.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4", "events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}", "followers_url": "https://api.github.com/users/sergiopaniego/followers", "following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}", "gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sergiopaniego", "id": 17179696, "login": "sergiopaniego", "node_id": "MDQ6VXNlcjE3MTc5Njk2", "organizations_url": "https://api.github.com/users/sergiopaniego/orgs", "received_events_url": "https://api.github.com/users/sergiopaniego/received_events", "repos_url": "https://api.github.com/users/sergiopaniego/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions", "type": "User", "url": "https://api.github.com/users/sergiopaniego", "user_view_type": "public" }
{ "+1": 23, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 23, "url": "https://api.github.com/repos/huggingface/datasets/issues/7689/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7689/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7688/comments
https://api.github.com/repos/huggingface/datasets/issues/7688/events
https://github.com/huggingface/datasets/issues/7688
3,238,851,443
I_kwDODunzps7BDPNz
7,688
No module named "distributed"
{ "avatar_url": "https://avatars.githubusercontent.com/u/45058324?v=4", "events_url": "https://api.github.com/users/yingtongxiong/events{/privacy}", "followers_url": "https://api.github.com/users/yingtongxiong/followers", "following_url": "https://api.github.com/users/yingtongxiong/following{/other_user}", "gists_url": "https://api.github.com/users/yingtongxiong/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yingtongxiong", "id": 45058324, "login": "yingtongxiong", "node_id": "MDQ6VXNlcjQ1MDU4MzI0", "organizations_url": "https://api.github.com/users/yingtongxiong/orgs", "received_events_url": "https://api.github.com/users/yingtongxiong/received_events", "repos_url": "https://api.github.com/users/yingtongxiong/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yingtongxiong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yingtongxiong/subscriptions", "type": "User", "url": "https://api.github.com/users/yingtongxiong", "user_view_type": "public" }
[]
open
false
null
[]
null
3
2025-07-17 09:32:35+00:00
2025-07-25 15:14:19+00:00
NaT
NONE
null
null
null
null
### Describe the bug Hello, when I run the command "from datasets.distributed import split_dataset_by_node", I always get the error "No module named 'datasets.distributed'" with different versions such as 4.0.0, 2.21.0, and so on. How can I solve this? ### Steps to reproduce the bug 1. pip install datasets 2. from datasets.distributed import split_dataset_by_node ### Expected behavior The command "from datasets.distributed import split_dataset_by_node" should run successfully ### Environment info python: 3.12
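`datasets.distributed` has been part of the package for many releases, so a likely cause (an assumption) is a local file or folder named `datasets` shadowing the installed library; a quick diagnostic sketch:

```python
import datasets

# If this prints a path inside your project rather than site-packages,
# a local datasets.py / datasets/ directory is shadowing the real package
print(datasets.__file__)
print(datasets.__version__)

from datasets.distributed import split_dataset_by_node  # should now import cleanly
```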
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7688/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7688/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7687/comments
https://api.github.com/repos/huggingface/datasets/issues/7687/events
https://github.com/huggingface/datasets/issues/7687
3,238,760,301
I_kwDODunzps7BC49t
7,687
Datasets keeps rebuilding the dataset every time i call the python script
{ "avatar_url": "https://avatars.githubusercontent.com/u/58883113?v=4", "events_url": "https://api.github.com/users/CALEB789/events{/privacy}", "followers_url": "https://api.github.com/users/CALEB789/followers", "following_url": "https://api.github.com/users/CALEB789/following{/other_user}", "gists_url": "https://api.github.com/users/CALEB789/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CALEB789", "id": 58883113, "login": "CALEB789", "node_id": "MDQ6VXNlcjU4ODgzMTEz", "organizations_url": "https://api.github.com/users/CALEB789/orgs", "received_events_url": "https://api.github.com/users/CALEB789/received_events", "repos_url": "https://api.github.com/users/CALEB789/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CALEB789/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CALEB789/subscriptions", "type": "User", "url": "https://api.github.com/users/CALEB789", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2025-07-17 09:03:38+00:00
2025-07-25 15:21:31+00:00
NaT
NONE
null
null
null
null
### Describe the bug Every time the script runs, the dataset is rebuilt and the number of cached samples somehow grows. This can cause a 12 MB dataset to accumulate additional built versions of 400 MB+ <img width="363" height="481" alt="Image" src="https://github.com/user-attachments/assets/766ce958-bd2b-41bc-b950-86710259bfdc" /> ### Steps to reproduce the bug `from datasets import load_dataset s = load_dataset('~/.cache/huggingface/datasets/databricks___databricks-dolly-15k')['train'] ` 1. A dataset needs to be available in the .cache folder 2. Run the code multiple times; every time it runs, more versions are created ### Expected behavior The dataset should load from the existing cache; instead, the number of samples increases every time the script runs ### Environment info - `datasets` version: 3.6.0 - Platform: Windows-11-10.0.26100-SP0 - Python version: 3.13.3 - `huggingface_hub` version: 0.32.3 - PyArrow version: 20.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0
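A sketch of the likely fix (an assumption about the cause: pointing `load_dataset` at a path inside the HF cache makes it treat the cache directory as a new local dataset and rebuild it on each run):

```python
from datasets import load_dataset

# Load by Hub repo id; the cached Arrow build under HF_HOME is found and reused
ds = load_dataset("databricks/databricks-dolly-15k", split="train")
```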
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7687/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7687/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7686/comments
https://api.github.com/repos/huggingface/datasets/issues/7686/events
https://github.com/huggingface/datasets/issues/7686
3,237,201,090
I_kwDODunzps7A88TC
7,686
load_dataset does not check .no_exist files in the hub cache
{ "avatar_url": "https://avatars.githubusercontent.com/u/3627235?v=4", "events_url": "https://api.github.com/users/jmaccarl/events{/privacy}", "followers_url": "https://api.github.com/users/jmaccarl/followers", "following_url": "https://api.github.com/users/jmaccarl/following{/other_user}", "gists_url": "https://api.github.com/users/jmaccarl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jmaccarl", "id": 3627235, "login": "jmaccarl", "node_id": "MDQ6VXNlcjM2MjcyMzU=", "organizations_url": "https://api.github.com/users/jmaccarl/orgs", "received_events_url": "https://api.github.com/users/jmaccarl/received_events", "repos_url": "https://api.github.com/users/jmaccarl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jmaccarl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmaccarl/subscriptions", "type": "User", "url": "https://api.github.com/users/jmaccarl", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-07-16 20:04:00+00:00
2025-07-16 20:04:00+00:00
NaT
NONE
null
null
null
null
### Describe the bug I'm not entirely sure if this should be submitted as a bug in the `datasets` library or the `huggingface_hub` library, given it could be fixed at different levels of the stack. The fundamental issue is that the `load_dataset` API doesn't use the `.no_exist` files in the hub cache, unlike other wrapper APIs that do. This is because `utils.file_utils.cached_path` directly calls `hf_hub_download` instead of using `file_download.try_to_load_from_cache` from `huggingface_hub` (see the `transformers` library's `utils.hub.cached_files` for one alternate example). This results in unnecessary metadata HTTP requests on every call for files that don't exist. It won't generate the .no_exist cache files, nor will it use them. ### Steps to reproduce the bug Run the following snippet as one example (setting cache dirs to clean paths for clarity): `env HF_HOME=~/local_hf_hub python repro.py` ``` from datasets import load_dataset import huggingface_hub # monkeypatch to print out metadata requests being made original_get_hf_file_metadata = huggingface_hub.file_download.get_hf_file_metadata def get_hf_file_metadata_wrapper(*args, **kwargs): print("File metadata request made (get_hf_file_metadata):", args, kwargs) return original_get_hf_file_metadata(*args, **kwargs) # Apply the patch huggingface_hub.file_download.get_hf_file_metadata = get_hf_file_metadata_wrapper dataset = load_dataset( "Salesforce/wikitext", "wikitext-2-v1", split="test", trust_remote_code=True, cache_dir="~/local_datasets", revision="b08601e04326c79dfdd32d625aee71d232d685c3", ) ``` This may be called over and over again, and you will see the same calls for files that don't exist: ``` File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/wikitext.py', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None} File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/.huggingface.yaml', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None} File metadata request made (get_hf_file_metadata): () {'url': 'https://huggingface.co/datasets/Salesforce/wikitext/resolve/b08601e04326c79dfdd32d625aee71d232d685c3/dataset_infos.json', 'proxies': None, 'timeout': 10, 'headers': {'user-agent': 'datasets/3.6.0; hf_hub/0.33.2; python/3.12.11; torch/2.7.0; huggingface_hub/0.33.2; pyarrow/20.0.0; jax/0.5.3'}, 'token': None} ``` And you can see that the .no_exist folder is never created: ``` $ ls ~/local_hf_hub/hub/datasets--Salesforce--wikitext/ blobs refs snapshots ``` ### Expected behavior The expected behavior is for the print "File metadata request made" to stop after the first call, and for the .no_exist directory & files to be populated under ~/local_hf_hub/hub/datasets--Salesforce--wikitext/ ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-6.5.13-65-650-4141-22041-coreweave-amd64-85c45edc-x86_64-with-glibc2.35 - Python version: 3.12.11 - `huggingface_hub` version: 0.33.2 - PyArrow version: 20.0.0 - Pandas version: 2.3.1 - `fsspec` version: 2024.9.0
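For comparison, a sketch of the cache-first check that transformers-style wrappers perform, using `huggingface_hub` APIs (this is the suggested fix pattern, not current `datasets` behavior; `_CACHED_NO_EXIST` is the sentinel `try_to_load_from_cache` returns for files cached as missing):

```python
from huggingface_hub import try_to_load_from_cache
from huggingface_hub.file_download import _CACHED_NO_EXIST

resolved = try_to_load_from_cache(
    repo_id="Salesforce/wikitext",
    filename="wikitext.py",
    repo_type="dataset",
    revision="b08601e04326c79dfdd32d625aee71d232d685c3",
)
if resolved is _CACHED_NO_EXIST:
    print("cached as missing: skip the HTTP metadata request")
elif resolved is None:
    print("not cached either way: fall back to hf_hub_download")
else:
    print("cached locally at", resolved)
```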
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7686/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7686/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7685/comments
https://api.github.com/repos/huggingface/datasets/issues/7685/events
https://github.com/huggingface/datasets/issues/7685
3,236,979,340
I_kwDODunzps7A8GKM
7,685
Inconsistent range request behavior for parquet REST api
{ "avatar_url": "https://avatars.githubusercontent.com/u/21327470?v=4", "events_url": "https://api.github.com/users/universalmind303/events{/privacy}", "followers_url": "https://api.github.com/users/universalmind303/followers", "following_url": "https://api.github.com/users/universalmind303/following{/other_user}", "gists_url": "https://api.github.com/users/universalmind303/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/universalmind303", "id": 21327470, "login": "universalmind303", "node_id": "MDQ6VXNlcjIxMzI3NDcw", "organizations_url": "https://api.github.com/users/universalmind303/orgs", "received_events_url": "https://api.github.com/users/universalmind303/received_events", "repos_url": "https://api.github.com/users/universalmind303/repos", "site_admin": false, "starred_url": "https://api.github.com/users/universalmind303/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/universalmind303/subscriptions", "type": "User", "url": "https://api.github.com/users/universalmind303", "user_view_type": "public" }
[]
open
false
null
[]
null
5
2025-07-16 18:39:44+00:00
2025-08-11 08:16:54+00:00
NaT
NONE
null
null
null
null
### Describe the bug First off, I do apologize if this is not the correct repo for submitting this issue. Please direct me to another one if it's more appropriate elsewhere. The datasets rest api is inconsistently giving `416 Range Not Satisfiable` when using a range request to get portions of the parquet files. More often than not, I am seeing 416, but other times for an identical request, it gives me the data along with `206 Partial Content` as expected. ### Steps to reproduce the bug repeating this request multiple times will return either 416 or 206. ```sh $ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet" ``` Note: this is not limited to just the above file, I tried with many different datasets and am able to consistently reproduce issue across multiple datasets. when the 416 is returned, I get the following headers ``` < HTTP/2 416 < content-type: text/html < content-length: 49 < server: CloudFront < date: Wed, 16 Jul 2025 14:58:43 GMT < expires: Wed, 16 Jul 2025 14:58:43 GMT < content-range: bytes */177 < x-cache: Error from cloudfront < via: 1.1 873527676a354c5998cad133525df9c0.cloudfront.net (CloudFront) < ``` this suggests to me that there is likely a CDN/caching/routing issue happening and the request is not getting routed properly. Full verbose output via curl. <details> ❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved. * IPv6: (none) * IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86 * Trying 18.160.102.96:443... * Connected to huggingface.co (18.160.102.96) port 443 * ALPN: curl offers h2,http/1.1 * (304) (OUT), TLS handshake, Client hello (1): } [319 bytes data] * CAfile: /etc/ssl/cert.pem * CApath: none * (304) (IN), TLS handshake, Server hello (2): { [122 bytes data] * (304) (IN), TLS handshake, Unknown (8): { [19 bytes data] * (304) (IN), TLS handshake, Certificate (11): { [3821 bytes data] * (304) (IN), TLS handshake, CERT verify (15): { [264 bytes data] * (304) (IN), TLS handshake, Finished (20): { [36 bytes data] * (304) (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF * ALPN: server accepted h2 * Server certificate: * subject: CN=huggingface.co * start date: Apr 13 00:00:00 2025 GMT * expire date: May 12 23:59:59 2026 GMT * subjectAltName: host "huggingface.co" matched cert's "huggingface.co" * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02 * SSL certificate verify ok. 
* using HTTP/2 * [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet * [HTTP/2] [1] [:method: GET] * [HTTP/2] [1] [:scheme: https] * [HTTP/2] [1] [:authority: huggingface.co] * [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet] * [HTTP/2] [1] [user-agent: curl/8.7.1] * [HTTP/2] [1] [accept: */*] * [HTTP/2] [1] [range: bytes=217875070-218006142] > GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2 > Host: huggingface.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 416 < content-type: text/html < content-length: 49 < server: CloudFront < date: Wed, 16 Jul 2025 14:58:41 GMT < expires: Wed, 16 Jul 2025 14:58:41 GMT < content-range: bytes */177 < x-cache: Error from cloudfront < via: 1.1 e2f1bed2f82641d6d5439eac20a790ba.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P1 < x-amz-cf-id: Mo8hn-EZLJqE_hoBday8DdhmVXhV3v9-Wg-EEHI6gX_fNlkanVIUBA== < { [49 bytes data] 100 49 100 49 0 0 2215 0 --:--:-- --:--:-- --:--:-- 2227 * Connection #0 to host huggingface.co left intact (.venv) Daft main*​* ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved. * IPv6: (none) * IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86 * Trying 18.160.102.96:443... * Connected to huggingface.co (18.160.102.96) port 443 * ALPN: curl offers h2,http/1.1 * (304) (OUT), TLS handshake, Client hello (1): } [319 bytes data] * CAfile: /etc/ssl/cert.pem * CApath: none * (304) (IN), TLS handshake, Server hello (2): { [122 bytes data] * (304) (IN), TLS handshake, Unknown (8): { [19 bytes data] * (304) (IN), TLS handshake, Certificate (11): { [3821 bytes data] * (304) (IN), TLS handshake, CERT verify (15): { [264 bytes data] * (304) (IN), TLS handshake, Finished (20): { [36 bytes data] * (304) (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF * ALPN: server accepted h2 * Server certificate: * subject: CN=huggingface.co * start date: Apr 13 00:00:00 2025 GMT * expire date: May 12 23:59:59 2026 GMT * subjectAltName: host "huggingface.co" matched cert's "huggingface.co" * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02 * SSL certificate verify ok. 
* using HTTP/2 * [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet * [HTTP/2] [1] [:method: GET] * [HTTP/2] [1] [:scheme: https] * [HTTP/2] [1] [:authority: huggingface.co] * [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet] * [HTTP/2] [1] [user-agent: curl/8.7.1] * [HTTP/2] [1] [accept: */*] * [HTTP/2] [1] [range: bytes=217875070-218006142] > GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2 > Host: huggingface.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 416 < content-type: text/html < content-length: 49 < server: CloudFront < date: Wed, 16 Jul 2025 14:58:42 GMT < expires: Wed, 16 Jul 2025 14:58:42 GMT < content-range: bytes */177 < x-cache: Error from cloudfront < via: 1.1 bb352451e1eacf85f8786ee3ecd07eca.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P1 < x-amz-cf-id: 9xy-CX9KvlS8Ye4eFr8jXMDobZHFkvdyvkLJGmK_qiwZQywCCwfq7Q== < { [49 bytes data] 100 49 100 49 0 0 2381 0 --:--:-- --:--:-- --:--:-- 2450 * Connection #0 to host huggingface.co left intact (.venv) Daft main*​* ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved. * IPv6: (none) * IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86 * Trying 18.160.102.96:443... * Connected to huggingface.co (18.160.102.96) port 443 * ALPN: curl offers h2,http/1.1 * (304) (OUT), TLS handshake, Client hello (1): } [319 bytes data] * CAfile: /etc/ssl/cert.pem * CApath: none * (304) (IN), TLS handshake, Server hello (2): { [122 bytes data] * (304) (IN), TLS handshake, Unknown (8): { [19 bytes data] * (304) (IN), TLS handshake, Certificate (11): { [3821 bytes data] * (304) (IN), TLS handshake, CERT verify (15): { [264 bytes data] * (304) (IN), TLS handshake, Finished (20): { [36 bytes data] * (304) (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF * ALPN: server accepted h2 * Server certificate: * subject: CN=huggingface.co * start date: Apr 13 00:00:00 2025 GMT * expire date: May 12 23:59:59 2026 GMT * subjectAltName: host "huggingface.co" matched cert's "huggingface.co" * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02 * SSL certificate verify ok. 
* using HTTP/2 * [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet * [HTTP/2] [1] [:method: GET] * [HTTP/2] [1] [:scheme: https] * [HTTP/2] [1] [:authority: huggingface.co] * [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet] * [HTTP/2] [1] [user-agent: curl/8.7.1] * [HTTP/2] [1] [accept: */*] * [HTTP/2] [1] [range: bytes=217875070-218006142] > GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2 > Host: huggingface.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 416 < content-type: text/html < content-length: 49 < server: CloudFront < date: Wed, 16 Jul 2025 14:58:43 GMT < expires: Wed, 16 Jul 2025 14:58:43 GMT < content-range: bytes */177 < x-cache: Error from cloudfront < via: 1.1 873527676a354c5998cad133525df9c0.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P1 < x-amz-cf-id: wtBgwY4u4YJ2pD1ovM8UV770UiJoqWfs7i7VzschDyoLv5g7swGGmw== < { [49 bytes data] 100 49 100 49 0 0 2273 0 --:--:-- --:--:-- --:--:-- 2333 * Connection #0 to host huggingface.co left intact (.venv) Daft main*​* ≡❯ curl -v -L -H "Range: bytes=217875070-218006142" -o output.parquet "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet" % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host huggingface.co:443 was resolved. * IPv6: (none) * IPv4: 18.160.102.96, 18.160.102.110, 18.160.102.4, 18.160.102.86 * Trying 18.160.102.96:443... * Connected to huggingface.co (18.160.102.96) port 443 * ALPN: curl offers h2,http/1.1 * (304) (OUT), TLS handshake, Client hello (1): } [319 bytes data] * CAfile: /etc/ssl/cert.pem * CApath: none * (304) (IN), TLS handshake, Server hello (2): { [122 bytes data] * (304) (IN), TLS handshake, Unknown (8): { [19 bytes data] * (304) (IN), TLS handshake, Certificate (11): { [3821 bytes data] * (304) (IN), TLS handshake, CERT verify (15): { [264 bytes data] * (304) (IN), TLS handshake, Finished (20): { [36 bytes data] * (304) (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF * ALPN: server accepted h2 * Server certificate: * subject: CN=huggingface.co * start date: Apr 13 00:00:00 2025 GMT * expire date: May 12 23:59:59 2026 GMT * subjectAltName: host "huggingface.co" matched cert's "huggingface.co" * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02 * SSL certificate verify ok. 
* using HTTP/2 * [HTTP/2] [1] OPENED stream for https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet * [HTTP/2] [1] [:method: GET] * [HTTP/2] [1] [:scheme: https] * [HTTP/2] [1] [:authority: huggingface.co] * [HTTP/2] [1] [:path: /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet] * [HTTP/2] [1] [user-agent: curl/8.7.1] * [HTTP/2] [1] [accept: */*] * [HTTP/2] [1] [range: bytes=217875070-218006142] > GET /api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet HTTP/2 > Host: huggingface.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 302 < content-type: text/plain; charset=utf-8 < content-length: 177 < location: https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet < date: Wed, 16 Jul 2025 14:58:44 GMT < x-powered-by: huggingface-moon < cross-origin-opener-policy: same-origin < referrer-policy: strict-origin-when-cross-origin < x-request-id: Root=1-6877be24-476860f03849cb1a1570c9d8 < access-control-allow-origin: https://huggingface.co < access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash < set-cookie: token=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=None < set-cookie: token=; Domain=huggingface.co; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=Lax < x-cache: Miss from cloudfront < via: 1.1 dd5af138aa8a11d8a70d5ef690ad1a2a.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P1 < x-amz-cf-id: xuSi0X5RpH1OZqQOM8gGQLQLU8eOM6Gbkk-bgIX_qBnTTaa1VNkExA== < * Ignoring the response-body 100 177 100 177 0 0 2021 0 --:--:-- --:--:-- --:--:-- 2034 * Connection #0 to host huggingface.co left intact * Issue another request to this URL: 'https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet' * Found bundle for host: 0x600002d54570 [can multiplex] * Re-using existing connection with host huggingface.co * [HTTP/2] [3] OPENED stream for https://huggingface.co/datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet * [HTTP/2] [3] [:method: GET] * [HTTP/2] [3] [:scheme: https] * [HTTP/2] [3] [:authority: huggingface.co] * [HTTP/2] [3] [:path: /datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet] * [HTTP/2] [3] [user-agent: curl/8.7.1] * [HTTP/2] [3] [accept: */*] * [HTTP/2] [3] [range: bytes=217875070-218006142] > GET /datasets/HuggingFaceTB/smoltalk2/resolve/refs%2Fconvert%2Fparquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0000.parquet HTTP/2 > Host: huggingface.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 302 < content-type: text/plain; charset=utf-8 < content-length: 1317 < location: 
https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC < date: Wed, 16 Jul 2025 14:58:44 GMT < x-powered-by: huggingface-moon < cross-origin-opener-policy: same-origin < referrer-policy: strict-origin-when-cross-origin < x-request-id: Root=1-6877be24-4f628b292dc8a7a5339c41d3 < access-control-allow-origin: https://huggingface.co < vary: Origin, Accept < access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash < set-cookie: token=; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=None < set-cookie: token=; Domain=huggingface.co; Path=/; Expires=Thu, 01 Jan 1970 00:00:00 GMT; Secure; SameSite=Lax < x-repo-commit: 712df366ffbc959d9f4279bf2da579230b7ca5d8 < accept-ranges: bytes < x-linked-size: 218006142 < x-linked-etag: "01736bf26d0046ddec4ab8900fba3f0dc6500b038314b44d0edb73a7c88dec07" < x-xet-hash: cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9 < link: <https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/xet-read-token/712df366ffbc959d9f4279bf2da579230b7ca5d8>; rel="xet-auth", <https://cas-server.xethub.hf.co/reconstruction/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9>; rel="xet-reconstruction-info" < x-cache: Miss from cloudfront < via: 1.1 dd5af138aa8a11d8a70d5ef690ad1a2a.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P1 < x-amz-cf-id: 0qXw2sJGrWCLVt7c-Vtn09uE3nu6CrJw9RmAKvNr_flG75muclvlIg== < * Ignoring the response-body 100 1317 100 1317 0 0 9268 0 --:--:-- --:--:-- --:--:-- 9268 * Connection #0 to host huggingface.co left intact * Issue another request to this URL: 
'https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC' * Host cas-bridge.xethub.hf.co:443 was resolved. * IPv6: (none) * IPv4: 18.160.181.55, 18.160.181.54, 18.160.181.52, 18.160.181.88 * Trying 18.160.181.55:443... * Connected to cas-bridge.xethub.hf.co (18.160.181.55) port 443 * ALPN: curl offers h2,http/1.1 * (304) (OUT), TLS handshake, Client hello (1): } [328 bytes data] * (304) (IN), TLS handshake, Server hello (2): { [122 bytes data] * (304) (IN), TLS handshake, Unknown (8): { [19 bytes data] * (304) (IN), TLS handshake, Certificate (11): { [3818 bytes data] * (304) (IN), TLS handshake, CERT verify (15): { [264 bytes data] * (304) (IN), TLS handshake, Finished (20): { [36 bytes data] * (304) (OUT), TLS handshake, Finished (20): } [36 bytes data] * SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256 / [blank] / UNDEF * ALPN: server accepted h2 * Server certificate: * subject: CN=cas-bridge.xethub.hf.co * start date: Jun 4 00:00:00 2025 GMT * expire date: Jul 3 23:59:59 2026 GMT * subjectAltName: host "cas-bridge.xethub.hf.co" matched cert's "cas-bridge.xethub.hf.co" * issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M04 * SSL certificate verify ok. 
* using HTTP/2 * [HTTP/2] [1] OPENED stream for https://cas-bridge.xethub.hf.co/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC * [HTTP/2] [1] [:method: GET] * [HTTP/2] [1] [:scheme: https] * [HTTP/2] [1] [:authority: cas-bridge.xethub.hf.co] * [HTTP/2] [1] [:path: /xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC] * [HTTP/2] [1] [user-agent: curl/8.7.1] * [HTTP/2] [1] [accept: */*] * [HTTP/2] [1] [range: bytes=217875070-218006142] > GET 
/xet-bridge-us/686fc33898943c873b45c9a0/cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20250716%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20250716T145416Z&X-Amz-Expires=3600&X-Amz-Signature=21a15b50740d73fd8ce82d5105733ca067d2e612ada22570e09e93ebcc7f8842&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%270000.parquet%3B+filename%3D%220000.parquet%22%3B&x-id=GetObject&Expires=1752681256&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjY4MTI1Nn19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82ODZmYzMzODk4OTQzYzg3M2I0NWM5YTAvY2Y4YTNhNTY2NWNmOGIyZmY2NjdmYjUyMzZhMWU1Y2IxM2M3NTgyOTU1Zjk1MzNjODhlMTM4Nzk5N2VmM2FmOSoifV19&Signature=Tl3xorJ-7yaWvG6Y1AhhRlV2Wko9QpoK1tdPOfNZaRbHo%7EdaAkJRJfcLAYD5YzozfHWBZMLlJsaMPJ1MAne21nr5%7E737sE6yLfBwHdP3ZFZhgrLsN%7EvkIWK2GYX543qTg-pVsf3it92w1oWyoyYNQ9srxLfEIuG2AKV2Nu3Ejl7S%7EaAq4Gv4jNemvRTLBFGgYPdUeuavudl4OD4RGkSGTnpzh-P-OBk5WvgpdZZnbb1cRAP73tFHsPDX4%7ETfQIor109G%7E0TB3Jq0wopO9WV0sMQyQs9peZc6bxONiTxb9aHM4yNvWNbVGtlPuC6YS4c9T1e9%7EehdgU4sDOI%7EhpaCvg__&Key-Pair-Id=K2L8F4GPSG1IFC HTTP/2 > Host: cas-bridge.xethub.hf.co > User-Agent: curl/8.7.1 > Accept: */* > Range: bytes=217875070-218006142 > * Request completely sent off < HTTP/2 206 < content-length: 131072 < date: Mon, 14 Jul 2025 08:40:28 GMT < x-request-id: 01K041FDPVA03RR2PRXDZSN30G < content-disposition: inline; filename*=UTF-8''0000.parquet; filename="0000.parquet"; < cache-control: public, max-age=31536000 < etag: "cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9" < access-control-allow-origin: * < access-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag < access-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache < x-cache: Hit from cloudfront < via: 1.1 1c857e24a4dc84d2d9c78d5b3463bed6.cloudfront.net (CloudFront) < x-amz-cf-pop: MSP50-P2 < x-amz-cf-id: 3SxFmQa5wLeeXbNiwaAo0_RwoR_n7-SivjsLjDLG-Pwn5UhG2oiEQA== < age: 195496 < content-security-policy: default-src 'none'; sandbox < content-range: bytes 217875070-218006141/218006142 < { [8192 bytes data] 100 128k 100 128k 0 0 769k 0 --:--:-- --:--:-- --:--:-- 769k * Connection #1 to host cas-bridge.xethub.hf.co left intact </details> ### Expected behavior always get back a `206` ### Environment info n/a
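A client-side workaround sketch while the behavior is inconsistent (an assumption that the 416 comes from CloudFront applying the Range header to the small cached redirect body, per the `content-range: bytes */177` above): resolve the redirect chain without a Range header first, then range-request the final URL.

```python
import requests

api_url = (
    "https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/"
    "Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet"
)

# Follow redirects with no Range header so the 177-byte redirect bodies
# are never range-checked, then apply the Range to the final object URL
final_url = requests.head(api_url, allow_redirects=True).url
resp = requests.get(final_url, headers={"Range": "bytes=217875070-218006142"})
print(resp.status_code, len(resp.content))  # expect: 206 131072
```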
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7685/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7685/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7684/comments
https://api.github.com/repos/huggingface/datasets/issues/7684/events
https://github.com/huggingface/datasets/pull/7684
3,231,680,474
PR_kwDODunzps6e9SjQ
7,684
fix audio cast storage from array + sampling_rate
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-15 10:13:42+00:00
2025-07-15 10:24:08+00:00
2025-07-15 10:24:07+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7684.diff", "html_url": "https://github.com/huggingface/datasets/pull/7684", "merged_at": "2025-07-15T10:24:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/7684.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7684" }
fix https://github.com/huggingface/datasets/issues/7682
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7684/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7684/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7683/comments
https://api.github.com/repos/huggingface/datasets/issues/7683/events
https://github.com/huggingface/datasets/pull/7683
3,231,553,161
PR_kwDODunzps6e82iW
7,683
Convert to string when needed + faster .zstd
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-15 09:37:44+00:00
2025-07-15 10:13:58+00:00
2025-07-15 10:13:56+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7683.diff", "html_url": "https://github.com/huggingface/datasets/pull/7683", "merged_at": "2025-07-15T10:13:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/7683.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7683" }
for https://huggingface.co/datasets/allenai/olmo-mix-1124
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7683/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7683/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7682/comments
https://api.github.com/repos/huggingface/datasets/issues/7682/events
https://github.com/huggingface/datasets/issues/7682
3,229,687,253
I_kwDODunzps7AgR3V
7,682
Fail to cast Audio feature for numpy arrays in datasets 4.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/163345686?v=4", "events_url": "https://api.github.com/users/luatil-cloud/events{/privacy}", "followers_url": "https://api.github.com/users/luatil-cloud/followers", "following_url": "https://api.github.com/users/luatil-cloud/following{/other_user}", "gists_url": "https://api.github.com/users/luatil-cloud/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/luatil-cloud", "id": 163345686, "login": "luatil-cloud", "node_id": "U_kgDOCbx1Fg", "organizations_url": "https://api.github.com/users/luatil-cloud/orgs", "received_events_url": "https://api.github.com/users/luatil-cloud/received_events", "repos_url": "https://api.github.com/users/luatil-cloud/repos", "site_admin": false, "starred_url": "https://api.github.com/users/luatil-cloud/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luatil-cloud/subscriptions", "type": "User", "url": "https://api.github.com/users/luatil-cloud", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-07-14 18:41:02+00:00
2025-07-15 12:10:39+00:00
2025-07-15 10:24:08+00:00
NONE
null
null
null
null
### Describe the bug Casting features with Audio for numpy arrays - done here with `ds.map(gen_sine, features=features)` fails in version 4.0.0 but not in version 3.6.0 ### Steps to reproduce the bug The following `uv script` should be able to reproduce the bug in version 4.0.0 and pass in version 3.6.0 on a macOS Sequoia 15.5 ```python # /// script # requires-python = ">=3.13" # dependencies = [ # "datasets[audio]==4.0.0", # "librosa>=0.11.0", # ] # /// # NAME # create_audio_dataset.py - create an audio dataset of sine waves # # SYNOPSIS # uv run create_audio_dataset.py # # DESCRIPTION # Create an audio dataset using the Hugging Face [datasets] library. # Illustrates how to create synthetic audio datasets using the [map] # datasets function. # # The strategy is to first create a dataset with the input to the # generation function, then execute the map function that generates # the result, and finally cast the final features. # # BUG # Casting features with Audio for numpy arrays - # done here with `ds.map(gen_sine, features=features)` fails # in version 4.0.0 but not in version 3.6.0 # # This happens both in cases where --extra audio is provided and where is not. # When audio is not provided i've installed the latest compatible version # of soundfile. # # The error when soundfile is installed but the audio --extra is not # indicates that the array values do not have the `.T` property, # whilst also indicating that the value is a list instead of a numpy array. # # Last lines of error report when for datasets + soundfile case # ... # # File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 239, in cast_storage # storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()]) # ~~~~~~~~~~~~~~~~~~~~~~^^^ # File "/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py", line 122, in encode_example # sf.write(buffer, value["array"].T, value["sampling_rate"], format="wav") # ^^^^^^^^^^^^^^^^ # AttributeError: 'list' object has no attribute 'T' # ... # # For the case of datasets[audio] without explicit adding soundfile I get an FFmpeg # error. # # Last lines of error report: # # ... # RuntimeError: Could not load libtorchcodec. Likely causes: # 1. FFmpeg is not properly installed in your environment. We support # versions 4, 5, 6 and 7. # 2. The PyTorch version (2.7.1) is not compatible with # this version of TorchCodec. Refer to the version compatibility # table: # https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec. # 3. Another runtime dependency; see exceptions below. 
# The following exceptions were raised as we tried to load libtorchcodec: # # [start of libtorchcodec loading traceback] # FFmpeg version 7: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib, 0x0006): Library not loaded: @rpath/libavutil.59.dylib # Referenced from: <6DB21246-F28A-31A6-910A-D8F3355D1064> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib # Reason: no LC_RPATH's found # FFmpeg version 6: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib, 0x0006): Library not loaded: @rpath/libavutil.58.dylib # Referenced from: <BD3B44FC-E14B-3ABF-800F-BB54B6CCA3B1> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib # Reason: no LC_RPATH's found # FFmpeg version 5: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib, 0x0006): Library not loaded: @rpath/libavutil.57.dylib # Referenced from: <F06EBF8A-238C-3A96-BFBB-B34E0BBDABF0> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib # Reason: no LC_RPATH's found # FFmpeg version 4: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib, 0x0006): Library not loaded: @rpath/libavutil.56.dylib # Referenced from: <6E59F017-C703-3AF6-A271-6277DD5F8170> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib # Reason: no LC_RPATH's found # ... # # This is strange because the the same error does not happen when using version 3.6.0 with datasets[audio]. # # The same error appears in python3.12 ## Imports import numpy as np from datasets import Dataset, Features, Audio, Value ## Parameters NUM_WAVES = 128 SAMPLE_RATE = 16_000 DURATION = 1.0 ## Input dataset arguments freqs = np.linspace(100, 2000, NUM_WAVES).tolist() ds = Dataset.from_dict({"frequency": freqs}) ## Features for the final dataset features = Features( {"frequency": Value("float32"), "audio": Audio(sampling_rate=SAMPLE_RATE)} ) ## Generate audio sine waves and cast features def gen_sine(example): t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False) wav = np.sin(2 * np.pi * example["frequency"] * t) return { "frequency": example["frequency"], "audio": {"array": wav, "sampling_rate": SAMPLE_RATE}, } ds = ds.map(gen_sine, features=features) print(ds) print(ds.features) ``` ### Expected behavior I expect the result of version `4.0.0` to be the same of that in version `3.6.0`. Switching the value of the script above to `3.6.0` I get the following, expected, result: ``` $ uv run bug_report.py Map: 100%|███████████████████████████████████████████████████████| 128/128 [00:00<00:00, 204.58 examples/s] Dataset({ features: ['frequency', 'audio'], num_rows: 128 }) {'frequency': Value(dtype='float32', id=None), 'audio': Audio(sampling_rate=16000, mono=True, decode=True, id=None)} ``` ### Environment info - `datasets` version: 4.0.0 - Platform: macOS-15.5-arm64-arm-64bit-Mach-O - Python version: 3.13.1 - `huggingface_hub` version: 0.33.4 - PyArrow version: 20.0.0 - Pandas version: 2.3.1 - `fsspec` version: 2025.3.0
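Until the fix lands, one possible user-side workaround is to serialize the waveform to WAV bytes inside the map function, so `cast_storage` never receives a raw array. This is an untested sketch; it relies on `Audio` accepting its documented `{"bytes": ..., "path": ...}` storage form:

```python
import io

import numpy as np
import soundfile as sf

SAMPLE_RATE = 16_000
DURATION = 1.0

def gen_sine_bytes(example):
    t = np.linspace(0, DURATION, int(SAMPLE_RATE * DURATION), endpoint=False)
    wav = np.sin(2 * np.pi * example["frequency"] * t)
    buffer = io.BytesIO()
    sf.write(buffer, wav, SAMPLE_RATE, format="wav")  # encode to WAV ourselves
    return {"frequency": example["frequency"],
            "audio": {"bytes": buffer.getvalue(), "path": None}}
```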
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7682/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7682/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7681/comments
https://api.github.com/repos/huggingface/datasets/issues/7681/events
https://github.com/huggingface/datasets/issues/7681
3,227,112,736
I_kwDODunzps7AWdUg
7,681
Probabilistic High Memory Usage and Freeze on Python 3.10
{ "avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4", "events_url": "https://api.github.com/users/ryan-minato/events{/privacy}", "followers_url": "https://api.github.com/users/ryan-minato/followers", "following_url": "https://api.github.com/users/ryan-minato/following{/other_user}", "gists_url": "https://api.github.com/users/ryan-minato/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ryan-minato", "id": 82735346, "login": "ryan-minato", "node_id": "MDQ6VXNlcjgyNzM1MzQ2", "organizations_url": "https://api.github.com/users/ryan-minato/orgs", "received_events_url": "https://api.github.com/users/ryan-minato/received_events", "repos_url": "https://api.github.com/users/ryan-minato/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ryan-minato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryan-minato/subscriptions", "type": "User", "url": "https://api.github.com/users/ryan-minato", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-07-14 01:57:16+00:00
2025-07-14 01:57:16+00:00
NaT
NONE
null
null
null
null
### Describe the bug
A probabilistic issue occurs when processing datasets containing PIL.Image columns with the huggingface/datasets library on Python 3.10. The process occasionally experiences a sudden and significant memory spike, reaching 100% utilization and leading to a complete freeze. During this freeze, the process becomes unresponsive, cannot be forcefully terminated, and does not throw any exceptions.

I have attempted to mitigate this issue by setting `datasets.config.IN_MEMORY_MAX_SIZE`, but it had no effect. In fact, based on the documentation of `load_dataset`, I suspect that setting `IN_MEMORY_MAX_SIZE` might even be counterproductive.

This bug is not consistently reproducible, but its occurrence rate significantly decreases, or it disappears entirely, when upgrading Python to version 3.11 or higher. Therefore, this issue also serves to share a potential solution with others who might encounter similar problems.

### Steps to reproduce the bug
Due to the probabilistic nature of this bug, consistent reproduction cannot be guaranteed for every run. However, in my environment, processing large datasets like timm/imagenet-1k-wds (whether reading, casting, or mapping) almost certainly triggers the issue at some point. The probability of the issue occurring drastically increases when num_proc is set to a value greater than 1.

When the issue occurs, my system logs repeatedly show the following warnings:
```
WARN: very high memory utilization: 57.74GiB / 57.74GiB (100 %)
WARN: container is unhealthy: triggered memory limits (OOM)
WARN: container is unhealthy: triggered memory limits (OOM)
WARN: container is unhealthy: triggered memory limits (OOM)
```

### Expected behavior
The dataset should be read and processed normally without memory exhaustion or freezing. If an unrecoverable error occurs, an appropriate exception should be raised.

I have found that upgrading Python to version 3.11 or above completely resolves this issue. On Python 3.11, when memory usage approaches 100%, it suddenly drops before slowly increasing again. I suspect this is an expected memory-management action, possibly writing to the disk cache, which prevents the complete freeze observed on Python 3.10.

### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.15.0-71-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.33.4
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
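For reference, a minimal sketch of the access pattern described above. The column name `"jpg"` is an assumption about the webdataset layout, not something confirmed in the report, and the freeze is probabilistic rather than guaranteed:

```python
from datasets import load_dataset

ds = load_dataset("timm/imagenet-1k-wds", split="train")

def touch_image(example):
    example["jpg"] = example["jpg"]  # forces decoding of the PIL.Image column
    return example

# num_proc > 1 reportedly makes the OOM/freeze far more likely on Python 3.10
ds = ds.map(touch_image, num_proc=4)
```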
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7681/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7681/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7680/comments
https://api.github.com/repos/huggingface/datasets/issues/7680/events
https://github.com/huggingface/datasets/issues/7680
3,224,824,151
I_kwDODunzps7ANulX
7,680
Question about iterable dataset and streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/73541181?v=4", "events_url": "https://api.github.com/users/Tavish9/events{/privacy}", "followers_url": "https://api.github.com/users/Tavish9/followers", "following_url": "https://api.github.com/users/Tavish9/following{/other_user}", "gists_url": "https://api.github.com/users/Tavish9/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Tavish9", "id": 73541181, "login": "Tavish9", "node_id": "MDQ6VXNlcjczNTQxMTgx", "organizations_url": "https://api.github.com/users/Tavish9/orgs", "received_events_url": "https://api.github.com/users/Tavish9/received_events", "repos_url": "https://api.github.com/users/Tavish9/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Tavish9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tavish9/subscriptions", "type": "User", "url": "https://api.github.com/users/Tavish9", "user_view_type": "public" }
[]
open
false
null
[]
null
8
2025-07-12 04:48:30+00:00
2025-08-01 13:01:48+00:00
NaT
NONE
null
null
null
null
In the docs, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78

I am confused:
1. If we have already loaded the dataset, why call `to_iterable_dataset`? Does it iterate over the dataset faster than a map-style dataset?
2. `load_dataset(streaming=True)` is useful for huge datasets, but it is slow. How can I make it comparable to `to_iterable_dataset` without loading the whole dataset into RAM?
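For context, the difference comes down to where the bytes live during iteration: a downloaded `Dataset` is memory-mapped from disk, so `to_iterable_dataset` iterates over local Arrow files without holding the whole dataset in RAM. A short sketch (the dataset id is a placeholder, not from the question):

```python
from datasets import load_dataset

# Streaming: nothing is downloaded up front; each example is fetched and
# decoded over the network, so iteration speed is network-bound.
streamed = load_dataset("user/some-dataset", split="train", streaming=True)

# to_iterable_dataset: the data already sits on disk as memory-mapped Arrow
# files, so iteration is local I/O (fast) without loading everything into RAM.
# num_shards lets multiple DataLoader workers each take a slice.
local = load_dataset("user/some-dataset", split="train")
iterable = local.to_iterable_dataset(num_shards=64)
```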
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7680/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7680/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7679/comments
https://api.github.com/repos/huggingface/datasets/issues/7679/events
https://github.com/huggingface/datasets/issues/7679
3,220,787,371
I_kwDODunzps6_-VCr
7,679
metric glue breaks with 4.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-07-10 21:39:50+00:00
2025-07-11 17:42:01+00:00
2025-07-11 17:42:01+00:00
CONTRIBUTOR
null
null
null
null
### Describe the bug
This worked fine with 3.6.0; with 4.0.0, `eval_metric = metric.compute()` in HF Accelerate breaks. The code that fails is: https://huggingface.co/spaces/evaluate-metric/glue/blob/v0.4.0/glue.py#L84

```
def simple_accuracy(preds, labels):
    print(preds, labels)
    print(f"{preds==labels}")
    return float((preds == labels).mean())
```

data:
```
Column([1, 0, 0, 1, 1]) Column([1, 0, 0, 1, 0])
False
```
```
[rank0]:     return float((preds == labels).mean())
[rank0]:                  ^^^^^^^^^^^^^^^^^^^^^^
[rank0]: AttributeError: 'bool' object has no attribute 'mean'
```
Some behavior has changed in this new major release of `datasets` and requires updating HF Accelerate and perhaps the glue metric code, all of which belong to HF.

### Environment info
datasets=4.0.0
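The `Column([...])` values in the debug output suggest that with `datasets` 4.0.0 the metric receives lazy `Column` objects where plain lists used to arrive, so `preds == labels` compares the two container objects and yields a single `bool`. A hedged sketch of a metric-side fix (not the official patch):

```python
import numpy as np

def simple_accuracy(preds, labels):
    # Materialize the lazy Column objects first so that == broadcasts
    # elementwise instead of collapsing to one bool.
    preds = np.asarray(list(preds))
    labels = np.asarray(list(labels))
    return float((preds == labels).mean())
```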
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7679/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7679/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7678/comments
https://api.github.com/repos/huggingface/datasets/issues/7678/events
https://github.com/huggingface/datasets/issues/7678
3,218,625,544
I_kwDODunzps6_2FQI
7,678
To support decoding audio data, please install 'torchcodec'.
{ "avatar_url": "https://avatars.githubusercontent.com/u/48163702?v=4", "events_url": "https://api.github.com/users/alpcansoydas/events{/privacy}", "followers_url": "https://api.github.com/users/alpcansoydas/followers", "following_url": "https://api.github.com/users/alpcansoydas/following{/other_user}", "gists_url": "https://api.github.com/users/alpcansoydas/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alpcansoydas", "id": 48163702, "login": "alpcansoydas", "node_id": "MDQ6VXNlcjQ4MTYzNzAy", "organizations_url": "https://api.github.com/users/alpcansoydas/orgs", "received_events_url": "https://api.github.com/users/alpcansoydas/received_events", "repos_url": "https://api.github.com/users/alpcansoydas/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alpcansoydas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alpcansoydas/subscriptions", "type": "User", "url": "https://api.github.com/users/alpcansoydas", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-07-10 09:43:13+00:00
2025-07-22 03:46:52+00:00
2025-07-11 05:05:42+00:00
NONE
null
null
null
null
In the latest version, datasets==4.0.0, I cannot print the audio data in a Colab notebook. It works with version 3.6.0.

```
!pip install -q -U datasets huggingface_hub fsspec

from datasets import load_dataset

downloaded_dataset = load_dataset("ymoslem/MediaSpeech", "tr", split="train")
print(downloaded_dataset["audio"][0])
```

```
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
/tmp/ipython-input-4-90623240.py in <cell line: 0>()
----> 1 downloaded_dataset["audio"][0]

10 frames
/usr/local/lib/python3.11/dist-packages/datasets/features/audio.py in decode_example(self, value, token_per_repo_id)
    170             from ._torchcodec import AudioDecoder
    171         else:
--> 172             raise ImportError("To support decoding audio data, please install 'torchcodec'.")
    173
    174         if not self.decode:

ImportError: To support decoding audio data, please install 'torchcodec'.
```

### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0
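Two workarounds, sketched below: installing the decoding backend that `datasets` 4.0.0 now delegates to, or opting out of decoding and handling the raw bytes yourself.

```python
# Option 1: install the new decoding backend
#   pip install torchcodec

# Option 2: skip decoding and receive the raw storage instead
from datasets import Audio, load_dataset

ds = load_dataset("ymoslem/MediaSpeech", "tr", split="train")
ds = ds.cast_column("audio", Audio(decode=False))
print(ds[0]["audio"])  # {"bytes": ..., "path": ...} rather than a decoded array
```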
{ "avatar_url": "https://avatars.githubusercontent.com/u/48163702?v=4", "events_url": "https://api.github.com/users/alpcansoydas/events{/privacy}", "followers_url": "https://api.github.com/users/alpcansoydas/followers", "following_url": "https://api.github.com/users/alpcansoydas/following{/other_user}", "gists_url": "https://api.github.com/users/alpcansoydas/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alpcansoydas", "id": 48163702, "login": "alpcansoydas", "node_id": "MDQ6VXNlcjQ4MTYzNzAy", "organizations_url": "https://api.github.com/users/alpcansoydas/orgs", "received_events_url": "https://api.github.com/users/alpcansoydas/received_events", "repos_url": "https://api.github.com/users/alpcansoydas/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alpcansoydas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alpcansoydas/subscriptions", "type": "User", "url": "https://api.github.com/users/alpcansoydas", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7678/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7678/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7677/comments
https://api.github.com/repos/huggingface/datasets/issues/7677/events
https://github.com/huggingface/datasets/issues/7677
3,218,044,656
I_kwDODunzps6_z3bw
7,677
Toxicity fails with datasets 4.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/82044803?v=4", "events_url": "https://api.github.com/users/serena-ruan/events{/privacy}", "followers_url": "https://api.github.com/users/serena-ruan/followers", "following_url": "https://api.github.com/users/serena-ruan/following{/other_user}", "gists_url": "https://api.github.com/users/serena-ruan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/serena-ruan", "id": 82044803, "login": "serena-ruan", "node_id": "MDQ6VXNlcjgyMDQ0ODAz", "organizations_url": "https://api.github.com/users/serena-ruan/orgs", "received_events_url": "https://api.github.com/users/serena-ruan/received_events", "repos_url": "https://api.github.com/users/serena-ruan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/serena-ruan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/serena-ruan/subscriptions", "type": "User", "url": "https://api.github.com/users/serena-ruan", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-07-10 06:15:22+00:00
2025-07-11 04:40:59+00:00
2025-07-11 04:40:59+00:00
NONE
null
null
null
null
### Describe the bug With the latest 4.0.0 release, huggingface toxicity evaluation module fails with error: `ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).` ### Steps to reproduce the bug Repro: ``` >>> toxicity.compute(predictions=["This is a response"]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/evaluate/module.py", line 467, in compute output = self._compute(**inputs, **compute_kwargs) File "/Users/serena.ruan/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-measurement--toxicity/2390290fa0bf6d78480143547c6b08f3d4f8805b249df8c7a8e80d0ce8e3778b/toxicity.py", line 135, in _compute scores = toxicity(predictions, self.toxic_classifier, toxic_label) File "/Users/serena.ruan/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-measurement--toxicity/2390290fa0bf6d78480143547c6b08f3d4f8805b249df8c7a8e80d0ce8e3778b/toxicity.py", line 103, in toxicity for pred_toxic in toxic_classifier(preds): File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/text_classification.py", line 159, in __call__ result = super().__call__(*inputs, **kwargs) File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1431, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1437, in run_single model_inputs = self.preprocess(inputs, **preprocess_params) File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/pipelines/text_classification.py", line 183, in preprocess return self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs) File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2867, in __call__ encodings = self._call_one(text=text, text_pair=text_pair, **all_kwargs) File "/Users/serena.ruan/miniconda3/envs/mlflow-310/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2927, in _call_one raise ValueError( ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples). ``` ### Expected behavior This works before 4.0.0 release ### Environment info - `datasets` version: 4.0.0 - Platform: macOS-15.5-arm64-arm-64bit - Python version: 3.10.16 - `huggingface_hub` version: 0.33.0 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
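Given the `text input must be of type str` error, a plausible reading is that `datasets` 4.0.0 hands the underlying `transformers` pipeline a lazy `Column` instead of a `List[str]`. A one-line coercion sketch (this is an assumption about the root cause, not the confirmed fix; the names come from the traceback above):

```python
preds = [str(p) for p in predictions]  # Column -> List[str]
results = toxic_classifier(preds)      # transformers pipelines accept List[str]
```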
{ "avatar_url": "https://avatars.githubusercontent.com/u/82044803?v=4", "events_url": "https://api.github.com/users/serena-ruan/events{/privacy}", "followers_url": "https://api.github.com/users/serena-ruan/followers", "following_url": "https://api.github.com/users/serena-ruan/following{/other_user}", "gists_url": "https://api.github.com/users/serena-ruan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/serena-ruan", "id": 82044803, "login": "serena-ruan", "node_id": "MDQ6VXNlcjgyMDQ0ODAz", "organizations_url": "https://api.github.com/users/serena-ruan/orgs", "received_events_url": "https://api.github.com/users/serena-ruan/received_events", "repos_url": "https://api.github.com/users/serena-ruan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/serena-ruan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/serena-ruan/subscriptions", "type": "User", "url": "https://api.github.com/users/serena-ruan", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7677/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7677/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7676/comments
https://api.github.com/repos/huggingface/datasets/issues/7676/events
https://github.com/huggingface/datasets/issues/7676
3,216,857,559
I_kwDODunzps6_vVnX
7,676
Many things broken since the new 4.0.0 release
{ "avatar_url": "https://avatars.githubusercontent.com/u/37179323?v=4", "events_url": "https://api.github.com/users/mobicham/events{/privacy}", "followers_url": "https://api.github.com/users/mobicham/followers", "following_url": "https://api.github.com/users/mobicham/following{/other_user}", "gists_url": "https://api.github.com/users/mobicham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mobicham", "id": 37179323, "login": "mobicham", "node_id": "MDQ6VXNlcjM3MTc5MzIz", "organizations_url": "https://api.github.com/users/mobicham/orgs", "received_events_url": "https://api.github.com/users/mobicham/received_events", "repos_url": "https://api.github.com/users/mobicham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mobicham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mobicham/subscriptions", "type": "User", "url": "https://api.github.com/users/mobicham", "user_view_type": "public" }
[]
open
false
null
[]
null
15
2025-07-09 18:59:50+00:00
2025-09-18 16:33:34+00:00
NaT
NONE
null
null
null
null
### Describe the bug
The new changes in 4.0.0 are breaking many datasets, including those from lm-evaluation-harness. I am trying to revert to older versions, like 3.6.0, to make the eval work, but I keep getting:

```python
File /venv/main/lib/python3.12/site-packages/datasets/features/features.py:1474, in generate_from_dict(obj)
   1471 class_type = _FEATURE_TYPES.get(_type, None) or globals().get(_type, None)
   1473 if class_type is None:
-> 1474     raise ValueError(f"Feature type '{_type}' not found. Available feature types: {list(_FEATURE_TYPES.keys())}")
   1476 if class_type == LargeList:
   1477     feature = obj.pop("feature")

ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']
```

### Steps to reproduce the bug
```python
import lm_eval

model_eval = lm_eval.models.huggingface.HFLM(pretrained=model, tokenizer=tokenizer)
lm_eval.evaluator.simple_evaluate(model_eval, tasks=["winogrande"], num_fewshot=5, batch_size=1)
```

### Expected behavior
Older `datasets` versions should work just fine, as before.

### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39
- Python version: 3.12.11
- `huggingface_hub` version: 0.33.1
- PyArrow version: 20.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0
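A likely reason the downgrade itself fails: 3.6.0 ends up reading cached metadata that 4.0.0 already wrote with the new `List` feature type. A hedged sketch of the usual remedies (the repo id is illustrative, not taken from the report):

```python
from datasets import load_dataset

# Force a fresh download so 3.6.0 never parses 4.0.0-era cached metadata.
ds = load_dataset("allenai/winogrande", "winogrande_xl",
                  download_mode="force_redownload")

# Or clear the on-disk cache wholesale before re-running the eval:
#   rm -rf ~/.cache/huggingface/datasets
```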
null
{ "+1": 21, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 21, "url": "https://api.github.com/repos/huggingface/datasets/issues/7676/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7676/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7675/comments
https://api.github.com/repos/huggingface/datasets/issues/7675/events
https://github.com/huggingface/datasets/issues/7675
3,216,699,094
I_kwDODunzps6_uu7W
7,675
common_voice_11_0.py failure in dataset library
{ "avatar_url": "https://avatars.githubusercontent.com/u/98793855?v=4", "events_url": "https://api.github.com/users/egegurel/events{/privacy}", "followers_url": "https://api.github.com/users/egegurel/followers", "following_url": "https://api.github.com/users/egegurel/following{/other_user}", "gists_url": "https://api.github.com/users/egegurel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/egegurel", "id": 98793855, "login": "egegurel", "node_id": "U_kgDOBeN5fw", "organizations_url": "https://api.github.com/users/egegurel/orgs", "received_events_url": "https://api.github.com/users/egegurel/received_events", "repos_url": "https://api.github.com/users/egegurel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/egegurel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/egegurel/subscriptions", "type": "User", "url": "https://api.github.com/users/egegurel", "user_view_type": "public" }
[]
open
false
null
[]
null
5
2025-07-09 17:47:59+00:00
2025-07-22 09:35:42+00:00
NaT
NONE
null
null
null
null
### Describe the bug I tried to download dataset but have got this error: from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[8], line 4 1 from datasets import load_dataset ----> 4 load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True) File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1392, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs) 1387 verification_mode = VerificationMode( 1388 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 1389 ) 1391 # Create a dataset builder -> 1392 builder_instance = load_dataset_builder( 1393 path=path, 1394 name=name, 1395 data_dir=data_dir, 1396 data_files=data_files, 1397 cache_dir=cache_dir, 1398 features=features, 1399 download_config=download_config, 1400 download_mode=download_mode, 1401 revision=revision, 1402 token=token, 1403 storage_options=storage_options, 1404 **config_kwargs, 1405 ) 1407 # Return iterable dataset in case of streaming 1408 if streaming: File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1132, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, **config_kwargs) 1130 if features is not None: 1131 features = _fix_for_backward_compatible_features(features) -> 1132 dataset_module = dataset_module_factory( 1133 path, 1134 revision=revision, 1135 download_config=download_config, 1136 download_mode=download_mode, 1137 data_dir=data_dir, 1138 data_files=data_files, 1139 cache_dir=cache_dir, 1140 ) 1141 # Get dataset builder class 1142 builder_kwargs = dataset_module.builder_kwargs File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:1031, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs) 1026 if isinstance(e1, FileNotFoundError): 1027 raise FileNotFoundError( 1028 f"Couldn't find any data file at {relative_to_absolute_path(path)}. " 1029 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" 1030 ) from None -> 1031 raise e1 from None 1032 else: 1033 raise FileNotFoundError(f"Couldn't find any data file at {relative_to_absolute_path(path)}.") File c:\Users\ege_g\AppData\Local\Programs\Python\Python312\Lib\site-packages\datasets\load.py:989, in dataset_module_factory(path, revision, download_config, download_mode, data_dir, data_files, cache_dir, **download_kwargs) 981 try: 982 api.hf_hub_download( 983 repo_id=path, 984 filename=filename, (...) 
987 proxies=download_config.proxies, 988 ) --> 989 raise RuntimeError(f"Dataset scripts are no longer supported, but found {filename}") 990 except EntryNotFoundError: 991 # Use the infos from the parquet export except in some cases: 992 if data_dir or data_files or (revision and revision != "main"): RuntimeError: Dataset scripts are no longer supported, but found common_voice_11_0.py ### Steps to reproduce the bug from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True) ### Expected behavior It's supposed to download this dataset. ### Environment info Python 3.12, Windows 11
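Since 4.0.0 removed script-based loaders entirely, the two usual escapes are pinning the library or pointing at a script-free Parquet copy of the corpus. A sketch — the mirror repo id below is hypothetical:

```python
# Option 1: pin the last release that still runs dataset scripts
#   pip install "datasets<4.0.0"

# Option 2: load a Parquet conversion that needs no loading script
from datasets import load_dataset

ds = load_dataset("some-org/common_voice_11_0_parquet",  # hypothetical repo id
                  "en", split="test", streaming=True)
```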
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7675/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7675/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7674/comments
https://api.github.com/repos/huggingface/datasets/issues/7674/events
https://github.com/huggingface/datasets/pull/7674
3,216,251,069
PR_kwDODunzps6eJGo5
7,674
set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-09 15:01:25+00:00
2025-07-09 15:04:01+00:00
2025-07-09 15:01:33+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7674.diff", "html_url": "https://github.com/huggingface/datasets/pull/7674", "merged_at": "2025-07-09T15:01:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/7674.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7674" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7674/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7674/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7673/comments
https://api.github.com/repos/huggingface/datasets/issues/7673/events
https://github.com/huggingface/datasets/pull/7673
3,216,075,633
PR_kwDODunzps6eIgj-
7,673
Release: 4.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-09 14:03:16+00:00
2025-07-09 14:36:19+00:00
2025-07-09 14:36:18+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7673.diff", "html_url": "https://github.com/huggingface/datasets/pull/7673", "merged_at": "2025-07-09T14:36:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/7673.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7673" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7673/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7673/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7672/comments
https://api.github.com/repos/huggingface/datasets/issues/7672/events
https://github.com/huggingface/datasets/pull/7672
3,215,287,164
PR_kwDODunzps6eF1vj
7,672
Fix double sequence
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-09 09:53:39+00:00
2025-07-09 09:56:29+00:00
2025-07-09 09:56:28+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7672.diff", "html_url": "https://github.com/huggingface/datasets/pull/7672", "merged_at": "2025-07-09T09:56:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/7672.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7672" }
```python >>> Features({"a": Sequence(Sequence({"c": Value("int64")}))}) {'a': List({'c': List(Value('int64'))})} ``` instead of `{'a': {'c': List(List(Value('int64')))}}`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7672/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7672/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7671/comments
https://api.github.com/repos/huggingface/datasets/issues/7671/events
https://github.com/huggingface/datasets/issues/7671
3,213,223,886
I_kwDODunzps6_hefO
7,671
Mapping function not working if the first example is returned as None
{ "avatar_url": "https://avatars.githubusercontent.com/u/46325823?v=4", "events_url": "https://api.github.com/users/dnaihao/events{/privacy}", "followers_url": "https://api.github.com/users/dnaihao/followers", "following_url": "https://api.github.com/users/dnaihao/following{/other_user}", "gists_url": "https://api.github.com/users/dnaihao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaihao", "id": 46325823, "login": "dnaihao", "node_id": "MDQ6VXNlcjQ2MzI1ODIz", "organizations_url": "https://api.github.com/users/dnaihao/orgs", "received_events_url": "https://api.github.com/users/dnaihao/received_events", "repos_url": "https://api.github.com/users/dnaihao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaihao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaihao/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaihao", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-07-08 17:07:47+00:00
2025-07-09 12:30:32+00:00
2025-07-09 12:30:32+00:00
NONE
null
null
null
null
### Describe the bug
https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37

Here we can see the writer is initialized on `i == 0`. However, there can be cases where, in the user's mapping function, the first example is filtered out (length constraints, etc.). In this case, the writer is still `None` and the code reports `NoneType has no write function`. A simple fix is available: change line 3652 from `if i == 0:` to `if writer is None:`.

### Steps to reproduce the bug
Prepare a dataset and map it with a function like this:

```
import datasets

def make_map_fn(split, max_prompt_tokens=3):
    def process_fn(example, idx):
        question = example['question']
        reasoning_steps = example['reasoning_steps']
        label = example['label']
        answer_format = ""
        for i in range(len(reasoning_steps)):
            system_message = "Dummy"
            all_steps_formatted = []
            content = f"""Dummy"""
            prompt = [
                {"role": "system", "content": system_message},
                {"role": "user", "content": content},
            ]
            tokenized = tokenizer.apply_chat_template(prompt, return_tensors="pt", truncation=False)
            if tokenized.shape[1] > max_prompt_tokens:
                return None  # skip overly long examples
            data = {
                "dummy": "dummy"
            }
            return data
    return process_fn

... # load your dataset ...

train = train.map(function=make_map_fn('train'), with_indices=True)
```

### Expected behavior
The dataset mapping should work even when the first example is filtered out.

### Environment info
I am using `datasets==3.6.0`, but I have observed this issue in the GitHub repo too: https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37
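A simplified sketch of the change being proposed (helper names here are hypothetical; the real logic lives in `Dataset.map`'s write loop in `arrow_dataset.py`):

```python
writer = None
for i, example in enumerate(shard):
    processed = apply_function(example, i)  # the user's fn may return None
    if processed is None:
        continue  # dropped example — may well be the very first one
    if writer is None:  # proposed guard; the current code tests `i == 0`
        writer = open_arrow_writer()  # hypothetical helper
    writer.write(processed)
```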
{ "avatar_url": "https://avatars.githubusercontent.com/u/46325823?v=4", "events_url": "https://api.github.com/users/dnaihao/events{/privacy}", "followers_url": "https://api.github.com/users/dnaihao/followers", "following_url": "https://api.github.com/users/dnaihao/following{/other_user}", "gists_url": "https://api.github.com/users/dnaihao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaihao", "id": 46325823, "login": "dnaihao", "node_id": "MDQ6VXNlcjQ2MzI1ODIz", "organizations_url": "https://api.github.com/users/dnaihao/orgs", "received_events_url": "https://api.github.com/users/dnaihao/received_events", "repos_url": "https://api.github.com/users/dnaihao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaihao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaihao/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaihao", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7671/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7671/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7670
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7670/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7670/comments
https://api.github.com/repos/huggingface/datasets/issues/7670/events
https://github.com/huggingface/datasets/pull/7670
3,208,962,372
PR_kwDODunzps6dwgOc
7,670
Fix audio bytes
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-07 13:05:15+00:00
2025-07-07 13:07:47+00:00
2025-07-07 13:05:33+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7670.diff", "html_url": "https://github.com/huggingface/datasets/pull/7670", "merged_at": "2025-07-07T13:05:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/7670.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7670" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7670/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7670/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7669/comments
https://api.github.com/repos/huggingface/datasets/issues/7669/events
https://github.com/huggingface/datasets/issues/7669
3,203,541,091
I_kwDODunzps6-8ihj
7,669
How can I add my custom data to huggingface datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/219205504?v=4", "events_url": "https://api.github.com/users/xiagod/events{/privacy}", "followers_url": "https://api.github.com/users/xiagod/followers", "following_url": "https://api.github.com/users/xiagod/following{/other_user}", "gists_url": "https://api.github.com/users/xiagod/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xiagod", "id": 219205504, "login": "xiagod", "node_id": "U_kgDODRDPgA", "organizations_url": "https://api.github.com/users/xiagod/orgs", "received_events_url": "https://api.github.com/users/xiagod/received_events", "repos_url": "https://api.github.com/users/xiagod/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xiagod/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiagod/subscriptions", "type": "User", "url": "https://api.github.com/users/xiagod", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2025-07-04 19:19:54+00:00
2025-07-05 18:19:37+00:00
NaT
NONE
null
null
null
null
I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that.
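For reference, a minimal sketch of the usual workflow; the repo id below is a placeholder, and `push_to_hub` requires being logged in (e.g. via `huggingface-cli login`):

```python
from datasets import Dataset

# Build a dataset from in-memory data; load_dataset("csv"/"json", ...) works too
ds = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

# Upload it as a dataset repo on the Hub
ds.push_to_hub("your-username/your-dataset")
```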
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7669/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7669/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7668/comments
https://api.github.com/repos/huggingface/datasets/issues/7668/events
https://github.com/huggingface/datasets/issues/7668
3,199,039,322
I_kwDODunzps6-rXda
7,668
Broken EXIF crash the whole program
{ "avatar_url": "https://avatars.githubusercontent.com/u/30485844?v=4", "events_url": "https://api.github.com/users/Seas0/events{/privacy}", "followers_url": "https://api.github.com/users/Seas0/followers", "following_url": "https://api.github.com/users/Seas0/following{/other_user}", "gists_url": "https://api.github.com/users/Seas0/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Seas0", "id": 30485844, "login": "Seas0", "node_id": "MDQ6VXNlcjMwNDg1ODQ0", "organizations_url": "https://api.github.com/users/Seas0/orgs", "received_events_url": "https://api.github.com/users/Seas0/received_events", "repos_url": "https://api.github.com/users/Seas0/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Seas0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Seas0/subscriptions", "type": "User", "url": "https://api.github.com/users/Seas0", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2025-07-03 11:24:15+00:00
2025-07-03 12:27:16+00:00
NaT
NONE
null
null
null
null
### Describe the bug When parsing this image in the ImageNet1K dataset, `datasets` crashes the whole training process just because it is unable to parse an invalid EXIF tag. ![Image](https://github.com/user-attachments/assets/3c840203-ac8c-41a0-9cf7-45f64488037d) ### Steps to reproduce the bug Using the `datasets.Image.decode_example` method to decode the aforementioned image reproduces the bug. The decoding function throws an unhandled exception at the `image.getexif()` method call due to an invalid UTF-8 stream in the EXIF tags. ``` File lib/python3.12/site-packages/datasets/features/image.py:188, in Image.decode_example(self, value, token_per_repo_id) 186 image = PIL.Image.open(BytesIO(bytes_)) 187 image.load() # to avoid "Too many open files" errors --> 188 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None: 189 image = PIL.ImageOps.exif_transpose(image) 190 if self.mode and self.mode != image.mode: File lib/python3.12/site-packages/PIL/Image.py:1542, in Image.getexif(self) 1540 xmp_tags = self.info.get("XML:com.adobe.xmp") 1541 if not xmp_tags and (xmp_tags := self.info.get("xmp")): -> 1542 xmp_tags = xmp_tags.decode("utf-8") 1543 if xmp_tags: 1544 match = re.search(r'tiff:Orientation(="|>)([0-9])', xmp_tags) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 4312: invalid start byte ``` ### Expected behavior The invalid EXIF tag should simply be ignored, or a warning should be issued, instead of crashing the whole program at once. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35 - Python version: 3.12.11 - `huggingface_hub` version: 0.33.0 - PyArrow version: 20.0.0 - Pandas version: 2.3.0 - `fsspec` version: 2025.3.0
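A hedged workaround sketch, not the library's actual fix: wrap the orientation lookup so that broken EXIF/XMP metadata is skipped instead of crashing the decode:

```python
from io import BytesIO

import PIL.Image
import PIL.ImageOps

def decode_image_safely(image_bytes: bytes) -> PIL.Image.Image:
    image = PIL.Image.open(BytesIO(image_bytes))
    image.load()  # as in datasets, to avoid "Too many open files" errors
    try:
        orientation = image.getexif().get(PIL.Image.ExifTags.Base.Orientation)
    except (UnicodeDecodeError, SyntaxError):
        orientation = None  # invalid metadata: skip the orientation transpose
    if orientation is not None:
        image = PIL.ImageOps.exif_transpose(image)
    return image
```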
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7668/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7668/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7667/comments
https://api.github.com/repos/huggingface/datasets/issues/7667/events
https://github.com/huggingface/datasets/pull/7667
3,196,251,707
PR_kwDODunzps6dGmm8
7,667
Fix infer list of images
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-02 15:07:58+00:00
2025-07-02 15:10:28+00:00
2025-07-02 15:08:03+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7667.diff", "html_url": "https://github.com/huggingface/datasets/pull/7667", "merged_at": "2025-07-02T15:08:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/7667.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7667" }
cc @kashif
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7667/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7667/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7666/comments
https://api.github.com/repos/huggingface/datasets/issues/7666/events
https://github.com/huggingface/datasets/pull/7666
3,196,220,722
PR_kwDODunzps6dGf7E
7,666
Backward compat list feature
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-02 14:58:00+00:00
2025-07-02 15:00:37+00:00
2025-07-02 14:59:40+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7666.diff", "html_url": "https://github.com/huggingface/datasets/pull/7666", "merged_at": "2025-07-02T14:59:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/7666.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7666" }
cc @kashif
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7666/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7666/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7665/comments
https://api.github.com/repos/huggingface/datasets/issues/7665/events
https://github.com/huggingface/datasets/issues/7665
3,193,239,955
I_kwDODunzps6-VPmT
7,665
Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files
{ "avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4", "events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}", "followers_url": "https://api.github.com/users/zdzichukowalski/followers", "following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}", "gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zdzichukowalski", "id": 1151198, "login": "zdzichukowalski", "node_id": "MDQ6VXNlcjExNTExOTg=", "organizations_url": "https://api.github.com/users/zdzichukowalski/orgs", "received_events_url": "https://api.github.com/users/zdzichukowalski/received_events", "repos_url": "https://api.github.com/users/zdzichukowalski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions", "type": "User", "url": "https://api.github.com/users/zdzichukowalski", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-01 17:14:53+00:00
2025-07-01 17:17:48+00:00
2025-07-01 17:17:48+00:00
NONE
null
null
null
null
### Describe the bug When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema. In my case there is a field `body:` with a string value ``` "### Describe the bug (...) ,action: string, datetime: timestamp[s], author: string, (...) Pandas version: 1.3.4" ``` As a result, I got an exception ``` "TypeError: Couldn't cast array of type timestamp[s] to null". ``` Full stack-trace in the attached file below. I also attach a minimized dataset (data.json, a single entry) that reproduces the error. **Observations** (on the minimal example): - if I remove _all fields before_ `body`, a different error appears, - if I remove _all fields after_ `body`, yet another error appears, - if `body` is _the only field_, the error disappears. So this might be one complex bug or several edge cases interacting. I haven’t dug deeper. Also changing the file extension to `.json` or `.txt` avoids the problem. This suggests **a possible workaround** for the general case: convert `.jsonl` to `.json`. Though I haven’t verified the correctness of that workaround yet. Anyway, my understanding is that `load_dataset` with its first argument set to "json" should properly handle `.jsonl` files. Correct me if I'm wrong. [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt) [data.json](https://github.com/user-attachments/files/21004164/data.json) P.S. I discovered this while going through the HuggingFace tutorial. Specifically [this part](https://huggingface.co/learn/llm-course/chapter5/5?fw=pt). I will try to inform the tutorial team about this issue, as it can be a showstopper for young 🤗 adepts. ### Steps to reproduce the bug 1. Download the attached [data.json](https://github.com/user-attachments/files/21004164/data.json) file. 2. Run the following code, which should work correctly: ``` from datasets import load_dataset load_dataset("json", data_files="data.json", split="train") ``` 3. Change the extension of the `data` file to `.jsonl` and run: ``` from datasets import load_dataset load_dataset("json", data_files="data.jsonl", split="train") ``` This will trigger an error like the one in the attached [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt). One can also try removing fields before the `body` field and after it. These actions give different errors. ### Expected behavior Parsing data in `.jsonl` format should yield the same result as parsing the same data in `.json` format. In any case, the content of a string field should never be interpreted as part of the dataset schema. ### Environment info datasets version: _3.6.0_ pyarrow version: _20.0.0_ Python version: _3.11.9_ platform version: _macOS-15.5-arm64-arm-64bit_
{ "avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4", "events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}", "followers_url": "https://api.github.com/users/zdzichukowalski/followers", "following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}", "gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zdzichukowalski", "id": 1151198, "login": "zdzichukowalski", "node_id": "MDQ6VXNlcjExNTExOTg=", "organizations_url": "https://api.github.com/users/zdzichukowalski/orgs", "received_events_url": "https://api.github.com/users/zdzichukowalski/received_events", "repos_url": "https://api.github.com/users/zdzichukowalski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions", "type": "User", "url": "https://api.github.com/users/zdzichukowalski", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7665/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7665/timeline
null
duplicate
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7664/comments
https://api.github.com/repos/huggingface/datasets/issues/7664/events
https://github.com/huggingface/datasets/issues/7664
3,193,239,035
I_kwDODunzps6-VPX7
7,664
Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files
{ "avatar_url": "https://avatars.githubusercontent.com/u/1151198?v=4", "events_url": "https://api.github.com/users/zdzichukowalski/events{/privacy}", "followers_url": "https://api.github.com/users/zdzichukowalski/followers", "following_url": "https://api.github.com/users/zdzichukowalski/following{/other_user}", "gists_url": "https://api.github.com/users/zdzichukowalski/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zdzichukowalski", "id": 1151198, "login": "zdzichukowalski", "node_id": "MDQ6VXNlcjExNTExOTg=", "organizations_url": "https://api.github.com/users/zdzichukowalski/orgs", "received_events_url": "https://api.github.com/users/zdzichukowalski/received_events", "repos_url": "https://api.github.com/users/zdzichukowalski/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zdzichukowalski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zdzichukowalski/subscriptions", "type": "User", "url": "https://api.github.com/users/zdzichukowalski", "user_view_type": "public" }
[]
open
false
null
[]
null
6
2025-07-01 17:14:32+00:00
2025-07-09 13:14:11+00:00
NaT
NONE
null
null
null
null
### Describe the bug When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema. In my case there is a field `body:` with a string value ``` "### Describe the bug (...) ,action: string, datetime: timestamp[s], author: string, (...) Pandas version: 1.3.4" ``` As a result, I got an exception ``` "TypeError: Couldn't cast array of type timestamp[s] to null". ``` Full stack-trace in the attached file below. I also attach a minimized dataset (data.json, a single entry) that reproduces the error. **Observations** (on the minimal example): - if I remove _all fields before_ `body`, a different error appears, - if I remove _all fields after_ `body`, yet another error appears, - if `body` is _the only field_, the error disappears. So this might be one complex bug or several edge cases interacting. I haven’t dug deeper. Also changing the file extension to `.json` or `.txt` avoids the problem. This suggests **a possible workaround** for the general case: convert `.jsonl` to `.json`. Though I haven’t verified the correctness of that workaround yet. Anyway, my understanding is that `load_dataset` with its first argument set to "json" should properly handle `.jsonl` files. Correct me if I'm wrong. [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt) [data.json](https://github.com/user-attachments/files/21004164/data.json) P.S. I discovered this while going through the HuggingFace tutorial. Specifically [this part](https://huggingface.co/learn/llm-course/chapter5/5?fw=pt). I will try to inform the tutorial team about this issue, as it can be a showstopper for young 🤗 adepts. ### Steps to reproduce the bug 1. Download the attached [data.json](https://github.com/user-attachments/files/21004164/data.json) file. 2. Run the following code, which should work correctly: ``` from datasets import load_dataset load_dataset("json", data_files="data.json", split="train") ``` 3. Change the extension of the `data` file to `.jsonl` and run: ``` from datasets import load_dataset load_dataset("json", data_files="data.jsonl", split="train") ``` This will trigger an error like the one in the attached [stack_trace.txt](https://github.com/user-attachments/files/21004153/stack_trace.txt). One can also try removing fields before the `body` field and after it. These actions give different errors. ### Expected behavior Parsing data in `.jsonl` format should yield the same result as parsing the same data in `.json` format. In any case, the content of a string field should never be interpreted as part of the dataset schema. ### Environment info datasets version: _3.6.0_ pyarrow version: _20.0.0_ Python version: _3.11.9_ platform version: _macOS-15.5-arm64-arm-64bit_
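Until the loader is fixed, one hedged workaround sketch along the lines of the observation above: rewrap the JSON-Lines records as a single JSON array, which the reporter found to parse correctly (assumes the file fits in memory):

```python
import json

# Convert data.jsonl (one JSON object per line) into data.json (a JSON array)
with open("data.jsonl") as src:
    records = [json.loads(line) for line in src if line.strip()]
with open("data.json", "w") as dst:
    json.dump(records, dst)
```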
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7664/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7664/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7663/comments
https://api.github.com/repos/huggingface/datasets/issues/7663/events
https://github.com/huggingface/datasets/pull/7663
3,192,582,371
PR_kwDODunzps6c6aJF
7,663
Custom metadata filenames
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-07-01 13:50:36+00:00
2025-07-01 13:58:41+00:00
2025-07-01 13:58:39+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7663.diff", "html_url": "https://github.com/huggingface/datasets/pull/7663", "merged_at": "2025-07-01T13:58:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/7663.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7663" }
example: https://huggingface.co/datasets/lhoestq/overlapping-subsets-imagefolder/tree/main To make multiple subsets for an imagefolder (one metadata file per subset), e.g. ```yaml configs: - config_name: default metadata_filenames: - metadata.csv - config_name: other metadata_filenames: - metadata2.csv ```
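Assuming a repo configured as in the YAML above, each subset is then loadable by config name; a usage sketch:

```python
from datasets import load_dataset

# "default" reads metadata.csv, "other" reads metadata2.csv
default_ds = load_dataset("lhoestq/overlapping-subsets-imagefolder", "default", split="train")
other_ds = load_dataset("lhoestq/overlapping-subsets-imagefolder", "other", split="train")
```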
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7663/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7662/comments
https://api.github.com/repos/huggingface/datasets/issues/7662/events
https://github.com/huggingface/datasets/issues/7662
3,190,805,531
I_kwDODunzps6-L9Qb
7,662
Applying map after transform with multiprocessing will cause OOM
{ "avatar_url": "https://avatars.githubusercontent.com/u/26482910?v=4", "events_url": "https://api.github.com/users/JunjieLl/events{/privacy}", "followers_url": "https://api.github.com/users/JunjieLl/followers", "following_url": "https://api.github.com/users/JunjieLl/following{/other_user}", "gists_url": "https://api.github.com/users/JunjieLl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JunjieLl", "id": 26482910, "login": "JunjieLl", "node_id": "MDQ6VXNlcjI2NDgyOTEw", "organizations_url": "https://api.github.com/users/JunjieLl/orgs", "received_events_url": "https://api.github.com/users/JunjieLl/received_events", "repos_url": "https://api.github.com/users/JunjieLl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JunjieLl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JunjieLl/subscriptions", "type": "User", "url": "https://api.github.com/users/JunjieLl", "user_view_type": "public" }
[]
open
false
null
[]
null
5
2025-07-01 05:45:57+00:00
2025-07-10 06:17:40+00:00
NaT
NONE
null
null
null
null
### Describe the bug I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I found that the OOM is caused at this point, and I suspect it’s because the add_column and cast_column operations are not cached, which causes the entire dataset to be loaded in each subprocess, leading to the OOM. The critical line of code is: https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/py_utils.py#L607 Note that num_process=1 would not cause OOM. I'm confused. ### Steps to reproduce the bug To reproduce, load the amphion/Emilia-Dataset dataset with cache_dir set (for caching); it is a very large dataset that does not fit in RAM. Then apply map with multiprocessing after a transform operation (e.g. add_column, cast_column). As long as num_process > 1, it causes OOM. ### Expected behavior It should not cause OOM. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-5.10.134-16.101.al8.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.33.1 - PyArrow version: 20.0.0 - Pandas version: 2.3.0 - `fsspec` version: 2024.6.1
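A condensed repro sketch of the pattern described; the split name, cache path, and added column are illustrative assumptions:

```python
from datasets import load_dataset

ds = load_dataset("amphion/Emilia-Dataset", split="train", cache_dir="/data/cache")
ds = ds.add_column("source", ["emilia"] * len(ds))  # transform is not cached to disk
ds = ds.map(lambda ex: ex, num_proc=8)  # num_proc > 1 triggers the reported OOM
```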
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7662/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7662/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7661/comments
https://api.github.com/repos/huggingface/datasets/issues/7661/events
https://github.com/huggingface/datasets/pull/7661
3,190,408,237
PR_kwDODunzps6czBDi
7,661
fix del tqdm lock error
{ "avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4", "events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}", "followers_url": "https://api.github.com/users/Hypothesis-Z/followers", "following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}", "gists_url": "https://api.github.com/users/Hypothesis-Z/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hypothesis-Z", "id": 44766273, "login": "Hypothesis-Z", "node_id": "MDQ6VXNlcjQ0NzY2Mjcz", "organizations_url": "https://api.github.com/users/Hypothesis-Z/orgs", "received_events_url": "https://api.github.com/users/Hypothesis-Z/received_events", "repos_url": "https://api.github.com/users/Hypothesis-Z/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hypothesis-Z/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hypothesis-Z/subscriptions", "type": "User", "url": "https://api.github.com/users/Hypothesis-Z", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2025-07-01 02:04:02+00:00
2025-08-13 13:16:44+00:00
NaT
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7661.diff", "html_url": "https://github.com/huggingface/datasets/pull/7661", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7661.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7661" }
fixes https://github.com/huggingface/datasets/issues/7660
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7661/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7661/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7660/comments
https://api.github.com/repos/huggingface/datasets/issues/7660/events
https://github.com/huggingface/datasets/issues/7660
3,189,028,251
I_kwDODunzps6-FLWb
7,660
AttributeError: type object 'tqdm' has no attribute '_lock'
{ "avatar_url": "https://avatars.githubusercontent.com/u/44766273?v=4", "events_url": "https://api.github.com/users/Hypothesis-Z/events{/privacy}", "followers_url": "https://api.github.com/users/Hypothesis-Z/followers", "following_url": "https://api.github.com/users/Hypothesis-Z/following{/other_user}", "gists_url": "https://api.github.com/users/Hypothesis-Z/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hypothesis-Z", "id": 44766273, "login": "Hypothesis-Z", "node_id": "MDQ6VXNlcjQ0NzY2Mjcz", "organizations_url": "https://api.github.com/users/Hypothesis-Z/orgs", "received_events_url": "https://api.github.com/users/Hypothesis-Z/received_events", "repos_url": "https://api.github.com/users/Hypothesis-Z/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hypothesis-Z/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hypothesis-Z/subscriptions", "type": "User", "url": "https://api.github.com/users/Hypothesis-Z", "user_view_type": "public" }
[]
open
false
null
[]
null
2
2025-06-30 15:57:16+00:00
2025-07-03 15:14:27+00:00
NaT
NONE
null
null
null
null
### Describe the bug `AttributeError: type object 'tqdm' has no attribute '_lock'` It occurs when I'm trying to load datasets in a thread pool. Issue https://github.com/huggingface/datasets/issues/6066 and PRs https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to fix this. ### Steps to reproduce the bug You may have to try several times to reproduce the error because it depends on thread timing. 1. Save some datasets for testing ```python from datasets import Dataset, DatasetDict import os os.makedirs("test_dataset_shards", exist_ok=True) for i in range(10): data = Dataset.from_dict({"text": [f"example {j}" for j in range(1000000)]}) data = DatasetDict({'train': data}) data.save_to_disk(f"test_dataset_shards/shard_{i}") ``` 2. Load them in a thread pool ```python from datasets import load_from_disk from tqdm import tqdm from concurrent.futures import ThreadPoolExecutor, as_completed import glob datas = glob.glob('test_dataset_shards/shard_*') with ThreadPoolExecutor(max_workers=10) as pool: futures = [pool.submit(load_from_disk, it) for it in datas] datas = [] for future in tqdm(as_completed(futures), total=len(futures)): datas.append(future.result()) ``` ### Expected behavior No exception raised. ### Environment info datasets==2.19.0 python==3.10
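Until this is fixed upstream, a hedged mitigation sketch is to create tqdm's class-level lock once up front, so worker threads never race its lazy initialization or teardown:

```python
import threading

from tqdm import tqdm

# Install the shared lock before any thread creates or closes a progress bar
tqdm.set_lock(threading.RLock())
```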
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7660/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7660/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7659/comments
https://api.github.com/repos/huggingface/datasets/issues/7659/events
https://github.com/huggingface/datasets/pull/7659
3,187,882,217
PR_kwDODunzps6cqkou
7,659
Update the beans dataset link in Preprocess
{ "avatar_url": "https://avatars.githubusercontent.com/u/5434867?v=4", "events_url": "https://api.github.com/users/HJassar/events{/privacy}", "followers_url": "https://api.github.com/users/HJassar/followers", "following_url": "https://api.github.com/users/HJassar/following{/other_user}", "gists_url": "https://api.github.com/users/HJassar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/HJassar", "id": 5434867, "login": "HJassar", "node_id": "MDQ6VXNlcjU0MzQ4Njc=", "organizations_url": "https://api.github.com/users/HJassar/orgs", "received_events_url": "https://api.github.com/users/HJassar/received_events", "repos_url": "https://api.github.com/users/HJassar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/HJassar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HJassar/subscriptions", "type": "User", "url": "https://api.github.com/users/HJassar", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2025-06-30 09:58:44+00:00
2025-07-07 08:38:19+00:00
2025-07-01 14:01:42+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7659.diff", "html_url": "https://github.com/huggingface/datasets/pull/7659", "merged_at": "2025-07-01T14:01:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/7659.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7659" }
In the Preprocess tutorial, the link to "the beans dataset" is incorrect. Fixed.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7659/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7659/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7658/comments
https://api.github.com/repos/huggingface/datasets/issues/7658/events
https://github.com/huggingface/datasets/pull/7658
3,187,800,504
PR_kwDODunzps6cqTMs
7,658
Fix: Prevent loss of info.features and column_names in IterableDatasetDict.map when features is None
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
null
[]
null
5
2025-06-30 09:31:12+00:00
2025-07-01 16:26:30+00:00
2025-07-01 16:26:12+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7658.diff", "html_url": "https://github.com/huggingface/datasets/pull/7658", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7658.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7658" }
This PR fixes a bug where calling `IterableDatasetDict.map()` or `IterableDataset.map()` with the default `features=None` argument would overwrite the existing `info.features` attribute with `None`. This, in turn, caused the resulting dataset to lose its schema, breaking downstream usage of attributes like `column_names`. **Why:** Previously, the code would always set `info.features = features`, even if `features` was `None`. When mapping with removal of columns or other transformations, this led to the destruction of the schema and caused failures in code that relied on the dataset schema being present. **How:** We now update `info.features` only if `features` is not `None`. This preserves the original schema unless the user explicitly provides a new one. **Reference:** Fixes #7568
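The guarded assignment, sketched in isolation; names mirror the description rather than the exact source:

```python
def update_info_features(info, features=None):
    # Only override the schema when the caller explicitly provides one;
    # features=None must leave info.features (and thus column_names) intact.
    if features is not None:
        info.features = features
    return info
```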
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7658/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7658/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7657
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7657/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7657/comments
https://api.github.com/repos/huggingface/datasets/issues/7657/events
https://github.com/huggingface/datasets/pull/7657
3,186,036,016
PR_kwDODunzps6cks2E
7,657
feat: add subset_name as alias for name in load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-29 10:39:00+00:00
2025-07-18 17:45:41+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7657.diff", "html_url": "https://github.com/huggingface/datasets/pull/7657", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7657.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7657" }
fixes #7637 This PR introduces `subset_name` as a user-facing alias for the `name` (previously `config_name`) argument in `load_dataset`. It aligns terminology with the Hugging Face Hub UI (which shows “Subset”), reducing confusion for new users. - Supports `subset_name` in `load_dataset()` - Adds a `.subset_name` property to `DatasetBuilder` - Maintains full backward compatibility - Raises a clear error if `name` and `subset_name` conflict
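A hypothetical sketch of the alias resolution described above (not the PR's exact code):

```python
def resolve_config_name(name=None, subset_name=None):
    # `subset_name` is a pure alias for `name`; conflicting values are an error
    if name is not None and subset_name is not None and name != subset_name:
        raise ValueError("Pass either `name` or `subset_name`, not conflicting values.")
    return subset_name if subset_name is not None else name
```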
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7657/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7657/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7656/comments
https://api.github.com/repos/huggingface/datasets/issues/7656/events
https://github.com/huggingface/datasets/pull/7656
3,185,865,686
PR_kwDODunzps6ckPHc
7,656
fix(iterable): ensure MappedExamplesIterable supports state_dict for resume
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-29 07:50:13+00:00
2025-06-29 07:50:13+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7656.diff", "html_url": "https://github.com/huggingface/datasets/pull/7656", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7656.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7656" }
Fixes #7630 ### Problem When calling `.map()` on an `IterableDataset`, resuming from a checkpoint skips a large number of samples. This is because `MappedExamplesIterable` did not implement `state_dict()` or `load_state_dict()`, so checkpointing was not properly delegated to the underlying iterable. ### What This PR Does This patch adds: ```python def state_dict(self): return self.ex_iterable.state_dict() def load_state_dict(self, state): self.ex_iterable.load_state_dict(state) ``` to MappedExamplesIterable, so the wrapped base iterable's state can be saved and restored as expected. **Result:** Using `.map()` no longer causes sample skipping after checkpoint resume. Let me know if a dedicated test case is required; happy to add one!
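With the delegation in place, the documented checkpoint/resume pattern should work over a mapped stream; a usage sketch:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(6))}).to_iterable_dataset(num_shards=3)
ds = ds.map(lambda ex: {"x": ex["x"] * 2})

state_dict = None
for idx, example in enumerate(ds):
    if idx == 2:
        state_dict = ds.state_dict()  # checkpoint mid-stream
        break

ds.load_state_dict(state_dict)  # with the fix, iteration resumes after example 3
for example in ds:
    print(example)
```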
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7656/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7656/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7655/comments
https://api.github.com/repos/huggingface/datasets/issues/7655/events
https://github.com/huggingface/datasets/pull/7655
3,185,382,105
PR_kwDODunzps6ci9oi
7,655
Added specific use cases in Improve Performace
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-28 19:00:32+00:00
2025-06-28 19:00:32+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7655.diff", "html_url": "https://github.com/huggingface/datasets/pull/7655", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7655.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7655" }
Fixes #2494
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7655/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7655/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7654/comments
https://api.github.com/repos/huggingface/datasets/issues/7654/events
https://github.com/huggingface/datasets/pull/7654
3,184,770,992
PR_kwDODunzps6chPmz
7,654
fix(load): strip deprecated use_auth_token from config_kwargs
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-28 09:20:21+00:00
2025-06-28 09:20:21+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7654.diff", "html_url": "https://github.com/huggingface/datasets/pull/7654", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7654.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7654" }
Fixes #7504 This PR resolves a compatibility error when loading datasets via `load_dataset()` using outdated arguments like `use_auth_token`. **What was happening:** Users passing `use_auth_token` in `load_dataset(..., use_auth_token=...)` encountered a `ValueError`: BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key. **Why:** `use_auth_token` has been deprecated and removed from config definitions (replaced by `token`), but the `load_dataset()` function still forwarded it via `**config_kwargs` to BuilderConfigs, leading to unrecognized key errors. **Fix:** We now intercept and strip `use_auth_token` from `config_kwargs` inside `load_dataset`, replacing it with a warning: ```python if "use_auth_token" in config_kwargs: logger.warning("The 'use_auth_token' argument is deprecated. Please use 'token' instead.") config_kwargs.pop("use_auth_token") ``` This ensures legacy compatibility while guiding users to switch to the token argument. Let me know if you'd prefer a deprecation error instead of a warning. Thanks!
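Caller-side effect, sketched; the repo id and token are placeholders:

```python
from datasets import load_dataset

# Legacy spelling: now stripped with a deprecation warning instead of a ValueError
ds = load_dataset("user/private-parquet-dataset", use_auth_token="hf_XXXX")

# Preferred modern spelling
ds = load_dataset("user/private-parquet-dataset", token="hf_XXXX")
```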
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7654/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7654/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7653/comments
https://api.github.com/repos/huggingface/datasets/issues/7653/events
https://github.com/huggingface/datasets/pull/7653
3,184,746,093
PR_kwDODunzps6chLmb
7,653
feat(load): fallback to `load_from_disk()` when loading a saved dataset directory
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-28 08:47:36+00:00
2025-06-28 08:47:36+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7653.diff", "html_url": "https://github.com/huggingface/datasets/pull/7653", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7653.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7653" }
### Related Issue Fixes #7503 Partially addresses #5044 by allowing `load_dataset()` to auto-detect and gracefully delegate to `load_from_disk()` for locally saved datasets. --- ### What does this PR do? This PR introduces a minimal fallback mechanism in `load_dataset()` that detects when the provided `path` points to a dataset saved using `save_to_disk()`, and automatically redirects to `load_from_disk()`. #### 🐛 Before (unexpected metadata-only rows): ```python ds = load_dataset("/path/to/saved_dataset") # → returns rows with only internal metadata (_data_files, _fingerprint, etc.) ```` #### ✅ After (graceful fallback): ```python ds = load_dataset("/path/to/saved_dataset") # → logs a warning and internally switches to load_from_disk() ``` --- ### Why is this useful? * Prevents confusion when reloading local datasets saved via `save_to_disk()`. * Enables smoother compatibility with frameworks (e.g., TRL, `lighteval`) that rely on `load_dataset()` calls. * Fully backward-compatible — hub-based loading, custom builders, and streaming remain untouched.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7653/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7653/timeline
null
null
null
null
true
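For reference, a rough sketch of the kind of directory detection such a fallback needs. `save_to_disk()` writes marker files (`dataset_dict.json` for a `DatasetDict`; `state.json` and `dataset_info.json` for a single `Dataset`); whether the PR checks exactly these names is an assumption:

```python
import os

def is_saved_to_disk_dir(path: str) -> bool:
    # Heuristic: directories produced by save_to_disk() contain these marker
    # files (assumption; the actual PR may check a different set of names).
    if os.path.isfile(os.path.join(path, "dataset_dict.json")):
        return True  # DatasetDict saved with save_to_disk()
    return os.path.isfile(os.path.join(path, "state.json")) and os.path.isfile(
        os.path.join(path, "dataset_info.json")
    )  # single Dataset saved with save_to_disk()
```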
https://api.github.com/repos/huggingface/datasets/issues/7652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7652/comments
https://api.github.com/repos/huggingface/datasets/issues/7652/events
https://github.com/huggingface/datasets/pull/7652
3,183,372,055
PR_kwDODunzps6cdCnv
7,652
Add columns support to JSON loader for selective key filtering
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
null
[]
null
3
2025-06-27 16:18:42+00:00
2025-09-04 17:35:31+00:00
2025-09-04 17:35:31+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7652.diff", "html_url": "https://github.com/huggingface/datasets/pull/7652", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7652.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7652" }
Fixes #7594 This PR adds support for filtering specific columns when loading datasets from .json or .jsonl files, similar to how the `columns=...` argument works for Parquet. Support for `columns=...` has now been extended to **JSON and JSONL** loading via `load_dataset(...)`: you can load only specific keys/columns and skip the rest, which should help in cases where some fields are unclean, inconsistent, or simply unnecessary. ### Example: ```python from datasets import load_dataset dataset = load_dataset("json", data_files="your_data.jsonl", columns=["id", "title"]) print(dataset["train"].column_names) # Output: ['id', 'title'] ``` ### Summary of changes: * Added `columns: Optional[List[str]]` to `JsonConfig` * Updated `_generate_tables()` to filter selected columns * Forwarded the `columns` argument from `load_dataset()` to the config * Added a test for validation Let me know if you'd like the same to be added for CSV or other loaders as a follow-up. Happy to help!
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7652/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7652/timeline
null
null
null
null
true
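A minimal sketch of the filtering step that `_generate_tables()` can apply per table; the null-filling for missing keys is an assumption about edge-case handling, not necessarily what the PR does:

```python
import pyarrow as pa

def filter_columns(pa_table: pa.Table, columns):
    # Keep only the requested columns; missing ones become all-null columns
    # so every shard yields a consistent schema (assumed edge-case behavior).
    if columns is None:
        return pa_table
    arrays = []
    for name in columns:
        if name in pa_table.column_names:
            arrays.append(pa_table[name])
        else:
            arrays.append(pa.nulls(len(pa_table)))
    return pa.table(dict(zip(columns, arrays)))

table = pa.table({"id": [1, 2], "title": ["a", "b"], "noise": [0, 0]})
print(filter_columns(table, ["id", "title"]).column_names)  # ['id', 'title']
```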
https://api.github.com/repos/huggingface/datasets/issues/7651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7651/comments
https://api.github.com/repos/huggingface/datasets/issues/7651/events
https://github.com/huggingface/datasets/pull/7651
3,182,792,775
PR_kwDODunzps6cbMmg
7,651
fix: Extended metadata file names for folder_based_builder
{ "avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4", "events_url": "https://api.github.com/users/iPieter/events{/privacy}", "followers_url": "https://api.github.com/users/iPieter/followers", "following_url": "https://api.github.com/users/iPieter/following{/other_user}", "gists_url": "https://api.github.com/users/iPieter/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iPieter", "id": 6965756, "login": "iPieter", "node_id": "MDQ6VXNlcjY5NjU3NTY=", "organizations_url": "https://api.github.com/users/iPieter/orgs", "received_events_url": "https://api.github.com/users/iPieter/received_events", "repos_url": "https://api.github.com/users/iPieter/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iPieter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iPieter/subscriptions", "type": "User", "url": "https://api.github.com/users/iPieter", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-27 13:12:11+00:00
2025-06-30 08:19:37+00:00
NaT
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7651.diff", "html_url": "https://github.com/huggingface/datasets/pull/7651", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7651.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7651" }
Fixes #7650. The metadata files generated by `DatasetDict.save_to_disk` are not included in the folder_based_builder's metadata list, causing issues when only one actual data file is present, as described in issue #7650. This PR adds these filenames to the builder, allowing correct loading.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7651/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7651/timeline
null
null
null
null
true
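The shape of the fix, reduced to its essentials: a reserved set of metadata file names that the builder skips when collecting data files. The constant name and exact file list here are hypothetical:

```python
# Hypothetical constant mirroring the fix: files emitted by save_to_disk()
# that must never be treated as data files by the folder-based builder.
METADATA_FILENAMES = {"dataset_info.json", "state.json", "dataset_dict.json"}

def is_data_file(filename: str) -> bool:
    return filename not in METADATA_FILENAMES

print([f for f in ["data-00000-of-00001.arrow", "state.json"] if is_data_file(f)])
# ['data-00000-of-00001.arrow']
```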
https://api.github.com/repos/huggingface/datasets/issues/7650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7650/comments
https://api.github.com/repos/huggingface/datasets/issues/7650/events
https://github.com/huggingface/datasets/issues/7650
3,182,745,315
I_kwDODunzps69tNbj
7,650
`load_dataset` defaults to json file format for datasets with 1 shard
{ "avatar_url": "https://avatars.githubusercontent.com/u/6965756?v=4", "events_url": "https://api.github.com/users/iPieter/events{/privacy}", "followers_url": "https://api.github.com/users/iPieter/followers", "following_url": "https://api.github.com/users/iPieter/following{/other_user}", "gists_url": "https://api.github.com/users/iPieter/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iPieter", "id": 6965756, "login": "iPieter", "node_id": "MDQ6VXNlcjY5NjU3NTY=", "organizations_url": "https://api.github.com/users/iPieter/orgs", "received_events_url": "https://api.github.com/users/iPieter/received_events", "repos_url": "https://api.github.com/users/iPieter/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iPieter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iPieter/subscriptions", "type": "User", "url": "https://api.github.com/users/iPieter", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-27 12:54:25+00:00
2025-06-27 12:54:25+00:00
NaT
NONE
null
null
null
null
### Describe the bug I currently have multiple datasets (train+validation) saved as 50MB shards. For one dataset, the validation split is small enough to fit into a single shard, and this apparently causes problems when loading the dataset. I created the datasets using a DatasetDict, saved them as 50MB arrow files for streaming, and then load each dataset. I have no problem loading any of the other datasets with more than 1 arrow file/shard. The error indicates the training set got loaded in arrow format (correct) and the validation set in json (incorrect). This seems to be because some of the metadata files are considered dataset files. ``` Error loading /nfs/dataset_pt-uk: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('validation'): ('json', {})} ``` ![Image](https://github.com/user-attachments/assets/f6e7596a-dd53-46a9-9a23-4e9cac2ac049) Concretely, there is a mismatch between the metadata created by `DatasetDict.save_to_disk` and the builder used by `datasets.load_dataset`: https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/data_files.py#L107 The `folder_based_builder` lists all files, and with only 1 arrow file, the JSON files (which are actually metadata) are in the majority. https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L58 ### Steps to reproduce the bug Create a dataset with metadata and 1 arrow file in the validation set and multiple arrow files in the training set, following the above description. In my case, I saved the files via: ```python dataset = DatasetDict({ 'train': train_dataset, 'validation': val_dataset }) dataset.save_to_disk(output_path, max_shard_size="50MB") ``` ### Expected behavior The dataset loads correctly. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-6.14.0-22-generic-x86_64-with-glibc2.41 - Python version: 3.12.7 - `huggingface_hub` version: 0.31.1 - PyArrow version: 18.1.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7650/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7650/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
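Until the builder handles this case, a workaround consistent with the report is to load the directory with `load_from_disk()`, which reads the save_to_disk metadata directly instead of inferring formats from file extensions (the path below is the one from the report):

```python
from datasets import load_from_disk

# Loads both splits from a directory produced by DatasetDict.save_to_disk(),
# bypassing the format inference that misclassifies the metadata JSON files.
dataset = load_from_disk("/nfs/dataset_pt-uk")
print(dataset)
```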
https://api.github.com/repos/huggingface/datasets/issues/7649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7649/comments
https://api.github.com/repos/huggingface/datasets/issues/7649/events
https://github.com/huggingface/datasets/pull/7649
3,181,481,444
PR_kwDODunzps6cW0sQ
7,649
Enable parallel shard upload in push_to_hub() using num_proc
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-06-27 05:59:03+00:00
2025-07-07 18:13:53+00:00
2025-07-07 18:13:52+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7649.diff", "html_url": "https://github.com/huggingface/datasets/pull/7649", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7649.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7649" }
Fixes #7591 ### Add num_proc support to `push_to_hub()` for parallel shard upload This PR adds support for parallel upload of dataset shards via the `num_proc` argument in `Dataset.push_to_hub()`. 📌 While the `num_proc` parameter was already present in the `push_to_hub()` signature and correctly passed to `_push_parquet_shards_to_hub()`, it was not being used to parallelize the upload. 🔧 This PR updates the internal `_push_parquet_shards_to_hub()` function to: - Use `multiprocessing.Pool` and `iflatmap_unordered()` for concurrent shard upload when `num_proc > 1` - Preserve original serial upload behavior if `num_proc` is `None` or ≤ 1 - Keep tqdm progress and commit behavior unchanged Let me know if any test coverage or further changes are needed!
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7649/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7649/timeline
null
null
null
null
true
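Usage stays a one-liner; with this change, `num_proc > 1` uploads shards from a worker pool (the repo name below is illustrative, and pushing requires authentication):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
# With num_proc > 1 the parquet shards are prepared and uploaded by a pool
# of workers; num_proc=None (or <= 1) keeps the original serial behavior.
ds.push_to_hub("username/my-dataset", num_proc=4)
```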
https://api.github.com/repos/huggingface/datasets/issues/7648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7648/comments
https://api.github.com/repos/huggingface/datasets/issues/7648/events
https://github.com/huggingface/datasets/pull/7648
3,181,409,736
PR_kwDODunzps6cWmSn
7,648
Fix misleading add_column() usage example in docstring
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
null
[]
null
8
2025-06-27 05:27:04+00:00
2025-07-28 19:42:34+00:00
2025-07-17 13:14:17+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7648.diff", "html_url": "https://github.com/huggingface/datasets/pull/7648", "merged_at": "2025-07-17T13:14:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/7648.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7648" }
Fixes #7611 This PR fixes the usage example in the `Dataset.add_column()` docstring, which previously implied that `add_column()` modifies the dataset in-place. Why: the method returns a new dataset with the additional column, and users must assign the result to a variable to preserve the change. This should make the behavior clearer for users. @lhoestq @davanstrien
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7648/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7648/timeline
null
null
null
null
true
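The corrected pattern the docstring now demonstrates: `add_column()` is out-of-place, so the return value must be reassigned:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})
ds = ds.add_column("label", [0, 1])  # reassign: add_column() returns a new dataset
print(ds.column_names)  # ['text', 'label']
```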
https://api.github.com/repos/huggingface/datasets/issues/7647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7647/comments
https://api.github.com/repos/huggingface/datasets/issues/7647/events
https://github.com/huggingface/datasets/issues/7647
3,178,952,517
I_kwDODunzps69evdF
7,647
loading mozilla-foundation--common_voice_11_0 fails
{ "avatar_url": "https://avatars.githubusercontent.com/u/5703039?v=4", "events_url": "https://api.github.com/users/pavel-esir/events{/privacy}", "followers_url": "https://api.github.com/users/pavel-esir/followers", "following_url": "https://api.github.com/users/pavel-esir/following{/other_user}", "gists_url": "https://api.github.com/users/pavel-esir/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pavel-esir", "id": 5703039, "login": "pavel-esir", "node_id": "MDQ6VXNlcjU3MDMwMzk=", "organizations_url": "https://api.github.com/users/pavel-esir/orgs", "received_events_url": "https://api.github.com/users/pavel-esir/received_events", "repos_url": "https://api.github.com/users/pavel-esir/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pavel-esir/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pavel-esir/subscriptions", "type": "User", "url": "https://api.github.com/users/pavel-esir", "user_view_type": "public" }
[]
open
false
null
[]
null
2
2025-06-26 12:23:48+00:00
2025-07-10 14:49:30+00:00
NaT
NONE
null
null
null
null
### Describe the bug Hello everyone, I am trying to load `mozilla-foundation/common_voice_11_0` and it fails. Reproducer ``` import datasets datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True) ``` and it fails with ``` File ~/opt/envs/.../lib/python3.10/site-packages/datasets/utils/file_utils.py:827, in _add_retries_to_file_obj_read_method.<locals>.read_with_retries(*args, **kwargs) 825 for retry in range(1, max_retries + 1): 826 try: --> 827 out = read(*args, **kwargs) 828 break 829 except ( 830 _AiohttpClientError, 831 asyncio.TimeoutError, 832 requests.exceptions.ConnectionError, 833 requests.exceptions.Timeout, 834 ) as err: File /usr/lib/python3.10/codecs.py:322, in BufferedIncrementalDecoder.decode(self, input, final) 319 def decode(self, input, final=False): 320 # decode input (taking the buffer into account) 321 data = self.buffer + input --> 322 (result, consumed) = self._buffer_decode(data, self.errors, final) 323 # keep undecoded input until the next call 324 self.buffer = data[consumed:] UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte ``` When I remove streaming, everything works fine, but I need `streaming=True`. ### Steps to reproduce the bug ``` import datasets datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True) ``` ### Expected behavior Expected the dataset to download and stream correctly. ### Environment info datasets==3.6.0 python3.10 on all platforms linux/win/mac
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7647/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7647/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
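The `0x8b` at position 1 is a strong hint: `\x1f\x8b` is the gzip magic number, so this looks like gzip-compressed bytes being handed to a UTF-8 text decoder in streaming mode. A small, self-contained reproduction of that hypothesis (whether this is the actual cause in `file_utils.py` is unconfirmed):

```python
import gzip

payload = gzip.compress(b'{"sentence": "hello"}')  # stand-in for a gzipped shard
print(payload[:2])  # b'\x1f\x8b', the gzip magic number
try:
    payload.decode("utf-8")
except UnicodeDecodeError as err:
    # 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
    print(err)
print(gzip.decompress(payload).decode("utf-8"))  # decompress first, then decode
```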
https://api.github.com/repos/huggingface/datasets/issues/7646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7646/comments
https://api.github.com/repos/huggingface/datasets/issues/7646/events
https://github.com/huggingface/datasets/pull/7646
3,178,036,854
PR_kwDODunzps6cLhrM
7,646
Introduces automatic subset-level grouping for folder-based dataset builders #7066
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
4
2025-06-26 07:01:37+00:00
2025-07-14 10:42:56+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7646.diff", "html_url": "https://github.com/huggingface/datasets/pull/7646", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7646.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7646" }
Fixes #7066 This PR introduces automatic **subset-level grouping** for folder-based dataset builders by: 1. Adding a utility function `group_files_by_subset()` that clusters files by root name (ignoring digits and shard suffixes). 2. Integrating this logic into `FolderBasedBuilder._split_generators()` to yield one split per subset. 3. Adding unit tests for the grouping function. 4. Updating the documentation to describe this new behavior under `docs/source/repository_structure.mdx`. --- ### Motivation Datasets with files like: ``` train0.jsonl train1.jsonl animals.jsonl metadata.jsonl ``` will now be **automatically grouped** as: - `"train"` subset → `train0.jsonl`, `train1.jsonl` - `"animals"` subset → `animals.jsonl` - `"metadata"` subset → `metadata.jsonl` This enables structured multi-subset loading even when the dataset doesn't follow traditional `train/validation/test` split conventions. --- ### Files Changed - `src/datasets/data_files.py`: added `group_files_by_subset()` utility - `src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py`: grouped files before yielding splits - `tests/test_data_files.py`: added unit test `test_group_files_by_subset` - `docs/source/repository_structure.mdx`: documented subset grouping for maintainers and users --- ### Benefits - More flexible and robust dataset split logic - Enables logical grouping of user-uploaded files without nested folder structure - Backward-compatible with all existing folder-based configs --- Ready for review ✅
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7646/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7646/timeline
null
null
null
null
true
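A sketch of the grouping rule described above: cluster files by root name after stripping shard suffixes and trailing digits. The exact normalization used by `group_files_by_subset()` in the PR is an assumption here:

```python
import re
from collections import defaultdict

def group_files_by_subset(files):
    # Hypothetical re-implementation: "train0.jsonl", "train1.jsonl" -> "train";
    # shard suffixes like "-00000-of-00002" are stripped as well.
    groups = defaultdict(list)
    for f in files:
        stem = f.rsplit(".", 1)[0]
        stem = re.sub(r"-\d+-of-\d+$", "", stem)  # shard suffix
        stem = re.sub(r"\d+$", "", stem)          # trailing digits
        groups[stem].append(f)
    return dict(groups)

print(group_files_by_subset(["train0.jsonl", "train1.jsonl", "animals.jsonl"]))
# {'train': ['train0.jsonl', 'train1.jsonl'], 'animals': ['animals.jsonl']}
```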
https://api.github.com/repos/huggingface/datasets/issues/7645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7645/comments
https://api.github.com/repos/huggingface/datasets/issues/7645/events
https://github.com/huggingface/datasets/pull/7645
3,176,810,164
PR_kwDODunzps6cHkp-
7,645
`ClassLabel` docs: Correct value for unknown labels
{ "avatar_url": "https://avatars.githubusercontent.com/u/56924246?v=4", "events_url": "https://api.github.com/users/l-uuz/events{/privacy}", "followers_url": "https://api.github.com/users/l-uuz/followers", "following_url": "https://api.github.com/users/l-uuz/following{/other_user}", "gists_url": "https://api.github.com/users/l-uuz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/l-uuz", "id": 56924246, "login": "l-uuz", "node_id": "MDQ6VXNlcjU2OTI0MjQ2", "organizations_url": "https://api.github.com/users/l-uuz/orgs", "received_events_url": "https://api.github.com/users/l-uuz/received_events", "repos_url": "https://api.github.com/users/l-uuz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/l-uuz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/l-uuz/subscriptions", "type": "User", "url": "https://api.github.com/users/l-uuz", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-25 20:01:35+00:00
2025-06-25 20:01:35+00:00
NaT
NONE
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7645.diff", "html_url": "https://github.com/huggingface/datasets/pull/7645", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7645.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7645" }
This small change fixes the documentation to be consistent with what actually happens in `encode_example`. https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7645/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7645/timeline
null
null
null
null
true
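What the linked `encode_example` lines imply, and what the corrected docs should say: `-1` is accepted as the sentinel for an unknown label, assuming the bounds check is `-1 <= value < num_classes` as in the linked snippet:

```python
from datasets import ClassLabel

label = ClassLabel(names=["negative", "positive"])
print(label.encode_example("positive"))  # 1
print(label.encode_example(-1))          # -1 passes the bounds check: unknown label
# label.encode_example(5) would raise ValueError: outside [-1, num_classes)
```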
https://api.github.com/repos/huggingface/datasets/issues/7644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7644/comments
https://api.github.com/repos/huggingface/datasets/issues/7644/events
https://github.com/huggingface/datasets/pull/7644
3,176,363,492
PR_kwDODunzps6cGGfW
7,644
fix sequence ci
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-06-25 17:07:55+00:00
2025-06-25 17:10:30+00:00
2025-06-25 17:08:01+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7644.diff", "html_url": "https://github.com/huggingface/datasets/pull/7644", "merged_at": "2025-06-25T17:08:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/7644.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7644" }
fix error from https://github.com/huggingface/datasets/pull/7643
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7644/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7644/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7643/comments
https://api.github.com/repos/huggingface/datasets/issues/7643/events
https://github.com/huggingface/datasets/pull/7643
3,176,354,431
PR_kwDODunzps6cGEeK
7,643
Backward compat sequence instance
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-06-25 17:05:09+00:00
2025-06-25 17:07:40+00:00
2025-06-25 17:05:44+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7643.diff", "html_url": "https://github.com/huggingface/datasets/pull/7643", "merged_at": "2025-06-25T17:05:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/7643.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7643" }
Useful to still get `isinstance(Sequence(Value("int64")), Sequence)` for downstream libs like `evaluate`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7643/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7643/timeline
null
null
null
null
true
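One common way to keep `isinstance()` working once a class becomes a factory for a new type is a metaclass `__instancecheck__`; a self-contained sketch of the idea, not necessarily the mechanism this PR uses:

```python
class List:
    """Stand-in for the new feature type."""
    def __init__(self, feature):
        self.feature = feature

class _SequenceMeta(type):
    def __instancecheck__(cls, instance) -> bool:
        # Legacy compat: report List objects as Sequence instances too.
        return type.__instancecheck__(cls, instance) or isinstance(instance, List)

class Sequence(metaclass=_SequenceMeta):
    def __new__(cls, feature):
        # Factory behavior: constructing Sequence(...) yields the new List type.
        return List(feature)

print(isinstance(Sequence("int64"), Sequence))  # True, via __instancecheck__
```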
https://api.github.com/repos/huggingface/datasets/issues/7642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7642/comments
https://api.github.com/repos/huggingface/datasets/issues/7642/events
https://github.com/huggingface/datasets/pull/7642
3,176,025,890
PR_kwDODunzps6cE_Wr
7,642
fix length for ci
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2025-06-25 15:10:38+00:00
2025-06-25 15:11:53+00:00
2025-06-25 15:11:51+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7642.diff", "html_url": "https://github.com/huggingface/datasets/pull/7642", "merged_at": "2025-06-25T15:11:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/7642.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7642" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7642/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7642/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7641/comments
https://api.github.com/repos/huggingface/datasets/issues/7641/events
https://github.com/huggingface/datasets/pull/7641
3,175,953,405
PR_kwDODunzps6cEwUl
7,641
update docs and docstrings
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-06-25 14:48:58+00:00
2025-06-25 14:51:46+00:00
2025-06-25 14:49:33+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7641.diff", "html_url": "https://github.com/huggingface/datasets/pull/7641", "merged_at": "2025-06-25T14:49:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/7641.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7641" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7641/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7641/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7640/comments
https://api.github.com/repos/huggingface/datasets/issues/7640/events
https://github.com/huggingface/datasets/pull/7640
3,175,914,924
PR_kwDODunzps6cEofU
7,640
better features repr
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-06-25 14:37:32+00:00
2025-06-25 14:46:47+00:00
2025-06-25 14:46:45+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7640.diff", "html_url": "https://github.com/huggingface/datasets/pull/7640", "merged_at": "2025-06-25T14:46:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/7640.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7640" }
following the addition of List in #7634 before: ```python In [3]: ds.features Out[3]: {'json': {'id': Value(dtype='string', id=None), 'metadata:transcript': [{'end': Value(dtype='float64', id=None), 'start': Value(dtype='float64', id=None), 'transcript': Value(dtype='string', id=None), 'words': [{'end': Value(dtype='float64', id=None), 'score': Value(dtype='float64', id=None), 'start': Value(dtype='float64', id=None), 'word': Value(dtype='string', id=None)}]}], 'metadata:vad': [{'end': Value(dtype='float64', id=None), 'start': Value(dtype='float64', id=None)}]}, 'mp4': Value(dtype='binary', id=None), 'npz': {'boxes_and_keypoints:box': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'boxes_and_keypoints:is_valid_box': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'boxes_and_keypoints:keypoints': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'movement:EmotionArousalToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:EmotionValenceToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:FAUToken': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:FAUValue': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:alignment_head_rotation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:alignment_translation': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'movement:emotion_arousal': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:emotion_scores': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:emotion_valence': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:expression': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:frame_latent': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:gaze_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:head_encodings': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:hypernet_features': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'movement:is_valid': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'smplh:body_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'smplh:global_orient': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), 'smplh:is_valid': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'smplh:left_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'smplh:right_hand_pose': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None), 'smplh:translation': Sequence(feature=Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), length=-1, id=None)}, 'wav': Audio(sampling_rate=None, mono=True, decode=True, id=None), '__key__': Value(dtype='string', id=None), '__url__': Value(dtype='string', id=None)} ``` after: ```python In [3]: ds.features Out[3]: {'json': {'id': Value('string'), 'metadata:transcript': List({'end': Value('float64'), 'start': Value('float64'), 'transcript': Value('string'), 'words': List({'end': Value('float64'), 'score': Value('float64'), 'start': Value('float64'), 'word': Value('string')})}), 'metadata:vad': List({'end': Value('float64'), 'start': Value('float64')})}, 'mp4': Value('binary'), 'npz': {'boxes_and_keypoints:box': List(List(Value('float32'))), 'boxes_and_keypoints:is_valid_box': List(Value('bool')), 'boxes_and_keypoints:keypoints': List(List(List(Value('float32')))), 'movement:EmotionArousalToken': List(List(Value('float32'))), 'movement:EmotionValenceToken': List(List(Value('float32'))), 'movement:FAUToken': List(List(Value('float32'))), 'movement:FAUValue': List(List(Value('float32'))), 'movement:alignment_head_rotation': List(List(Value('float32'))), 'movement:alignment_translation': List(List(List(Value('float32')))), 'movement:emotion_arousal': List(List(Value('float32'))), 'movement:emotion_scores': List(List(Value('float32'))), 'movement:emotion_valence': List(List(Value('float32'))), 'movement:expression': List(List(Value('float32'))), 'movement:frame_latent': List(List(Value('float32'))), 'movement:gaze_encodings': List(List(Value('float32'))), 'movement:head_encodings': List(List(Value('float32'))), 'movement:hypernet_features': List(List(Value('float32'))), 'movement:is_valid': List(List(Value('float32'))), 'smplh:body_pose': List(List(List(Value('float32')))), 'smplh:global_orient': List(List(Value('float32'))), 'smplh:is_valid': List(Value('bool')), 'smplh:left_hand_pose': List(List(List(Value('float32')))), 'smplh:right_hand_pose': List(List(List(Value('float32')))), 'smplh:translation': List(List(Value('float32')))}, 'wav': Audio(sampling_rate=None, decode=True, stream_index=None), '__key__': Value('string'), '__url__': Value('string')} ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7640/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7640/timeline
null
null
null
null
true
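The compact form amounts to a `__repr__` that omits defaulted fields; a toy illustration of the before/after difference (not the library's actual code):

```python
from dataclasses import dataclass

@dataclass(repr=False)
class Value:
    dtype: str
    id: object = None

    def __repr__(self) -> str:
        # Compact repr: show only the dtype, omitting defaulted fields.
        return f"Value({self.dtype!r})"

print(Value("float32"))  # Value('float32') instead of Value(dtype='float32', id=None)
```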
https://api.github.com/repos/huggingface/datasets/issues/7639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7639/comments
https://api.github.com/repos/huggingface/datasets/issues/7639/events
https://github.com/huggingface/datasets/pull/7639
3,175,616,169
PR_kwDODunzps6cDoAf
7,639
fix save_infos
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-06-25 13:16:26+00:00
2025-06-25 13:19:33+00:00
2025-06-25 13:16:33+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7639.diff", "html_url": "https://github.com/huggingface/datasets/pull/7639", "merged_at": "2025-06-25T13:16:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/7639.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7639" }
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7639/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7639/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7638/comments
https://api.github.com/repos/huggingface/datasets/issues/7638/events
https://github.com/huggingface/datasets/pull/7638
3,172,645,391
PR_kwDODunzps6b5vpZ
7,638
Add ignore_decode_errors option to Image feature for robust decoding #7612
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
4
2025-06-24 16:47:51+00:00
2025-07-04 07:07:30+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7638.diff", "html_url": "https://github.com/huggingface/datasets/pull/7638", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7638.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7638" }
This PR implements support for robust image decoding in the `Image` feature, as discussed in issue #7612. ## 🔧 What was added - A new boolean field: `ignore_decode_errors` (default: `False`) - If set to `True`, any exceptions during decoding will be caught, and `None` will be returned instead of raising an error ```python features = Features({ "image": Image(decode=True, ignore_decode_errors=True), }) ```` This enables robust iteration over potentially corrupted datasets — especially useful when streaming datasets like WebDataset or image-heavy public sets where sample corruption is common. ## 🧪 Behavior * If `ignore_decode_errors=False` (default), decoding behaves exactly as before * If `True`, decoding errors are caught, and a warning is emitted: ``` [Image.decode_example] Skipped corrupted image: ... ``` ## 🧵 Linked issue Closes #7612 Let me know if you'd like a follow-up test PR. Happy to write one!
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7638/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7638/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7637/comments
https://api.github.com/repos/huggingface/datasets/issues/7637/events
https://github.com/huggingface/datasets/issues/7637
3,171,883,522
I_kwDODunzps69DxoC
7,637
Introduce subset_name as an alias of config_name
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
4
2025-06-24 12:49:01+00:00
2025-07-01 16:08:33+00:00
NaT
MEMBER
null
null
null
null
### Feature request

Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).

### Motivation

The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically called `config_name` in the datasets library. This inconsistency has caused confusion for many users, especially those unfamiliar with the internal terminology. I have repeatedly received questions from users trying to understand what "config" means, and why it doesn’t match what they see as "subset" on the Hub.

Renaming everything to `subset_name` might be too disruptive, but introducing `subset_name` as a clear alias for `config_name` could significantly improve user experience without breaking backward compatibility.

This change would:
- Align terminology across the Hub UI and datasets codebase
- Reduce user confusion, especially for newcomers
- Make documentation and examples more intuitive
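A hedged sketch of what the alias could look like at the call site; `subset_name` is the proposal here, not an existing argument, and the wrapper function is hypothetical.

```python
# Sketch: accept subset_name as an alias for the existing name/config
# argument of load_dataset. Not part of the library today.
from datasets import load_dataset

def load_dataset_with_subset(path, subset_name=None, config_name=None, **kwargs):
    if subset_name is not None and config_name is not None:
        raise ValueError("Pass either subset_name or config_name, not both.")
    name = subset_name if subset_name is not None else config_name
    return load_dataset(path, name=name, **kwargs)

# e.g. load_dataset_with_subset("nyu-mll/glue", subset_name="mrpc")
```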
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/7637/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7637/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7636/comments
https://api.github.com/repos/huggingface/datasets/issues/7636/events
https://github.com/huggingface/datasets/issues/7636
3,170,878,167
I_kwDODunzps68_8LX
7,636
"open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable"
{ "avatar_url": "https://avatars.githubusercontent.com/u/51187979?v=4", "events_url": "https://api.github.com/users/kuanyan9527/events{/privacy}", "followers_url": "https://api.github.com/users/kuanyan9527/followers", "following_url": "https://api.github.com/users/kuanyan9527/following{/other_user}", "gists_url": "https://api.github.com/users/kuanyan9527/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kuanyan9527", "id": 51187979, "login": "kuanyan9527", "node_id": "MDQ6VXNlcjUxMTg3OTc5", "organizations_url": "https://api.github.com/users/kuanyan9527/orgs", "received_events_url": "https://api.github.com/users/kuanyan9527/received_events", "repos_url": "https://api.github.com/users/kuanyan9527/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kuanyan9527/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kuanyan9527/subscriptions", "type": "User", "url": "https://api.github.com/users/kuanyan9527", "user_view_type": "public" }
[]
open
false
null
[]
null
4
2025-06-24 08:09:39+00:00
2025-07-10 04:13:16+00:00
NaT
NONE
null
null
null
null
When I run the following code, an error occurs: "TypeError: argument of type 'module' is not iterable"

```python
print("open" in globals()["__builtins__"])
```

```
Traceback (most recent call last):
  File "./main.py", line 2, in <module>
    print("open" in globals()["__builtins__"])
          ^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'module' is not iterable
```

But this code runs fine inside datasets, and I don't understand why: [src/datasets/utils/patching.py#L96](https://github.com/huggingface/datasets/blob/3.6.0/src/datasets/utils/patching.py#L96)
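The difference comes from a documented CPython detail: in the `__main__` module, `__builtins__` is the `builtins` module itself, while inside imported modules it is usually that module's dict. The datasets patching code runs inside an imported module, where the membership test works. A small sketch that behaves correctly in both contexts:

```python
# Normalizing __builtins__ before the membership test works whether it
# is the builtins module (script run as __main__) or its dict (imported
# module), per the CPython documentation.
import builtins

b = globals()["__builtins__"]
namespace = b if isinstance(b, dict) else vars(builtins)
print("open" in namespace)  # True
```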
{ "avatar_url": "https://avatars.githubusercontent.com/u/51187979?v=4", "events_url": "https://api.github.com/users/kuanyan9527/events{/privacy}", "followers_url": "https://api.github.com/users/kuanyan9527/followers", "following_url": "https://api.github.com/users/kuanyan9527/following{/other_user}", "gists_url": "https://api.github.com/users/kuanyan9527/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kuanyan9527", "id": 51187979, "login": "kuanyan9527", "node_id": "MDQ6VXNlcjUxMTg3OTc5", "organizations_url": "https://api.github.com/users/kuanyan9527/orgs", "received_events_url": "https://api.github.com/users/kuanyan9527/received_events", "repos_url": "https://api.github.com/users/kuanyan9527/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kuanyan9527/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kuanyan9527/subscriptions", "type": "User", "url": "https://api.github.com/users/kuanyan9527", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7636/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7636/timeline
null
reopened
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7635/comments
https://api.github.com/repos/huggingface/datasets/issues/7635/events
https://github.com/huggingface/datasets/pull/7635
3,170,486,408
PR_kwDODunzps6bybOe
7,635
Fix: Preserve float columns in JSON loader when values are integer-like (e.g. 0.0, 1.0)
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-24 06:16:48+00:00
2025-06-24 06:16:48+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7635.diff", "html_url": "https://github.com/huggingface/datasets/pull/7635", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7635.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7635" }
This PR fixes a bug in the JSON loader where columns containing float values like `[0.0, 1.0, 2.0]` were being implicitly coerced to `int`, due to pandas or Arrow type inference. This caused issues downstream in statistics computation (e.g., dataset-viewer) where such columns were incorrectly labeled as `"int"` instead of `"float"`.

### 🔍 What was happening:

When the JSON loader falls back to `pandas_read_json()` (after `pa.read_json()` fails), pandas/Arrow can coerce float values to integers if all values are integer-like (e.g., `0.0 == 0`).

### ✅ What this PR does:

- Adds a check in the fallback path of `_generate_tables()`
- Ensures that columns made entirely of floats are preserved as `"float64"` even if they are integer-like (e.g. `0.0`, `1.0`)
- This prevents loss of float semantics when creating the Arrow table

### 🧪 Reproducible Example:

```json
[{"col": 0.0}, {"col": 1.0}, {"col": 2.0}]
```

Previously loaded as:
* `int`

Now correctly loaded as:
* `float`

Fixes #6937
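A runnable illustration of the coercion and the kind of dtype restoration the PR describes; the exact detection heuristic inside `_generate_tables()` is not reproduced here.

```python
# Illustrative only: pandas may infer int64 for integer-like floats when
# reading JSON; casting back to float64 preserves the float semantics.
import io
import pandas as pd
import pyarrow as pa

raw = '[{"col": 0.0}, {"col": 1.0}, {"col": 2.0}]'
df = pd.read_json(io.StringIO(raw))
print(df["col"].dtype)  # often int64 due to inference

if pd.api.types.is_integer_dtype(df["col"]):
    df["col"] = df["col"].astype("float64")

print(pa.Table.from_pandas(df).schema)  # col: double
```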
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7635/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7635/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7634/comments
https://api.github.com/repos/huggingface/datasets/issues/7634/events
https://github.com/huggingface/datasets/pull/7634
3,169,389,653
PR_kwDODunzps6buyij
7,634
Replace Sequence by List
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-06-23 20:35:48+00:00
2025-06-25 13:59:13+00:00
2025-06-25 13:59:11+00:00
MEMBER
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7634.diff", "html_url": "https://github.com/huggingface/datasets/pull/7634", "merged_at": "2025-06-25T13:59:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/7634.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7634" }
`Sequence` is just a utility that we need to keep for backward compatibility, and `[ ]` was used instead but doesn't allow passing the length of the list. This PR removes most mentions of `Sequence` and usage of `[ ]`, and defines a proper `List` type instead.

before: `Sequence(Value("int64"))` or `[Value("int64")]`
now: `List(Value("int64"))`

This PR conserves full backward compatibility, and the release of 4.0.0 is a good occasion for it.
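A short usage sketch of the new type, assuming datasets 4.0 where `List` is exposed at the top level and accepts the `length` argument that the `[ ]` syntax lacked:

```python
# Sketch assuming datasets>=4.0, where List replaces Sequence/[ ].
from datasets import Features, Value, List

features = Features({
    "scores": List(Value("int64")),                    # variable-length list
    "embedding": List(Value("float32"), length=128),   # fixed-length list
})
print(features)
```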
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7634/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7634/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7633/comments
https://api.github.com/repos/huggingface/datasets/issues/7633/events
https://github.com/huggingface/datasets/issues/7633
3,168,399,637
I_kwDODunzps682fEV
7,633
Proposal: Small Tamil Discourse Coherence Dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/66418501?v=4", "events_url": "https://api.github.com/users/bikkiNitSrinagar/events{/privacy}", "followers_url": "https://api.github.com/users/bikkiNitSrinagar/followers", "following_url": "https://api.github.com/users/bikkiNitSrinagar/following{/other_user}", "gists_url": "https://api.github.com/users/bikkiNitSrinagar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bikkiNitSrinagar", "id": 66418501, "login": "bikkiNitSrinagar", "node_id": "MDQ6VXNlcjY2NDE4NTAx", "organizations_url": "https://api.github.com/users/bikkiNitSrinagar/orgs", "received_events_url": "https://api.github.com/users/bikkiNitSrinagar/received_events", "repos_url": "https://api.github.com/users/bikkiNitSrinagar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bikkiNitSrinagar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bikkiNitSrinagar/subscriptions", "type": "User", "url": "https://api.github.com/users/bikkiNitSrinagar", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-23 14:24:40+00:00
2025-06-23 14:24:40+00:00
NaT
NONE
null
null
null
null
I’m a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages.

- Size: 50 samples
- Format: CSV with columns (text1, text2, label)
- Use case: Training NLP models for coherence

I’ll use GitHub’s web editor and Google Colab. Please confirm if this fits.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7633/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7633/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7632/comments
https://api.github.com/repos/huggingface/datasets/issues/7632/events
https://github.com/huggingface/datasets/issues/7632
3,168,283,589
I_kwDODunzps682CvF
7,632
Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/37377515?v=4", "events_url": "https://api.github.com/users/ganiket19/events{/privacy}", "followers_url": "https://api.github.com/users/ganiket19/followers", "following_url": "https://api.github.com/users/ganiket19/following{/other_user}", "gists_url": "https://api.github.com/users/ganiket19/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ganiket19", "id": 37377515, "login": "ganiket19", "node_id": "MDQ6VXNlcjM3Mzc3NTE1", "organizations_url": "https://api.github.com/users/ganiket19/orgs", "received_events_url": "https://api.github.com/users/ganiket19/received_events", "repos_url": "https://api.github.com/users/ganiket19/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ganiket19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ganiket19/subscriptions", "type": "User", "url": "https://api.github.com/users/ganiket19", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
2
2025-06-23 13:49:24+00:00
2025-07-08 06:52:53+00:00
NaT
NONE
null
null
null
null
### Feature request

Currently, when using `dataset.cast_column("image", Image(decode=True))`, the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples are common.

Reference:
- https://discuss.huggingface.co/t/handle-errors-when-loading-images-404-corrupted-etc/50318/5
- https://discuss.huggingface.co/t/handling-non-existing-url-in-image-dataset-while-cast-column/69185

**Proposed feature**

Introduce a mechanism (e.g., a `continue_on_error=True` flag or a global error handling mode) in `Image(decode=True)` that:
- Skips invalid images and sets them as `None`, or
- Logs the error but allows the rest of the dataset to be processed without interruption.

**Example usage**

```python
from datasets import load_dataset, Image

dataset = load_dataset("my_dataset")
dataset = dataset.cast_column("image", Image(decode=True, continue_on_error=True))
```

**Benefits**
- Ensures robust large-scale image dataset processing.
- Improves developer productivity by avoiding custom retry/error-handling code.
- Aligns with best practices in dataset preprocessing pipelines that tolerate minor data corruption.

**Potential implementation options**
- Internally wrap the decoding in a try/except block.
- Return `None` or a placeholder on failure.
- Optionally allow custom error callbacks or logging.

### Motivation

- Robustness: Large-scale image datasets often contain a small fraction of corrupt files or unreachable URLs. Halting on the first error forces users to write custom workarounds or preprocess externally.
- Simplicity: A built-in flag removes boilerplate try/except logic around every decode step.
- Performance: Skipping invalid samples inline is more efficient than a two-pass approach (filter then decode).

### Your contribution

1. API change: extend `datasets.features.Image(decode=True)` to accept `continue_on_error: bool = False`.
2. Behavior:
   - If `continue_on_error=False` (default), maintain current behavior: any decode error raises an exception.
   - If `continue_on_error=True`, wrap decode logic in try/except: on success, store the decoded image; on failure, log a warning (e.g., via `logging.warning`) and set the field to `None` (or a sentinel value).
3. Optional enhancements:
   - Allow a callback hook: `Image(decode=True, continue_on_error=True, on_error=lambda idx, url, exc: ...)`
   - Emit metrics or counts of skipped images.
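Until such a flag exists, a two-pass workaround along these lines is possible (slower than the proposed inline skip, as noted above). `Image(decode=False)` yields dicts with `bytes` and `path` keys; the dataset name is only an example.

```python
# Two-pass workaround: verify decodability first, then decode the survivors.
import io
from datasets import load_dataset, Image
from PIL import Image as PILImage

ds = load_dataset("beans", split="train")
ds = ds.cast_column("image", Image(decode=False))  # raw refs, no decoding

def is_decodable(example):
    ref = example["image"]
    try:
        data = ref["bytes"] if ref["bytes"] is not None else open(ref["path"], "rb").read()
        PILImage.open(io.BytesIO(data)).verify()  # cheap integrity check
        return True
    except Exception:
        return False

clean = ds.filter(is_decodable).cast_column("image", Image(decode=True))
```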
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7632/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7632/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7631/comments
https://api.github.com/repos/huggingface/datasets/issues/7631/events
https://github.com/huggingface/datasets/pull/7631
3,165,127,657
PR_kwDODunzps6bgwOB
7,631
Pass user-agent from DownloadConfig into fsspec storage_options
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
1
2025-06-21 14:22:25+00:00
2025-06-21 14:25:28+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7631.diff", "html_url": "https://github.com/huggingface/datasets/pull/7631", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7631.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7631" }
Fixes part of issue #6046

### Problem

The `user-agent` defined in `DownloadConfig` was not passed down to fsspec-based filesystems like `HfFileSystem`, which prevents proper identification/tracking of client requests.

### Solution

Added support for injecting the `user-agent` into `storage_options["headers"]` within `_prepare_single_hop_path_and_storage_options()` based on the `protocol`. Now, when using `hf://`, `http://`, or `https://`, the custom user-agent is passed automatically.

### Code Location

Modified:
- `src/datasets/utils/file_utils.py`

Used `get_datasets_user_agent(...)` to ensure proper formatting and fallback logic.
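The merge itself can be pictured as a small pure function; this is a sketch of the idea, not the exact patch, and the helper name is made up.

```python
# Sketch: merge a user-agent header into fsspec storage options for
# HTTP-like protocols, mirroring the injection described above.
def inject_user_agent(protocol: str, storage_options: dict, user_agent: str) -> dict:
    if protocol in ("hf", "http", "https"):
        headers = dict(storage_options.get("headers", {}))
        headers.setdefault("user-agent", user_agent)
        storage_options = {**storage_options, "headers": headers}
    return storage_options

opts = inject_user_agent("https", {}, "datasets/3.6.0; python/3.12")
print(opts)  # {'headers': {'user-agent': 'datasets/3.6.0; python/3.12'}}
```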
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7631/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7631/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7630/comments
https://api.github.com/repos/huggingface/datasets/issues/7630/events
https://github.com/huggingface/datasets/issues/7630
3,164,650,900
I_kwDODunzps68oL2U
7,630
[bug] resume from ckpt skips samples if .map is applied
{ "avatar_url": "https://avatars.githubusercontent.com/u/23004953?v=4", "events_url": "https://api.github.com/users/felipemello1/events{/privacy}", "followers_url": "https://api.github.com/users/felipemello1/followers", "following_url": "https://api.github.com/users/felipemello1/following{/other_user}", "gists_url": "https://api.github.com/users/felipemello1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/felipemello1", "id": 23004953, "login": "felipemello1", "node_id": "MDQ6VXNlcjIzMDA0OTUz", "organizations_url": "https://api.github.com/users/felipemello1/orgs", "received_events_url": "https://api.github.com/users/felipemello1/received_events", "repos_url": "https://api.github.com/users/felipemello1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/felipemello1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felipemello1/subscriptions", "type": "User", "url": "https://api.github.com/users/felipemello1", "user_view_type": "public" }
[]
open
false
null
[]
null
2
2025-06-21 01:50:03+00:00
2025-06-29 07:51:32+00:00
NaT
NONE
null
null
null
null
### Describe the bug

resume from ckpt skips samples if .map is applied

Maybe related: https://github.com/huggingface/datasets/issues/7538

### Steps to reproduce the bug

```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node

# Create dataset with map transformation
def create_dataset():
    ds = Dataset.from_dict({"id": list(range(100))})
    ds = ds.to_iterable_dataset(num_shards=4)
    ds = ds.map(lambda x: x)  # comment it out to get desired behavior
    ds = split_dataset_by_node(ds, rank=0, world_size=2)
    return ds

ds = create_dataset()

# Iterate and save checkpoint after 10 samples
it = iter(ds)
for idx, sample in enumerate(it):
    if idx == 9:  # Checkpoint after 10 samples
        checkpoint = ds.state_dict()
        print(f"Checkpoint saved at sample: {sample['id']}")
        break

# Continue with original iterator
original_next_samples = []
for idx, sample in enumerate(it):
    original_next_samples.append(sample["id"])
    if idx >= 4:
        break

# Resume from checkpoint
ds_new = create_dataset()
ds_new.load_state_dict(checkpoint)

# Get samples from resumed iterator
it_new = iter(ds_new)
resumed_next_samples = []
for idx, sample in enumerate(it_new):
    resumed_next_samples.append(sample["id"])
    if idx >= 4:
        break

print(f"\nExpected next samples: {original_next_samples}")
print(f"Actual next samples: {resumed_next_samples}")
print(
    f"\n❌ BUG: {resumed_next_samples[0] - original_next_samples[0]} samples were skipped!"
)
```

With map:

```
Checkpoint saved at sample: 9

Expected next samples: [10, 11, 12, 13, 14]
Actual next samples: [50, 51, 52, 53, 54]

❌ BUG: 40 samples were skipped!
```

### Expected behavior

Without map:

```
Expected next samples: [10, 11, 12, 13, 14]
Actual next samples: [10, 11, 12, 13, 14]

❌ BUG: 0 samples were skipped!
```

### Environment info

datasets == 3.6.0
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7630/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7630/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7629/comments
https://api.github.com/repos/huggingface/datasets/issues/7629/events
https://github.com/huggingface/datasets/pull/7629
3,161,169,782
PR_kwDODunzps6bTc7b
7,629
Add test for `as_iterable_dataset()` method in DatasetBuilder
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-19 19:23:55+00:00
2025-06-19 19:23:55+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7629.diff", "html_url": "https://github.com/huggingface/datasets/pull/7629", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7629.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7629" }
This PR adds a test for the new `as_iterable_dataset()` method introduced in PR #7628.

The test:
- Loads a builder using `load_dataset_builder("c4", "en")`
- Runs `download_and_prepare()`
- Streams examples using `builder.as_iterable_dataset(split="train[:100]")`
- Verifies streamed examples contain the "text" field

This ensures that the builder correctly streams data from cached Arrow files.
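A self-contained sketch of such a test, swapping the heavyweight `c4` download for a tiny local JSON fixture; the `as_iterable_dataset()` call assumes the API proposed in #7628.

```python
# pytest-style sketch; runnable only once the method from PR #7628 exists.
import json
from datasets import load_dataset_builder

def test_as_iterable_dataset(tmp_path):
    # Build a tiny JSON-lines fixture so the test is self-contained.
    data_file = tmp_path / "data.jsonl"
    data_file.write_text("\n".join(json.dumps({"text": f"doc {i}"}) for i in range(10)))

    builder = load_dataset_builder(
        "json", data_files={"train": str(data_file)}, cache_dir=str(tmp_path / "cache")
    )
    builder.download_and_prepare()

    examples = list(builder.as_iterable_dataset(split="train"))
    assert len(examples) == 10
    assert all("text" in ex for ex in examples)
```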
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7629/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7629/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7628/comments
https://api.github.com/repos/huggingface/datasets/issues/7628/events
https://github.com/huggingface/datasets/pull/7628
3,161,156,461
PR_kwDODunzps6bTaGk
7,628
Add `as_iterable_dataset()` method to DatasetBuilder for streaming from cached Arrow files
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
0
2025-06-19 19:15:41+00:00
2025-06-19 19:15:41+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7628.diff", "html_url": "https://github.com/huggingface/datasets/pull/7628", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7628.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7628" }
This PR implements `builder.as_iterable_dataset(split=...)` as discussed in #5481.

It allows users to load an `IterableDataset` directly from cached Arrow files (using `ArrowReader` and `ArrowExamplesIterable`), without loading the full dataset into memory. This is useful for large-scale training scenarios where memory is constrained.

A test has also been added in `test_builder.py`.

Related to: #5481
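A hedged usage sketch of the proposed method; "imdb" is an arbitrary example, and the method itself is what this PR adds, so it is not in released versions.

```python
# Sketch assuming the as_iterable_dataset() API from this PR.
from datasets import load_dataset_builder

builder = load_dataset_builder("imdb")
builder.download_and_prepare()

# Stream from the cached Arrow files instead of materializing in memory.
ids = builder.as_iterable_dataset(split="train")
for example in ids:
    print(example["text"][:50])
    break
```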
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7628/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7628/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7627/comments
https://api.github.com/repos/huggingface/datasets/issues/7627/events
https://github.com/huggingface/datasets/issues/7627
3,160,544,390
I_kwDODunzps68YhSG
7,627
Creating a HF Dataset from lakeFS with S3 storage takes too much time!
{ "avatar_url": "https://avatars.githubusercontent.com/u/118734142?v=4", "events_url": "https://api.github.com/users/Thunderhead-exe/events{/privacy}", "followers_url": "https://api.github.com/users/Thunderhead-exe/followers", "following_url": "https://api.github.com/users/Thunderhead-exe/following{/other_user}", "gists_url": "https://api.github.com/users/Thunderhead-exe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Thunderhead-exe", "id": 118734142, "login": "Thunderhead-exe", "node_id": "U_kgDOBxO9Pg", "organizations_url": "https://api.github.com/users/Thunderhead-exe/orgs", "received_events_url": "https://api.github.com/users/Thunderhead-exe/received_events", "repos_url": "https://api.github.com/users/Thunderhead-exe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Thunderhead-exe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Thunderhead-exe/subscriptions", "type": "User", "url": "https://api.github.com/users/Thunderhead-exe", "user_view_type": "public" }
[]
closed
false
null
[]
null
1
2025-06-19 14:28:41+00:00
2025-06-23 12:39:10+00:00
2025-06-23 12:39:10+00:00
NONE
null
null
null
null
Hi,

I’m new to HF datasets and I tried to create a dataset based on data versioned in **lakeFS** _(with a **MinIO** S3 bucket as storage backend)_.

Here I’m using ±30,000 PIL images from the MNIST data, yet it takes around 12 minutes to execute, which is a lot! From what I understand, it loads the images into the cache and then builds the dataset.

Please find below the execution screenshot. Is there a way to optimize this, or am I doing something wrong?

Thanks!

![Image](https://github.com/user-attachments/assets/c79257c8-f023-42a9-9e6f-0898b3ea93fe)
{ "avatar_url": "https://avatars.githubusercontent.com/u/118734142?v=4", "events_url": "https://api.github.com/users/Thunderhead-exe/events{/privacy}", "followers_url": "https://api.github.com/users/Thunderhead-exe/followers", "following_url": "https://api.github.com/users/Thunderhead-exe/following{/other_user}", "gists_url": "https://api.github.com/users/Thunderhead-exe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Thunderhead-exe", "id": 118734142, "login": "Thunderhead-exe", "node_id": "U_kgDOBxO9Pg", "organizations_url": "https://api.github.com/users/Thunderhead-exe/orgs", "received_events_url": "https://api.github.com/users/Thunderhead-exe/received_events", "repos_url": "https://api.github.com/users/Thunderhead-exe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Thunderhead-exe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Thunderhead-exe/subscriptions", "type": "User", "url": "https://api.github.com/users/Thunderhead-exe", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7627/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7627/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7626/comments
https://api.github.com/repos/huggingface/datasets/issues/7626/events
https://github.com/huggingface/datasets/pull/7626
3,159,322,138
PR_kwDODunzps6bNMuF
7,626
feat(map): reuse unchanged columns when input_columns specified to reduce disk usage (#6013)
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
null
[]
null
0
2025-06-19 07:41:45+00:00
2025-07-28 17:39:12+00:00
2025-07-28 17:39:12+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7626.diff", "html_url": "https://github.com/huggingface/datasets/pull/7626", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7626.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7626" }
## Summary

This PR addresses [#6013](https://github.com/huggingface/datasets/issues/6013) by reusing unchanged columns from the original dataset in the `map()` method when `input_columns` is specified.

## What’s Implemented

- Injected logic at the end of `Dataset.map()` to:
  - Identify untouched columns not in `input_columns` or `remove_columns`
  - Select those columns from the original dataset
  - Concatenate them with the transformed result using `pyarrow.concat_tables`

## Example Behavior

```python
ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})
ds2 = ds.map(lambda x: {"c": x["a"] + 10}, input_columns=["a"], remove_columns=["a"])
print(ds2.column_names)  # Output: ['b', 'c']
```

Column `b` is reused from the original dataset.

## Notes

* This keeps disk usage and caching minimal by avoiding full dataset duplication.
* Only triggered when `input_columns` is set.

---

cc @lhoestq @mariosasko for review 🙂
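The core reuse idea can be shown in plain pyarrow; this is a sketch of the principle, not the internal implementation.

```python
# Sketch: attach an untouched column from the original table to the
# transformed result without round-tripping the data through Python.
import pyarrow as pa

original = pa.table({"a": [1, 2], "b": [3, 4]})
transformed = pa.table({"c": [11, 12]})  # result of mapping over column "a"

combined = transformed.append_column("b", original.column("b"))
print(combined.column_names)  # ['c', 'b']
```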
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7626/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7626/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7625/comments
https://api.github.com/repos/huggingface/datasets/issues/7625/events
https://github.com/huggingface/datasets/pull/7625
3,159,016,001
PR_kwDODunzps6bMKof
7,625
feat: Add h5folder dataset loader for HDF5 support
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
open
false
null
[]
null
3
2025-06-19 05:39:10+00:00
2025-06-26 05:44:26+00:00
NaT
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7625.diff", "html_url": "https://github.com/huggingface/datasets/pull/7625", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/7625.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7625" }
### Related Issue

Closes #3113

### What does this PR do?

This PR introduces a new dataset loader module called **`h5folder`** to support loading datasets stored in **HDF5 (.h5)** format. It allows users to do:

```python
from datasets import load_dataset

dataset = load_dataset("h5folder", data_dir="path/to/")
```

### 🧩 Design Overview

* Implemented inside `datasets/packaged_modules/h5folder/h5folder.py`
* Based on the `GeneratorBasedBuilder` API
* Uses `h5py` to read HDF5 files and yield examples
* Expects datasets such as `id`, `data`, and `label` inside `data.h5`
* Converts numpy arrays to Python types before yielding

### 🧪 Example `.h5` Structure (for local testing)

```python
import h5py
import numpy as np

with h5py.File("data.h5", "w") as f:
    f.create_dataset("id", data=np.arange(100))
    f.create_dataset("data", data=np.random.randn(100, 10))
    f.create_dataset("label", data=np.random.randint(0, 2, size=100))
```

### ✅ Testing

- The loader logic follows the structure of existing modules like `imagefolder`
- Will rely on Hugging Face CI to validate integration
- Manual testing planned once merged or during feedback

### 📁 Files Added

* `datasets/src/datasets/packaged_modules/h5folder/h5folder.py`

### 📌 Component(s) Affected

* `area/datasets`
* `area/load`

### 📦 Release Note Classification

* `rn/feature` – Adds support for loading `.h5` datasets via `load_dataset("h5folder", ...)`

---

Let me know if any changes or improvements are needed — happy to iterate. Thanks for reviewing!
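A sketch of the generator logic such a builder could use, following the `id`/`data`/`label` layout assumed above; this is not the PR's exact code.

```python
# Sketch: yield (key, example) pairs from an HDF5 file, converting
# numpy values to plain Python types as the PR describes.
import h5py

def generate_examples(filepath):
    with h5py.File(filepath, "r") as f:
        for i in range(len(f["id"])):
            yield i, {
                "id": int(f["id"][i]),
                "data": f["data"][i].tolist(),
                "label": int(f["label"][i]),
            }

# Usage (assuming data.h5 was created as shown above):
# for key, example in generate_examples("data.h5"):
#     print(key, example["label"])
```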
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7625/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7625/timeline
null
null
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/7624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7624/comments
https://api.github.com/repos/huggingface/datasets/issues/7624/events
https://github.com/huggingface/datasets/issues/7624
3,156,136,624
I_kwDODunzps68HtKw
7,624
#Dataset Make "image" column appear first in dataset preview UI
{ "avatar_url": "https://avatars.githubusercontent.com/u/98875217?v=4", "events_url": "https://api.github.com/users/jcerveto/events{/privacy}", "followers_url": "https://api.github.com/users/jcerveto/followers", "following_url": "https://api.github.com/users/jcerveto/following{/other_user}", "gists_url": "https://api.github.com/users/jcerveto/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jcerveto", "id": 98875217, "login": "jcerveto", "node_id": "U_kgDOBeS3UQ", "organizations_url": "https://api.github.com/users/jcerveto/orgs", "received_events_url": "https://api.github.com/users/jcerveto/received_events", "repos_url": "https://api.github.com/users/jcerveto/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jcerveto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcerveto/subscriptions", "type": "User", "url": "https://api.github.com/users/jcerveto", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-06-18 09:25:19+00:00
2025-06-20 07:46:43+00:00
2025-06-20 07:46:43+00:00
NONE
null
null
null
null
Hi! #Dataset

I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the `"image"` column appear as the first column in the dataset card preview UI on the :hugs: Hub. However, at the moment, the `"image"` column is not the first—in fact, it appears last, which is not ideal for the presentation I’d like to achieve.

I have a couple of questions:
- Is there a way to force the dataset card to display the `"image"` column first?
- Is there currently any way to control or influence the column order in the dataset preview UI?
- Does the order of keys in the .jsonl file or the features argument affect the display order?

Thanks again for your time and help! :blush:
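One workaround that may help, since the viewer generally follows the dataset's column order: reorder the columns before pushing. The file and repository names below are placeholders.

```python
# Sketch: put the "image" column first, then push the reordered dataset.
from datasets import load_dataset

ds = load_dataset("json", data_files="data.jsonl", split="train")

ordered = ["image"] + [c for c in ds.column_names if c != "image"]
ds = ds.select_columns(ordered)  # select_columns preserves the given order
print(ds.column_names)
# ds.push_to_hub("username/my-dataset")
```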
{ "avatar_url": "https://avatars.githubusercontent.com/u/98875217?v=4", "events_url": "https://api.github.com/users/jcerveto/events{/privacy}", "followers_url": "https://api.github.com/users/jcerveto/followers", "following_url": "https://api.github.com/users/jcerveto/following{/other_user}", "gists_url": "https://api.github.com/users/jcerveto/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jcerveto", "id": 98875217, "login": "jcerveto", "node_id": "U_kgDOBeS3UQ", "organizations_url": "https://api.github.com/users/jcerveto/orgs", "received_events_url": "https://api.github.com/users/jcerveto/received_events", "repos_url": "https://api.github.com/users/jcerveto/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jcerveto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcerveto/subscriptions", "type": "User", "url": "https://api.github.com/users/jcerveto", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7624/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7624/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/7623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7623/comments
https://api.github.com/repos/huggingface/datasets/issues/7623/events
https://github.com/huggingface/datasets/pull/7623
3,154,519,684
PR_kwDODunzps6a9Jk5
7,623
fix: raise error in FolderBasedBuilder when data_dir and data_files are missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4", "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}", "followers_url": "https://api.github.com/users/ArjunJagdale/followers", "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}", "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArjunJagdale", "id": 142811259, "login": "ArjunJagdale", "node_id": "U_kgDOCIMgew", "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs", "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events", "repos_url": "https://api.github.com/users/ArjunJagdale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions", "type": "User", "url": "https://api.github.com/users/ArjunJagdale", "user_view_type": "public" }
[]
closed
false
null
[]
null
2
2025-06-17 19:16:34+00:00
2025-06-18 14:18:41+00:00
2025-06-18 14:18:41+00:00
CONTRIBUTOR
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/7623.diff", "html_url": "https://github.com/huggingface/datasets/pull/7623", "merged_at": "2025-06-18T14:18:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/7623.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/7623" }
### Related Issues/PRs

Fixes #6152

---

### What changes are proposed in this pull request?

This PR adds a dedicated validation check in the `_info()` method of the `FolderBasedBuilder` class to ensure that users provide either `data_dir` or `data_files` when loading folder-based datasets (such as `audiofolder`, `imagefolder`, etc.).

---

### Why this change?

Previously, when calling:

```python
load_dataset("audiofolder")
```

without specifying `data_dir` or `data_files`, the loader would silently fall back to the **current working directory**, leading to:

* Long loading times
* Unexpected behavior (e.g., scanning unrelated files)

This behavior was discussed in issue #6152. As suggested by maintainers, the fix has now been implemented directly inside the `FolderBasedBuilder._info()` method — keeping the logic localized to the specific builder instead of a generic loader function.

---

### How is this PR tested?

* ✅ Manually tested by calling `load_dataset("audiofolder")` with no `data_dir` or `data_files` → a `ValueError` is now raised early.
* ✅ Existing functionality (with valid input) remains unaffected.

---

### Does this PR require documentation update?

* [x] No

---

### Release Notes

#### Is this a user-facing change?

* [x] Yes

> Folder-based datasets now raise an explicit error if neither `data_dir` nor `data_files` are specified, preventing unintended fallback to the current working directory.

---

#### What component(s) does this PR affect?

* [x] `area/datasets`
* [x] `area/load`

---

#### How should the PR be classified?

* [x] `rn/bug-fix` - A user-facing bug fix

---

#### Should this be included in the next patch release?

* [x] Yes
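The check itself is small; a standalone sketch mirroring the behavior described (the real change lives inside `FolderBasedBuilder._info()`, and the helper name here is made up):

```python
# Sketch of the early validation this PR adds, as a standalone function.
def validate_data_source(data_dir, data_files):
    if data_dir is None and data_files is None:
        raise ValueError(
            "At least one of data_dir or data_files must be specified, e.g. "
            "load_dataset('audiofolder', data_dir='path/to/folder')."
        )

try:
    validate_data_source(None, None)
except ValueError as err:
    print(err)  # raised early instead of scanning the working directory
```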
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7623/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7623/timeline
null
null
null
null
true