Schema (one row per field; string and list columns show length or class statistics):

| field | dtype | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.18B |
| number | int64 | 1 | 7.65k |
| title | stringlengths | 1 | 290 |
| state | stringclasses | 2 values | |
| body | stringlengths | 0 | 228k |
| is_pull_request | bool | 1 class | |
| created_at | stringdate | 2020-04-14 10:18:02 | 2025-06-26 12:23:48 |
| updated_at | stringdate | 2020-04-27 16:04:17 | 2025-06-26 14:02:38 |
| closed_at | stringlengths | 20 | 20 |
| user_login | stringlengths | 3 | 26 |
| author_association | stringclasses | 4 values | |
| pr_url | stringlengths | 46 | 49 |
| pr_merged_at | stringlengths | 20 | 20 |
| comments_count | int64 | 0 | 70 |
| reactions_total | int64 | 0 | 61 |
| reactions_plus1 | int64 | 0 | 39 |
| reactions_heart | int64 | 0 | 22 |
| draft | bool | 2 classes | |
| locked | bool | 1 class | |
| labels | listlengths | 0 | 4 |
| html_url | stringlengths | 46 | 51 |
| is_pr_url | bool | 2 classes | |
| comments | listlengths | 0 | 30 |
#7442 · Flexible Loader (issue, open, id 2905543017)
### Feature request Can we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset? It can be something as simple as this one: ``` def load_hf_dataset(path_or_name): if os.path.exists(path_or_name): return load_from_disk(path_or_name) ...
  author: dipta007 (NONE) · created: 2025-03-09T16:55:03Z · updated: 2025-03-27T23:58:17Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 3 · reactions: total 0, +1: 0, heart: 0 · labels: ["enhancement"]
  html_url: https://github.com/huggingface/datasets/issues/7442
  comments:
[ "Ideally `save_to_disk` should save in a format compatible with load_dataset, wdyt ?", "> Ideally `save_to_disk` should save in a format compatible with load_dataset, wdyt ?\n\nThat would be perfect if not at least a flexible loader.", "@lhoestq For now, you can use this small utility library: [nanoml](https://...
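The helper proposed in this issue can be sketched as follows. To keep the sketch self-contained and testable without the `datasets` package, the two loaders are passed in as callables; in real use they would be `datasets.load_from_disk` and `datasets.load_dataset`. A minimal sketch, assuming local-path existence is the dispatch criterion, as in the issue body:

```python
import os
from typing import Any, Callable

def load_hf_dataset(
    path_or_name: str,
    load_from_disk: Callable[[str], Any],
    load_dataset: Callable[[str], Any],
) -> Any:
    # Dispatch on whether the argument is an existing local path (a dataset
    # written with save_to_disk) or a Hub dataset name.
    if os.path.exists(path_or_name):
        return load_from_disk(path_or_name)
    return load_dataset(path_or_name)
```

In practice one would partially apply the real loaders, e.g. `load_hf_dataset(p, datasets.load_from_disk, datasets.load_dataset)`.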
#7441 · `drop_last_batch` does not drop the last batch using IterableDataset + interleave_datasets + multi_worker (issue, open, id 2904702329)
### Describe the bug See the script below `drop_last_batch=True` is defined using map() for each dataset. The last batch for each dataset is expected to be dropped, id 21-25. The code behaves as expected when num_workers=0 or 1. When using num_workers>1, 'a-11', 'b-11', 'a-12', 'b-12' are gone and instead 21 and 22 a...
  author: memray (NONE) · created: 2025-03-08T10:28:44Z · updated: 2025-03-09T21:27:33Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 2 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7441
  comments:
[ "Hi @memray, I’d like to help fix the issue with `drop_last_batch` not working when `num_workers > 1`. I’ll investigate and propose a solution. Thanks!\n", "Thank you very much for offering to help! I also noticed a problem related to a previous issue and left a comment [here](https://github.com/huggingface/datas...
#7440 · IterableDataset raises FileNotFoundError instead of retrying (issue, open, id 2903740662)
### Describe the bug In https://github.com/huggingface/datasets/issues/6843 it was noted that the streaming feature of `datasets` is highly susceptible to outages and doesn't back off for long (or even *at all*). I was training a model while streaming SlimPajama and training crashed with a `FileNotFoundError`. I can ...
  author: bauwenst (NONE) · created: 2025-03-07T19:14:18Z · updated: 2025-04-17T23:40:35Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 6 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7440
  comments:
[ "I have since been training more models with identical architectures over the same dataset, and it is completely unstable. One has now failed at chunk9/1215, whilst others have gotten past that.\n```python\nFileNotFoundError: zstd://example_train_1215.jsonl::hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d55119...
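As a stopgap while retry behavior is discussed, `datasets` exposes streaming retry knobs on its config module. A config-style sketch; the attribute names are taken from `datasets/config.py` but should be treated as assumptions and checked against your installed version, and the values shown are illustrative:

```python
import datasets.config

# Make streaming reads more tolerant of transient Hub outages
# (assumed attribute names; defaults may differ across datasets versions).
datasets.config.STREAMING_READ_MAX_RETRIES = 20    # retries per failed read
datasets.config.STREAMING_READ_RETRY_INTERVAL = 5  # seconds between retries
```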
#7439 · Fix multi gpu process example (PR, closed, id 2900143289)
to is not an inplace function. But i am not sure about this code anyway, i think this is modifying the global variable `model` everytime the function is called? Which is on every batch? So it is juggling the same model on every gpu right? Isnt that very inefficient?
  author: SwayStar123 (NONE) · created: 2025-03-06T11:29:19Z · updated: 2025-03-06T17:07:28Z · closed_at: 2025-03-06T17:06:38Z
  pr_url: https://github.com/huggingface/datasets/pull/7439 · pr_merged_at: null · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7439
  comments:
[ "Okay nevermind looks like to works both ways for models. but my doubt still remains, isnt this changing the device of the model every batch?" ]
#7438 · Allow dataset row indexing with np.int types (#7423) (PR, open, id 2899209484)
@lhoestq Proposed fix for #7423. Added a couple simple tests as requested. I had some test failures related to Java and pyspark even when installing with dev but these don't seem to be related to the changes here and fail for me even on clean main. The typeerror raised when using the wrong type is: "Wrong key type...
  author: DavidRConnell (NONE) · created: 2025-03-06T03:10:43Z · updated: 2025-03-06T03:10:43Z · closed_at: null
  pr_url: https://github.com/huggingface/datasets/pull/7438 · pr_merged_at: null · draft: false · locked: false · is_pull_request: true
  comments_count: 0 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7438
  comments:
[]
#7437 · Use pyupgrade --py39-plus for remaining files (PR, open, id 2899104679)
This work follows #7428. And "requires-python" is set in pyproject.toml
  author: cyyever (CONTRIBUTOR) · created: 2025-03-06T02:12:25Z · updated: 2025-04-15T14:47:54Z · closed_at: null
  pr_url: https://github.com/huggingface/datasets/pull/7437 · pr_merged_at: null · draft: false · locked: false · is_pull_request: true
  comments_count: 0 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7437
  comments:
[]
#7436 · chore: fix typos (PR, closed, id 2898385725)
  author: afuetterer (CONTRIBUTOR) · created: 2025-03-05T20:17:54Z · updated: 2025-04-28T14:00:09Z · closed_at: 2025-04-28T13:51:26Z
  pr_url: https://github.com/huggingface/datasets/pull/7436 · pr_merged_at: 2025-04-28T13:51:26Z · draft: false · locked: false · is_pull_request: true
  comments_count: 0 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7436
  body: (empty)
  comments:
[]
#7435 · Refactor `string_to_dict` to return `None` if there is no match instead of raising `ValueError` (PR, closed, id 2895536956)
Making this change, as encouraged here: * https://github.com/huggingface/datasets/pull/7434#discussion_r1979933054 instead of having the pattern of using `try`-`except` to handle when there is no match, we can instead check if the return value is `None`; we can also assert that the return value should not be `Non...
  author: ringohoffman (NONE) · created: 2025-03-04T22:01:20Z · updated: 2025-03-12T16:52:00Z · closed_at: 2025-03-12T16:52:00Z
  pr_url: https://github.com/huggingface/datasets/pull/7435 · pr_merged_at: 2025-03-12T16:51:59Z · draft: false · locked: false · is_pull_request: true
  comments_count: 8 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7435
  comments:
[ "cc: @lhoestq ", "I am going to rebase #7434 onto this branch. Then we can merge this one first if you approve, and then #7434.", "@lhoestq any thoughts here?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7435). All of your documentation changes will be reflected on ...
#7434 · Refactor `Dataset.map` to reuse cache files mapped with different `num_proc` (PR, closed, id 2893075908)
Fixes #7433 This refactor unifies `num_proc is None or num_proc == 1` and `num_proc > 1`; instead of handling them completely separately where one uses a list of kwargs and shards and the other just uses a single set of kwargs and `self`, by wrapping the `num_proc == 1` case in a list and making the difference just ...
  author: ringohoffman (NONE) · created: 2025-03-04T06:12:37Z · updated: 2025-05-14T10:45:10Z · closed_at: 2025-05-12T15:14:08Z
  pr_url: https://github.com/huggingface/datasets/pull/7434 · pr_merged_at: 2025-05-12T15:14:08Z · draft: false · locked: false · is_pull_request: true
  comments_count: 10 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7434
  comments:
[ "@lhoestq please let me know what you think about this.", "It looks like I can't change the merge target to #7435, so it will look like there is a bunch of extra stuff until #7435 is in main.", "@lhoestq Thanks so much for reviewing #7435! Now that that's merged, I think this PR is ready!! Can you kick off CI w...
#7433 · `Dataset.map` ignores existing caches and remaps when ran with different `num_proc` (issue, closed, id 2890240400)
### Describe the bug If you `map` a dataset and save it to a specific `cache_file_name` with a specific `num_proc`, and then call map again with that same existing `cache_file_name` but a different `num_proc`, the dataset will be re-mapped. ### Steps to reproduce the bug 1. Download a dataset ```python import datase...
  author: ringohoffman (NONE) · created: 2025-03-03T05:51:26Z · updated: 2025-05-12T15:14:09Z · closed_at: 2025-05-12T15:14:09Z
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 2 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7433
  comments:
[ "This feels related: https://github.com/huggingface/datasets/issues/3044", "@lhoestq This comment specifically, I agree:\n\n* https://github.com/huggingface/datasets/issues/3044#issuecomment-1239877570\n\n> Almost a year later and I'm in a similar boat. Using custom fingerprints and when using multiprocessing the...
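The mechanism behind this bug can be illustrated with a toy cache key (the names here are purely hypothetical; `datasets` actually derives cache file names from fingerprints and suffix templates): if the worker count leaks into the key, the same transform run with a different `num_proc` misses the cache.

```python
import hashlib
import json

def cache_key(fingerprint: str, num_proc: int, include_num_proc: bool) -> str:
    # Toy model of a map() cache key. Including num_proc (the buggy behavior)
    # makes otherwise-identical runs produce different keys; excluding it
    # (the fix direction in #7434) lets them reuse the same cache files.
    payload = {"fingerprint": fingerprint}
    if include_num_proc:
        payload["num_proc"] = num_proc
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
```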
#7432 · Fix type annotation (PR, closed, id 2887717289)
  author: NeilGirdhar (CONTRIBUTOR) · created: 2025-02-28T17:28:20Z · updated: 2025-03-04T15:53:03Z · closed_at: 2025-03-04T15:53:03Z
  pr_url: https://github.com/huggingface/datasets/pull/7432 · pr_merged_at: null · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7432
  body: (empty)
  comments:
[ "Thanks ! There is https://github.com/huggingface/datasets/pull/7426 already that fixes the issue, I'm closing your PR if you don't mind" ]
#7431 · Issues with large Datasets (issue, open, id 2887244074)
### Describe the bug If the coco annotation file is too large the dataset will not be able to load it, not entirely sure were the issue is but I am guessing it is due to the code trying to load it all as one line into a dataframe. This was for object detections. My current work around is the following code but would ...
  author: nikitabelooussovbtis (NONE) · created: 2025-02-28T14:05:22Z · updated: 2025-03-04T15:02:26Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 4 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7431
  comments:
[ "what's the error message ?", "This was the final error message that it was giving pyarrow.lib.ArrowInvalid: JSON parse error: Column() changed from object to string in row 0", "Here is the list of errors:\n\nTraceback (most recent call last):\n File \".venv/lib/python3.12/site-packages/datasets/packaged_modul...
#7430 · Error in code "Time to slice and dice" from course "NLP Course" (issue, closed, id 2886922573)
### Describe the bug When we execute code ``` frequencies = ( train_df["condition"] .value_counts() .to_frame() .reset_index() .rename(columns={"index": "condition", "condition": "frequency"}) ) frequencies.head() ``` answer should be like this condition | frequency birth control | 27655 dep...
  author: Yurkmez (NONE) · created: 2025-02-28T11:36:10Z · updated: 2025-03-05T11:32:47Z · closed_at: 2025-03-03T17:52:15Z
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 2 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7430
  comments:
[ "You should open an issue in the NLP course website / github page. I'm closing this issue if you don't mind", "ok, i don't mind, i'll mark the error there" ]
#7429 · Improved type annotation (PR, open, id 2886806513)
I've refined several type annotations throughout the codebase to align with current best practices and enhance overall clarity. Given the complexity of the code, there may still be areas that need further attention. I welcome any feedback or suggestions to make these improvements even better. - Fixes #7202
  author: saiden89 (NONE) · created: 2025-02-28T10:39:10Z · updated: 2025-05-15T12:27:17Z · closed_at: null
  pr_url: https://github.com/huggingface/datasets/pull/7429 · pr_merged_at: null · draft: false · locked: false · is_pull_request: true
  comments_count: 3 · reactions: total 2, +1: 0, heart: 2 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7429
  comments:
[ "@lhoestq Could someone please take a quick look or let me know if there’s anything I should change? Thanks!", "could you fix the conflicts ? I think some type annotations have been improved since your first commit", "It should be good now.\r\nI'm happy to add more annotations or refine further if needed—just ...
#7428 · Use pyupgrade --py39-plus (PR, closed, id 2886111651)
  author: cyyever (CONTRIBUTOR) · created: 2025-02-28T03:39:44Z · updated: 2025-03-22T00:51:20Z · closed_at: 2025-03-05T15:04:16Z
  pr_url: https://github.com/huggingface/datasets/pull/7428 · pr_merged_at: 2025-03-05T15:04:16Z · draft: false · locked: false · is_pull_request: true
  comments_count: 3 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7428
  body: (empty)
  comments:
[ "Hi ! can you run `make style` to fix code formatting ?", "@lhoestq Fixed", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7428). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7427 · Error splitting the input into NAL units. (issue, open, id 2886032571)
### Describe the bug I am trying to finetune qwen2.5-vl on 16 * 80G GPUS, and I use `LLaMA-Factory` and set `preprocessing_num_workers=16`. However, I met the following error and the program seem to got crush. It seems that the error come from `datasets` library The error logging is like following: ```text Convertin...
  author: MengHao666 (NONE) · created: 2025-02-28T02:30:15Z · updated: 2025-03-04T01:40:28Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 2 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7427
  comments:
[ "First time I see this error :/ maybe it's an issue with your version of `multiprocess` and `dill` ? Make sure they are compatible with `datasets`", "> First time I see this error :/ maybe it's an issue with your version of `multiprocess` and `dill` ? Make sure they are compatible with `datasets`\n\nany recommend...
#7426 · fix: None default with bool type on load creates typing error (PR, closed, id 2883754507)
Hello! Pyright flags any use of `load_dataset` as an error, because the default for `trust_remote_code` is `None`, but the function is typed as `bool`, not `Optional[bool]`. I changed the type and docstrings to reflect this, but no other code was touched.
  author: stephantul (CONTRIBUTOR) · created: 2025-02-27T08:11:36Z · updated: 2025-03-04T15:53:40Z · closed_at: 2025-03-04T15:53:40Z
  pr_url: https://github.com/huggingface/datasets/pull/7426 · pr_merged_at: 2025-03-04T15:53:40Z · draft: false · locked: false · is_pull_request: true
  comments_count: 0 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7426
  comments:
[]
#7425 · load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") TypeError: 'NoneType' object is not callable (issue, open, id 2883684686)
### Describe the bug from datasets import load_dataset lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") or configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True) both error: Traceback (most recent call last): File "", line 1, in File...
  author: dshwei (NONE) · created: 2025-02-27T07:36:02Z · updated: 2025-03-27T05:05:33Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 10 · reactions: total 1, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7425
  comments:
[ "> datasets\n\nHi, have you solved this bug? Today I also met the same problem about `livecodebench/code_generation_lite` when evaluating the `Open-R1` repo. I am looking forward to your reply!\n\n![Image](https://github.com/user-attachments/assets/02e92fbf-da33-41b3-b8d4-f79b293a54f1)", "Hey guys,\nI tried to re...
#7424 · Faster folder based builder + parquet support + allow repeated media + use torchvideo (PR, closed, id 2882663621)
This will be useful for LeRobotDataset (robotics datasets for [lerobot](https://github.com/huggingface/lerobot) based on videos) Impacted builders: - ImageFolder - AudioFolder - VideoFolder Improvements: - faster to stream (got a 5x speed up on an image dataset) - improved RAM usage - support for metadata.p...
  author: lhoestq (MEMBER) · created: 2025-02-26T19:55:18Z · updated: 2025-03-05T18:51:00Z · closed_at: 2025-03-05T17:41:23Z
  pr_url: https://github.com/huggingface/datasets/pull/7424 · pr_merged_at: 2025-03-05T17:41:22Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 2, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7424
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7424). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7423 · Row indexing a dataset with numpy integers (issue, open, id 2879271409)
### Feature request Allow indexing datasets with a scalar numpy integer type. ### Motivation Indexing a dataset with a scalar numpy.int* object raises a TypeError. This is due to the test in `datasets/formatting/formatting.py:key_to_query_type` ``` python def key_to_query_type(key: Union[int, slice, range, str, Ite...
  author: DavidRConnell (NONE) · created: 2025-02-25T18:44:45Z · updated: 2025-03-03T17:55:24Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: ["enhancement"]
  html_url: https://github.com/huggingface/datasets/issues/7423
  comments:
[ "Would be cool to be consistent when it comes to indexing with numpy objects, if we do accept numpy arrays we should indeed accept numpy integers. Your idea sounds reasonable, I'd also be in favor of adding a simple test as well" ]
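The fix direction discussed here amounts to widening the `int` check to also accept `np.integer` scalars. A simplified sketch of the dispatch in `datasets/formatting/formatting.py` (the real function's signature and branch order differ; this only illustrates the widened check):

```python
from collections.abc import Iterable

import numpy as np

def key_to_query_type(key) -> str:
    # Simplified dispatch: accept numpy integer scalars alongside plain ints.
    if isinstance(key, (int, np.integer)):
        return "row"
    if isinstance(key, str):
        return "column"
    if isinstance(key, (slice, range)) or isinstance(key, Iterable):
        return "batch"
    raise TypeError(f"Wrong key type: '{key}' of type '{type(key)}'")
```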
#7421 · DVC integration broken (issue, open, id 2878369052)
### Describe the bug The DVC integration seems to be broken. Followed this guide: https://dvc.org/doc/user-guide/integrations/huggingface ### Steps to reproduce the bug #### Script to reproduce ~~~python from datasets import load_dataset dataset = load_dataset( "csv", data_files="dvc://workshop/satellite-d...
  author: maxstrobel (NONE) · created: 2025-02-25T13:14:31Z · updated: 2025-03-03T17:42:02Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7421
  comments:
[ "Unfortunately `url` is a reserved argument in `fsspec.url_to_fs`, so ideally file system implementations like DVC should use another argument name to avoid this kind of errors" ]
#7420 · better correspondence between cached and saved datasets created using from_generator (issue, open, id 2876281928)
### Feature request At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular is to use `save_to_disk` which needs to create a...
  author: vttrifonov (CONTRIBUTOR) · created: 2025-02-24T22:14:37Z · updated: 2025-02-26T03:10:22Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 0 · reactions: total 0, +1: 0, heart: 0 · labels: ["enhancement"]
  html_url: https://github.com/huggingface/datasets/issues/7420
  comments:
[]
#7419 · Import order crashes script execution (issue, open, id 2875635320)
### Describe the bug Hello, I'm trying to convert an HF dataset into a TFRecord so I'm importing `tensorflow` and `datasets` to do so. Depending in what order I'm importing those librairies, my code hangs forever and is unkillable (CTRL+C doesn't work, I need to kill my shell entirely). Thank you for your help 🙏 ...
  author: DamienMatias (NONE) · created: 2025-02-24T17:03:43Z · updated: 2025-02-24T17:03:43Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 0 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7419
  comments:
[]
#7418 · pyarrow.lib.arrowinvalid: cannot mix list and non-list, non-null values with map function (issue, open, id 2868701471)
### Describe the bug Encounter pyarrow.lib.arrowinvalid error with map function in some example when loading the dataset ### Steps to reproduce the bug ``` from datasets import load_dataset from PIL import Image, PngImagePlugin dataset = load_dataset("leonardPKU/GEOQA_R1V_Train_8K") system_prompt="You are a helpful...
  author: alexxchen (NONE) · created: 2025-02-21T10:58:06Z · updated: 2025-02-25T15:26:46Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 4 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7418
  comments:
[ "@lhoestq ", "Can you try passing text: None for the image object ? Pyarrow expects all the objects to have the exact same type, in particular the dicttionaries in \"content\" should all have the keys \"type\" and \"text\"", "The following modification on system prompt works, but it is different from the usual ...
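The maintainer's suggestion (give every `content` entry the same key set, e.g. `"text": None` on image entries) can be applied with a small normalizer before `map`. A sketch; the key set is assumed from the chat-message format in the issue:

```python
def normalize_content(content, keys=("type", "text", "image")):
    # Give every dict in a message's "content" list an identical key set,
    # filling missing keys with None, so pyarrow infers one struct type
    # instead of mixing incompatible schemas across rows.
    return [{k: entry.get(k) for k in keys} for entry in content]
```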
#7417 · set dev version (PR, closed, id 2866868922)
  author: lhoestq (MEMBER) · created: 2025-02-20T17:45:29Z · updated: 2025-02-20T17:47:50Z · closed_at: 2025-02-20T17:45:36Z
  pr_url: https://github.com/huggingface/datasets/pull/7417 · pr_merged_at: 2025-02-20T17:45:36Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7417
  body: (empty)
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7417). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7416 · Release: 3.3.2 (PR, closed, id 2866862143)
  author: lhoestq (MEMBER) · created: 2025-02-20T17:42:11Z · updated: 2025-02-20T17:44:35Z · closed_at: 2025-02-20T17:43:28Z
  pr_url: https://github.com/huggingface/datasets/pull/7416 · pr_merged_at: 2025-02-20T17:43:28Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7416
  body: (empty)
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7416). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7415 · Shard Dataset at specific indices (issue, open, id 2865774546)
I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk` how can I provide indices where it's possible to shard the dataset such that no episode spans more than 1 shard. Consequently, when I run `Dataset.load_from...
  author: nikonikolov (NONE) · created: 2025-02-20T10:43:10Z · updated: 2025-02-24T11:06:45Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 3 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7415
  comments:
[ "Hi ! if it's an option I'd suggest to have one sequence per row instead.\n\nOtherwise you'd have to make your own save/load mechanism", "Saving one sequence per row is very difficult and heavy and makes all the optimizations pointless. How would a custom save/load mechanism look like?", "You can use `pyarrow` ...
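Following the maintainer's pointer to a custom `pyarrow`-based save, the core of such a mechanism is choosing shard boundaries that respect episodes. A sketch of a greedy splitter (function and parameter names are hypothetical, not part of `datasets`):

```python
def shard_boundaries(episode_lengths, max_rows_per_shard):
    # Greedy split: keep every episode within a single shard, starting a new
    # shard once adding the next episode would exceed the target row count.
    # (A lone episode larger than the target still gets its own shard.)
    shards, current, size = [], [], 0
    for length in episode_lengths:
        if current and size + length > max_rows_per_shard:
            shards.append(current)
            current, size = [], 0
        current.append(length)
        size += length
    if current:
        shards.append(current)
    return shards
```

Each inner list could then be written as one parquet file with `pyarrow.parquet.write_table`.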
#7414 · Gracefully cancel async tasks (PR, closed, id 2863798756)
  author: lhoestq (MEMBER) · created: 2025-02-19T16:10:58Z · updated: 2025-02-20T14:12:26Z · closed_at: 2025-02-20T14:12:23Z
  pr_url: https://github.com/huggingface/datasets/pull/7414 · pr_merged_at: 2025-02-20T14:12:23Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7414
  body: (empty)
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7414). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7413 · Documentation on multiple media files of the same type with WebDataset (issue, open, id 2860947582)
The [current documentation](https://huggingface.co/docs/datasets/en/video_dataset) on a creating a video dataset includes only examples with one media file and one json. It would be useful to have examples where multiple files of the same type are included. For example, in a sign language dataset, you may have a base v...
  author: DCNemesis (NONE) · created: 2025-02-18T16:13:20Z · updated: 2025-02-20T14:17:54Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7413
  comments:
[ "Yes this is correct and it works with huggingface datasets as well ! Feel free to include an example here: https://github.com/huggingface/datasets/blob/main/docs/source/video_dataset.mdx" ]
#7412 · Index Error Invalid Ket is out of bounds for size 0 for code-search-net/code_search_net dataset (issue, open, id 2859433710)
### Describe the bug I am trying to do model pruning on sentence-transformers/all-mini-L6-v2 for the code-search-net/code_search_net dataset using INCTrainer class However I am getting below error ``` raise IndexError(f"Invalid Key: {key is our of bounds for size {size}") IndexError: Invalid key: 1840208 is out of b...
  author: harshakhmk (NONE) · created: 2025-02-18T05:58:33Z · updated: 2025-02-18T06:42:07Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 0 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7412
  comments:
[]
#7411 · Attempt to fix multiprocessing hang by closing and joining the pool before termination (PR, closed, id 2858993390)
https://github.com/huggingface/datasets/issues/6393 has plagued me on and off for a very long time. I have had various workarounds (one time combining two filter calls into one filter call removed the issue, another time making rank 0 go first resolved a cache race condition, one time i think upgrading the version of s...
  author: dakinggg (CONTRIBUTOR) · created: 2025-02-17T23:58:03Z · updated: 2025-02-19T21:11:24Z · closed_at: 2025-02-19T13:40:32Z
  pr_url: https://github.com/huggingface/datasets/pull/7411 · pr_merged_at: 2025-02-19T13:40:32Z · draft: false · locked: false · is_pull_request: true
  comments_count: 3 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7411
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7411). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks for the fix! We have been affected by this a lot when we try to use LLM Foundry ...
#7410 · Set dev version (PR, closed, id 2858085707)
  author: lhoestq (MEMBER) · created: 2025-02-17T14:54:39Z · updated: 2025-02-17T14:56:58Z · closed_at: 2025-02-17T14:54:56Z
  pr_url: https://github.com/huggingface/datasets/pull/7410 · pr_merged_at: 2025-02-17T14:54:56Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7410
  body: (empty)
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7410). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7409 · Release: 3.3.1 (PR, closed, id 2858079508)
  author: lhoestq (MEMBER) · created: 2025-02-17T14:52:12Z · updated: 2025-02-17T14:54:32Z · closed_at: 2025-02-17T14:53:13Z
  pr_url: https://github.com/huggingface/datasets/pull/7409 · pr_merged_at: 2025-02-17T14:53:13Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7409
  body: (empty)
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7409). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7408 · Fix filter speed regression (PR, closed, id 2858012313)
close https://github.com/huggingface/datasets/issues/7404
  author: lhoestq (MEMBER) · created: 2025-02-17T14:25:32Z · updated: 2025-02-17T14:28:48Z · closed_at: 2025-02-17T14:28:46Z
  pr_url: https://github.com/huggingface/datasets/pull/7408 · pr_merged_at: 2025-02-17T14:28:46Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7408
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7408). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7407 · Update use_with_pandas.mdx: to_pandas() correction in last section (PR, closed, id 2856517442)
last section ``to_pandas()"
  author: ibarrien (CONTRIBUTOR) · created: 2025-02-17T01:53:31Z · updated: 2025-02-20T17:28:04Z · closed_at: 2025-02-20T17:28:04Z
  pr_url: https://github.com/huggingface/datasets/pull/7407 · pr_merged_at: 2025-02-20T17:28:04Z · draft: false · locked: false · is_pull_request: true
  comments_count: 0 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7407
  comments:
[]
#7406 · Adding Core Maintainer List to CONTRIBUTING.md (issue, closed, id 2856441206)
### Feature request I propose adding a core maintainer list to the `CONTRIBUTING.md` file. ### Motivation The Transformers and Liger-Kernel projects maintain lists of core maintainers for each module. However, the Datasets project doesn't have such a list. ### Your contribution I have nothing to add here.
  author: jp1924 (CONTRIBUTOR) · created: 2025-02-17T00:32:40Z · updated: 2025-03-24T10:57:54Z · closed_at: 2025-03-24T10:57:54Z
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 3 · reactions: total 0, +1: 0, heart: 0 · labels: ["enhancement"]
  html_url: https://github.com/huggingface/datasets/issues/7406
  comments:
[ "@lhoestq", "there is no per-module maintainer and the list is me alone nowadays ^^'", "@lhoestq \nOh... I feel for you. \nWhat are your criteria for choosing a core maintainer? \nIt seems like it's too much work for you to manage all this code by yourself.\n\nAlso, if you don't mind, can you check this PR for ...
#7405 · Lazy loading of environment variables (issue, open, id 2856372814)
### Describe the bug Loading a `.env` file after an `import datasets` call does not correctly use the environment variables. This is due the fact that environment variables are read at import time: https://github.com/huggingface/datasets/blob/de062f0552a810c52077543c1169c38c1f0c53fc/src/datasets/config.py#L155C1-L15...
  author: nikvaessen (NONE) · created: 2025-02-16T22:31:41Z · updated: 2025-02-17T15:17:18Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7405
  comments:
[ "Many python packages out there, including `huggingface_hub`, do load the environment variables on import.\nYou should `load_dotenv()` before importing the libraries.\n\nFor example you can move all you imports inside your `main()` function" ]
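The import-time capture described here, and the maintainer's advice to call `load_dotenv()` before importing the library, can be demonstrated without `datasets` at all. A minimal simulation (the `_DEMO_HF_TOKEN` variable name is made up to avoid touching real credentials):

```python
import os

def make_config():
    # Stands in for `import datasets`: the env var is read exactly once,
    # at "import" time, and cached on the class.
    class Config:
        HF_TOKEN = os.environ.get("_DEMO_HF_TOKEN")
    return Config

config = make_config()                    # "import" happens first...
os.environ["_DEMO_HF_TOKEN"] = "secret"   # ...so setting the variable afterwards
late = config.HF_TOKEN                    # has no effect: the cached value is None
```

Calling `load_dotenv()` (or setting the variable) before the import is the equivalent of calling `make_config()` after the assignment above.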
#7404 · Performance regression in `dataset.filter` (issue, closed, id 2856366207)
### Describe the bug We're filtering dataset of ~1M (small-ish) records. At some point in the code we do `dataset.filter`, before (including 3.2.0) it was taking couple of seconds, and now it takes 4 hours. We use 16 threads/workers, and stack trace at them look as follows: ``` Traceback (most recent call last): Fi...
  author: ttim (NONE) · created: 2025-02-16T22:19:14Z · updated: 2025-02-17T17:46:06Z · closed_at: 2025-02-17T14:28:48Z
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 3 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7404
  comments:
[ "Thanks for reporting, I'll fix the regression today", "I just released `datasets` 3.3.1 with a fix, let me know if it's good now :)", "@lhoestq it fixed the issue.\n\nThis was (very) fast, thank you very much!" ]
#7402 · Fix a typo in arrow_dataset.py (PR, closed, id 2855880858)
"in the feature" should be "in the future"
  author: jingedawang (CONTRIBUTOR) · created: 2025-02-16T04:52:02Z · updated: 2025-02-20T17:29:28Z · closed_at: 2025-02-20T17:29:28Z
  pr_url: https://github.com/huggingface/datasets/pull/7402 · pr_merged_at: 2025-02-20T17:29:28Z · draft: false · locked: false · is_pull_request: true
  comments_count: 0 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7402
  comments:
[]
#7401 · set dev version (PR, closed, id 2853260869)
  author: lhoestq (MEMBER) · created: 2025-02-14T10:17:03Z · updated: 2025-02-14T10:19:20Z · closed_at: 2025-02-14T10:17:13Z
  pr_url: https://github.com/huggingface/datasets/pull/7401 · pr_merged_at: 2025-02-14T10:17:13Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7401
  body: (empty)
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7401). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7399 · Synchronize parameters for various datasets (issue, open, id 2853098442)
### Describe the bug [IterableDatasetDict](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.IterableDatasetDict.map) map function is missing the `desc` parameter. You can see the equivalent map function for [Dataset here](https://huggingface.co/docs/datasets/v3.2.0/en/package_refe...
  author: grofte (NONE) · created: 2025-02-14T09:15:11Z · updated: 2025-02-19T11:50:29Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 2 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7399
  comments:
[ "Hi ! the `desc` parameter is only available for Dataset / DatasetDict for the progress bar of `map()``\n\nSince IterableDataset only runs the map functions when you iterate over the dataset, there is no progress bar and `desc` is useless. We could still add the argument for parity but it wouldn't be used for anyth...
#7398 · Release: 3.3.0 (PR, closed, id 2853097869)
  author: lhoestq (MEMBER) · created: 2025-02-14T09:15:03Z · updated: 2025-02-14T09:57:39Z · closed_at: 2025-02-14T09:57:37Z
  pr_url: https://github.com/huggingface/datasets/pull/7398 · pr_merged_at: 2025-02-14T09:57:37Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7398
  body: (empty)
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7398). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7397 · Kannada dataset(Conversations, Wikipedia etc) (PR, closed, id 2852829763)
  author: Likhith2612 (NONE) · created: 2025-02-14T06:53:03Z · updated: 2025-02-20T17:28:54Z · closed_at: 2025-02-20T17:28:53Z
  pr_url: https://github.com/huggingface/datasets/pull/7397 · pr_merged_at: null · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7397
  body: (empty)
  comments:
[ "Hi ! feel free to uplad the CSV on https://huggingface.co/datasets :)\r\n\r\nwe don't store the datasets' data in this github repository" ]
#7400 · 504 Gateway Timeout when uploading large dataset to Hugging Face Hub (issue, open, id 2853201277)
### Description I encountered consistent 504 Gateway Timeout errors while attempting to upload a large dataset (approximately 500GB) to the Hugging Face Hub. The upload fails during the process with a Gateway Timeout error. I will continue trying to upload. While it might succeed in future attempts, I wanted to report...
  author: hotchpotch (NONE) · created: 2025-02-14T02:18:35Z · updated: 2025-02-14T23:48:36Z · closed_at: null
  pr_url: null · pr_merged_at: null · draft: null · locked: false · is_pull_request: true
  comments_count: 4 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/issues/7400
  comments:
[ "I transferred to the `datasets` repository. Is there any retry mechanism in `datasets` @lhoestq ?\n\nAnother solution @hotchpotch if you want to get your dataset pushed to the Hub in a robust way is to save it to a local folder first and then use `huggingface-cli upload-large-folder` (see https://huggingface.co/do...
#7396 · Update README.md (PR, closed, id 2851716755)
  author: lhoestq (MEMBER) · created: 2025-02-13T17:44:36Z · updated: 2025-02-13T17:46:57Z · closed_at: 2025-02-13T17:44:51Z
  pr_url: https://github.com/huggingface/datasets/pull/7396 · pr_merged_at: 2025-02-13T17:44:51Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7396
  body: (empty)
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7396). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
#7395 · Update docs (PR, closed, id 2851575160)
- update min python version - replace canonical dataset names with new names - avoid examples with trust_remote_code
  author: lhoestq (MEMBER) · created: 2025-02-13T16:43:15Z · updated: 2025-02-13T17:20:32Z · closed_at: 2025-02-13T17:20:30Z
  pr_url: https://github.com/huggingface/datasets/pull/7395 · pr_merged_at: 2025-02-13T17:20:29Z · draft: false · locked: false · is_pull_request: true
  comments_count: 1 · reactions: total 0, +1: 0, heart: 0 · labels: []
  html_url: https://github.com/huggingface/datasets/pull/7395
  comments:
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7395). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,847,172,115
7,394
Using load_dataset with data_files and split arguments yields an error
open
### Describe the bug It seems the list of valid splits recorded by the package becomes incorrectly overwritten when using the `data_files` argument. If I run ```python from datasets import load_dataset load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl") ``` then I get the error ``` Va...
true
2025-02-12T04:50:11Z
2025-02-12T04:50:11Z
null
devon-research
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7394
false
[]
2,846,446,674
7,393
Optimized sequence encoding for scalars
closed
The change in https://github.com/huggingface/datasets/pull/3197 introduced redundant list-comprehensions when `obj` is a long sequence of scalars. This becomes a noticeable overhead when loading data from an `IterableDataset` in the function `_apply_feature_types_on_example` and can be eliminated by adding a check for ...
true
2025-02-11T20:30:44Z
2025-02-13T17:11:33Z
2025-02-13T17:11:32Z
lukasgd
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7393
2025-02-13T17:11:32Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7393
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7393). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,846,095,043
7,392
push_to_hub payload too large error when using large ClassLabel feature
open
### Describe the bug When using `datasets.DatasetDict.push_to_hub` an `HfHubHTTPError: 413 Client Error: Payload Too Large for url` is raised if the dataset contains a large `ClassLabel` feature. Even if the total size of the dataset is small. ### Steps to reproduce the bug ``` python import random import sys impor...
true
2025-02-11T17:51:34Z
2025-02-11T18:01:31Z
null
DavidRConnell
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7392
false
[ "See also <https://discuss.huggingface.co/t/datasetdict-push-to-hub-failing-with-payload-to-large/140083/8>\n" ]
2,845,184,764
7,391
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
open
Tried several versions of pyarrow; none of them worked.
true
2025-02-11T12:02:26Z
2025-02-11T12:02:26Z
null
LinXin04
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7391
false
[]
2,843,813,365
7,390
Re-add py.typed
open
### Feature request The motivation for removing py.typed no longer seems to apply. Would a solution like [this one](https://github.com/huggingface/huggingface_hub/pull/2752) work here? ### Motivation MyPy support is broken. As more type checkers come out, such as RedKnot, these may also be broken. It would be goo...
true
2025-02-10T22:12:52Z
2025-02-10T22:12:52Z
null
NeilGirdhar
CONTRIBUTOR
null
null
0
7
7
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7390
false
[]
2,843,592,606
7,389
Getting statistics about filtered examples
closed
@lhoestq wondering if the team has thought about this and if there are any recommendations? Currently when processing datasets some examples are bound to get filtered out, whether it's due to bad format, or length is too long, or any other custom filters that might be getting applied. Let's just focus on the filter by...
true
2025-02-10T20:48:29Z
2025-02-11T20:44:15Z
2025-02-11T20:44:13Z
jonathanasdf
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7389
false
[ "You can actually track a running sum in map() or filter() :)\n\n```python\nnum_filtered = 0\n\ndef f(x):\n global num_filtered\n condition = len(x[\"text\"]) < 1000\n if not condition:\n num_filtered += 1\n return condition\n\nds = ds.filter(f)\nprint(num_filtered)\n```\n\nand if you want to use...
2,843,188,499
7,388
OSError: [Errno 22] Invalid argument forbidden character
closed
### Describe the bug I'm on Windows and i'm trying to load a datasets but i'm having title error because files in the repository are named with charactere like < >which can't be in a name file. Could it be possible to load this datasets but removing those charactere ? ### Steps to reproduce the bug load_dataset("CAT...
true
2025-02-10T17:46:31Z
2025-02-11T13:42:32Z
2025-02-11T13:42:30Z
langflogit
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7388
false
[ "You can probably copy the dataset in your HF account and rename the files (without having to download them to your disk). Or alternatively feel free to open a Pull Request to this dataset with the renamed file", "Thank you, that will help me work around this problem" ]
2,841,228,048
7,387
Dynamic adjusting dataloader sampling weight
open
Hi, Thanks for your wonderful work! I'm wondering is there a way to dynamically adjust the sampling weight of each data in the dataset during training? Looking forward to your reply, thanks again.
true
2025-02-10T03:18:47Z
2025-03-07T14:06:54Z
null
whc688
NONE
null
null
3
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7387
false
[ "You mean based on a condition that has to be checked on-the-fly during training ? Otherwise if you know in advance after how many samples you need to change the sampling you can simply concatenate the two mixes", "Yes, like during training, if one data sample's prediction is consistently wrong, its sampling weig...
2,840,032,524
7,386
Add bookfolder Dataset Builder for Digital Book Formats
closed
### Feature request This feature proposes adding a new dataset builder called bookfolder to the datasets library. This builder would allow users to easily load datasets consisting of various digital book formats, including: AZW, AZW3, CB7, CBR, CBT, CBZ, EPUB, MOBI, and PDF. ### Motivation Currently, loading dataset...
true
2025-02-08T14:27:55Z
2025-02-08T14:30:10Z
2025-02-08T14:30:09Z
shikanime
NONE
null
null
1
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7386
false
[ "On second thought, probably not a good idea." ]
2,830,664,522
7,385
Make IterableDataset (optionally) resumable
open
### What does this PR do? This PR introduces a new `stateful` option to the `dataset.shuffle` method, which defaults to `False`. When enabled, this option allows for resumable shuffling of `IterableDataset` instances, albeit with some additional memory overhead. Key points: * All tests have passed * Docstrings ...
true
2025-02-04T15:55:33Z
2025-03-03T17:31:40Z
null
yzhangcs
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7385
null
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7385
true
[ "@lhoestq Hi again~ Just circling back on this\r\nWondering if there’s anything I can do to help move this forward. 🤗 \r\nThanks!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7385). All of your documentation changes will be reflected on that endpoint. The docs are avai...
2,828,208,828
7,384
Support async functions in map()
closed
e.g. to download images or call an inference API like HF or vLLM ```python import asyncio import random from datasets import Dataset async def f(x): await asyncio.sleep(random.random()) ds = Dataset.from_dict({"data": range(100)}) ds.map(f) # Map: 100%|█████████████████████████████| 100/100 [00:0...
true
2025-02-03T18:18:40Z
2025-02-13T14:01:13Z
2025-02-13T14:00:06Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7384
2025-02-13T14:00:06Z
2
3
0
3
false
false
[]
https://github.com/huggingface/datasets/pull/7384
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7384). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "example of what you can do with it:\r\n\r\n```python\r\nimport aiohttp\r\nfrom huggingf...
2,823,480,924
7,382
Add Pandas, PyArrow and Polars docs
closed
(also added the missing numpy docs and fixed a small bug in pyarrow formatting)
true
2025-01-31T13:22:59Z
2025-01-31T16:30:59Z
2025-01-31T16:30:57Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7382
2025-01-31T16:30:57Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7382
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7382). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,815,649,092
7,381
Iterating over values of a column in the IterableDataset
closed
### Feature request I would like to be able to iterate (and re-iterate if needed) over a column of an `IterableDataset` instance. The following example shows the supposed API: ```python def gen(): yield {"text": "Good", "label": 0} yield {"text": "Bad", "label": 1} ds = IterableDataset.from_generator(gen) tex...
true
2025-01-28T13:17:36Z
2025-05-22T18:00:04Z
2025-05-22T18:00:04Z
TopCoder2K
CONTRIBUTOR
null
null
11
1
1
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7381
false
[ "I'd be in favor of that ! I saw many people implementing their own iterables that wrap a dataset just to iterate on a single column, that would make things more practical.\n\nKinda related: https://github.com/huggingface/datasets/issues/5847", "(For anyone's information, I'm going on vacation for the next 3 week...
2,811,566,116
7,380
fix: dill default for version bigger 0.3.8
closed
Fixes def log for dill version >= 0.3.9 (https://pypi.org/project/dill/). This project uses dill; the release of version 0.3.9 affects the datasets lib.
true
2025-01-26T13:37:16Z
2025-03-13T20:40:19Z
2025-03-13T20:40:19Z
sam-hey
NONE
https://github.com/huggingface/datasets/pull/7380
null
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7380
true
[ "`datasets` doesn't support `dill` 0.3.9 yet afaik since `dill` made some changes related to the determinism of dumps\r\n\r\nIt would be cool to investigate (maybe run the `datasets` test) with recent `dill` to see excactly what breaks and if we can make `dill` 0.3.9 work with `datasets`" ]
2,802,957,388
7,378
Allow pushing config version to hub
open
### Feature request Currently, when datasets are created, they can be versioned by passing the `version` argument to `load_dataset(...)`. For example creating `outcomes.csv` on the command line ``` echo "id,value\n1,0\n2,0\n3,1\n4,1\n" > outcomes.csv ``` and creating it ``` import datasets dataset = datasets.load_dat...
true
2025-01-21T22:35:07Z
2025-01-30T13:56:56Z
null
momeara
NONE
null
null
1
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7378
false
[ "Hi ! This sounds reasonable to me, feel free to open a PR :)" ]
2,802,723,285
7,377
Support for sparse arrays with the Arrow Sparse Tensor format?
open
### Feature request AI in biology is becoming a big thing. One thing that would be a huge benefit to the field that Huggingface Datasets doesn't currently have is native support for **sparse arrays**. Arrow has support for sparse tensors. https://arrow.apache.org/docs/format/Other.html#sparse-tensor It would be ...
true
2025-01-21T20:14:35Z
2025-01-30T14:06:45Z
null
JulesGM
NONE
null
null
1
4
2
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7377
false
[ "Hi ! Unfortunately the Sparse Tensor structure in Arrow is not part of the Arrow format (yes it's confusing...), so it's not possible to use it in `datasets`. It's a separate structure that doesn't correspond to any type or extension type in Arrow.\n\nThe Arrow community recently added an extension type for fixed ...
2,802,621,104
7,376
[docs] uv install
closed
Proposes adding uv to installation docs (see Slack thread [here](https://huggingface.slack.com/archives/C01N44FJDHT/p1737377177709279) for more context) if you're interested!
true
2025-01-21T19:15:48Z
2025-03-14T20:16:35Z
2025-03-14T20:16:35Z
stevhliu
MEMBER
https://github.com/huggingface/datasets/pull/7376
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7376
true
[]
2,800,609,218
7,375
vllm batch inference error
open
### Describe the bug ![Image](https://github.com/user-attachments/assets/3d958e43-28dc-4467-9333-5990c7af3b3f) ### Steps to reproduce the bug ![Image](https://github.com/user-attachments/assets/3067eeca-a54d-4956-b0fd-3fc5ea93dabb) ### Expected behavior ![Image](https://github.com/user-attachments/assets/77d32936-...
true
2025-01-21T03:22:23Z
2025-01-30T14:02:40Z
null
YuShengzuishuai
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7375
false
[ "Make sure you have installed a recent version of `soundfile`" ]
2,793,442,320
7,374
Remove .h5 from imagefolder extensions
closed
the format is not relevant for imagefolder, and makes the viewer fail to process datasets on HF (so many that the viewer takes more time to process new datasets)
true
2025-01-16T18:17:24Z
2025-01-16T18:26:40Z
2025-01-16T18:26:38Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7374
2025-01-16T18:26:38Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7374
true
[]
2,793,237,139
7,373
Excessive RAM Usage After Dataset Concatenation concatenate_datasets
open
### Describe the bug When loading a dataset from disk, concatenating it, and starting the training process, the RAM usage progressively increases until the kernel terminates the process due to excessive memory consumption. https://github.com/huggingface/datasets/issues/2276 ### Steps to reproduce the bug ```python ...
true
2025-01-16T16:33:10Z
2025-03-27T17:40:59Z
null
sam-hey
NONE
null
null
3
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7373
false
[ "![Image](https://github.com/user-attachments/assets/b6f8bcbd-44af-413e-bc06-65380eb0f746)\n\n![Image](https://github.com/user-attachments/assets/a241fcd8-4b62-495c-926c-685f82015dfb)\n\nAdding a img from memray\nhttps://gist.github.com/sam-hey/00c958f13fb0f7b54d17197fe353002f", "I'm having the same issue where c...
2,791,760,968
7,372
Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets
open
### Description I encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue: #### Code 1: Using `load_dataset` ```python from datasets import Dataset, load_dataset # First save with max_shard_size=10 Dataset.fr...
true
2025-01-16T05:47:20Z
2025-01-16T05:47:20Z
null
gaohongkui
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7372
false
[]
2,790,549,889
7,371
500 Server error with pushing a dataset
open
### Describe the bug Suddenly, I started getting this error message saying it was an internal error. `Error creating/pushing dataset: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/ll4ma-lab/grasp-dataset/commit/main (Request ID: Root=1-6787f0b7-66d5bd45413e481c4c2fb22d;670d04ff-...
true
2025-01-15T18:23:02Z
2025-01-15T20:06:05Z
null
martinmatak
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7371
false
[ "EDIT: seems to be all good now. I'll add a comment if the error happens again within the next 48 hours. If it doesn't, I'll just close the topic." ]
2,787,972,786
7,370
Support faster processing using pandas or polars functions in `IterableDataset.map()`
closed
Following the polars integration :) Allow super fast processing using pandas or polars functions in `IterableDataset.map()` by adding support to pandas and polars formatting in `IterableDataset` ```python import polars as pl from datasets import Dataset ds = Dataset.from_dict({"i": range(10)}).to_iterable_da...
true
2025-01-14T18:14:13Z
2025-01-31T11:08:15Z
2025-01-30T13:30:57Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7370
2025-01-30T13:30:57Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7370
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7370). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "merging this and will make some docs and communications around using polars for optimiz...
2,787,193,238
7,369
Importing dataset gives unhelpful error message when filenames in metadata.csv are not found in the directory
open
### Describe the bug While importing an audiofolder dataset, where the names of the audiofiles don't correspond to the filenames in the metadata.csv, we get an unclear error message that is not helpful for the debugging, i.e. ``` ValueError: Instruction "train" corresponds to no data! ``` ### Steps to reproduce the ...
true
2025-01-14T13:53:21Z
2025-01-14T15:05:51Z
null
svencornetsdegroot
NONE
null
null
1
1
1
0
null
false
[]
https://github.com/huggingface/datasets/issues/7369
false
[ "I'd prefer even more verbose errors; like `\"file123.mp3\" is referenced in metadata.csv, but not found in the data directory '/path/to/audiofolder' ! (and 100+ more missing files)` Or something along those lines." ]
2,784,272,477
7,368
Add with_split to DatasetDict.map
closed
#7356
true
2025-01-13T15:09:56Z
2025-03-08T05:45:02Z
2025-03-07T14:09:52Z
jp1924
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7368
2025-03-07T14:09:52Z
9
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7368
true
[ "Can you check this out, @lhoestq?", "cc @lhoestq @albertvillanova ", "@lhoestq\r\n", "@lhoestq\r\n", "@lhoestq", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7368). All of your documentation changes will be reflected on that endpoint. The docs are available until...
2,781,522,894
7,366
Dataset.from_dict() can't handle large dict
open
### Describe the bug I have 26,000,000 3-tuples. When I use Dataset.from_dict() to load, neither .py nor Jupyter notebook can run successfully. This is my code: ``` # len(example_data) is 26,000,000, 'diff' is a text diff1_list = [example_data[i].texts[0] for i in range(len(example_data))] diff2_list =...
true
2025-01-11T02:05:21Z
2025-01-11T02:05:21Z
null
CSU-OSS
NONE
null
null
0
1
1
0
null
false
[]
https://github.com/huggingface/datasets/issues/7366
false
[]
2,780,216,199
7,365
A parameter is specified but not used in datasets.arrow_dataset.Dataset.from_pandas()
open
### Describe the bug I am interested in creating train, test and eval splits from a pandas DataFrame, therefore I was looking at the possibilities I can follow. I noticed the split parameter and was hopeful to use it in order to generate the 3 at once; however, while trying to understand the code, I noticed that it ha...
true
2025-01-10T13:39:33Z
2025-01-10T13:39:33Z
null
NourOM02
NONE
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7365
false
[]
2,776,929,268
7,364
API endpoints for gated dataset access requests
closed
### Feature request I would like a programatic way of requesting access to gated datasets. The current solution to gain access forces me to visit a website and physically click an "agreement" button (as per the [documentation](https://huggingface.co/docs/hub/en/datasets-gated#access-gated-datasets-as-a-user)). An i...
true
2025-01-09T06:21:20Z
2025-01-09T11:17:40Z
2025-01-09T11:17:20Z
jerome-white
NONE
null
null
3
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7364
false
[ "Looks like a [similar feature request](https://github.com/huggingface/huggingface_hub/issues/1198) was made to the HF Hub team. Is handling this at the Hub level more appropriate?\r\n\r\n(As an aside, I've gotten the [HTTP-based solution](https://github.com/huggingface/huggingface_hub/issues/1198#issuecomment-1905...
2,774,090,012
7,363
ImportError: To support decoding images, please install 'Pillow'.
open
### Describe the bug Following this tutorial locally using a MacBook and VSCode: https://huggingface.co/docs/diffusers/en/tutorials/basic_training This line of code: for i, image in enumerate(dataset[:4]["image"]): throws: ImportError: To support decoding images, please install 'Pillow'. Pillow is installed. ###...
true
2025-01-08T02:22:57Z
2025-05-28T14:56:53Z
null
jamessdixon
NONE
null
null
4
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7363
false
[ "what's your `pip show Pillow` output", "same issue.. my pip show Pillow output as below:\n\n```\nName: pillow\nVersion: 11.1.0\nSummary: Python Imaging Library (Fork)\nHome-page: https://python-pillow.github.io/\nAuthor: \nAuthor-email: \"Jeffrey A. Clark\" <aclark@aclark.net>\nLicense: MIT-CMU\nLocation: [/opt/...
2,773,731,829
7,362
HuggingFace CLI dataset download raises error
closed
### Describe the bug Trying to download Hugging Face datasets using Hugging Face CLI raises error. This error only started after December 27th, 2024. For example: ``` huggingface-cli download --repo-type dataset gboleda/wikicorpus Traceback (most recent call last): File "/home/ubuntu/test_venv/bin/huggingface...
true
2025-01-07T21:03:30Z
2025-01-08T15:00:37Z
2025-01-08T14:35:52Z
ajayvohra2005
NONE
null
null
3
3
3
0
null
false
[]
https://github.com/huggingface/datasets/issues/7362
false
[ "I got the same error and was able to resolve it by upgrading from 2.15.0 to 3.2.0.", "> I got the same error and was able to resolve it by upgrading from 2.15.0 to 3.2.0.\r\n\r\nWhat is needed is upgrading `huggingface-hub==0.27.1`. `datasets` does not appear to have anything to do with the error. The upgrade is...
2,771,859,244
7,361
Fix lock permission
open
All files except lock file have proper permission obeying `ACL` property if it is set. If the cache directory has `ACL` property, it should be respected instead of just using `umask` for permission. To fix it, just create a lock file and pass the created `mode`. By creating a lock file with `touch()` before `Fil...
true
2025-01-07T04:15:53Z
2025-01-07T04:49:46Z
null
cih9088
NONE
https://github.com/huggingface/datasets/pull/7361
null
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7361
true
[]
2,771,751,406
7,360
error when loading dataset in Hugging Face: NoneType error is not callable
open
### Describe the bug I met an error when running a notebook provide by Hugging Face, and met the error. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[2], line 5 3 # Load the enhancers dat...
true
2025-01-07T02:11:36Z
2025-02-24T13:32:52Z
null
nanu23333
NONE
null
null
5
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7360
false
[ "Hi ! I couldn't reproduce on my side, can you try deleting your cache at `~/.cache/huggingface/modules/datasets_modules/datasets/InstaDeepAI--nucleotide_transformer_downstream_tasks_revised` and try again ? For some reason `datasets` wasn't able to find the DatasetBuilder class in the python script of this dataset...
2,771,137,842
7,359
There are multiple 'mteb/arguana' configurations in the cache: default, corpus, queries with HF_HUB_OFFLINE=1
open
### Describe the bug Hey folks, I am trying to run this code - ```python from datasets import load_dataset, get_dataset_config_names ds = load_dataset("mteb/arguana") ``` with HF_HUB_OFFLINE=1 But I get the following error - ```python Using the latest cached version of the dataset since mteb/arguana...
true
2025-01-06T17:42:49Z
2025-01-06T17:43:31Z
null
Bhavya6187
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7359
false
[ "Related to https://github.com/embeddings-benchmark/mteb/issues/1714" ]
2,770,927,769
7,358
Fix remove_columns in the formatted case
open
`remove_columns` had no effect when running a function in `.map()` on a dataset that is formatted. This aligns the logic of `map()` with the non-formatted case and also with https://github.com/huggingface/datasets/pull/7353
true
2025-01-06T15:44:23Z
2025-01-06T15:46:46Z
null
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7358
null
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7358
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7358). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,770,456,127
7,357
Python process aborded with GIL issue when using image dataset
open
### Describe the bug The issue is visible only with the latest `datasets==3.2.0`. When using image dataset the Python process gets aborted right before the exit with the following error: ``` Fatal Python error: PyGILState_Release: thread state 0x7fa1f409ade0 must be current when releasing Python runtime state: f...
true
2025-01-06T11:29:30Z
2025-03-08T15:59:36Z
null
AlexKoff88
NONE
null
null
1
3
3
0
null
false
[]
https://github.com/huggingface/datasets/issues/7357
false
[ "The issue seems to come from `pyarrow`, I opened an issue on their side at https://github.com/apache/arrow/issues/45214" ]
2,770,095,103
7,356
How about adding a feature to pass the key when performing map on DatasetDict?
closed
### Feature request Add a feature to pass the key of the DatasetDict when performing map ### Motivation I often preprocess using map on DatasetDict. Sometimes, I need to preprocess train and valid data differently depending on the task. So, I thought it would be nice to pass the key (like train, valid) when perf...
true
2025-01-06T08:13:52Z
2025-03-24T10:57:47Z
2025-03-24T10:57:47Z
jp1924
CONTRIBUTOR
null
null
6
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7356
false
[ "@lhoestq \r\nIf it's okay with you, can I work on this?", "Hi ! Can you give an example of what it would look like to use this new feature ?\r\n\r\nNote that currently you can already do\r\n\r\n```python\r\nds[\"train\"] = ds[\"train\"].map(process_train)\r\nds[\"test\"] = ds[\"test\"].map(process_test)\r\n```",...
2,768,958,211
7,355
Not available datasets[audio] on python 3.13
open
### Describe the bug This is the error I got; it seems the numba package does not support Python 3.13 PS C:\Users\sergi\Documents> pip install datasets[audio] Defaulting to user installation because normal site-packages is not writeable Collecting datasets[audio] Using cached datasets-3.2.0-py3-none-any.whl.metada...
true
2025-01-04T18:37:08Z
2025-01-10T10:46:00Z
null
sergiosinlimites
NONE
null
null
1
1
1
0
null
false
[]
https://github.com/huggingface/datasets/issues/7355
false
[ "It looks like an issue with `numba` which can't be installed on 3.13 ? `numba` is a dependency of `librosa`, used to decode audio files" ]
2,768,955,917
7,354
A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.2 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
closed
### Describe the bug Following this tutorial: https://huggingface.co/docs/diffusers/en/tutorials/basic_training and running it locally using VSCode on my MacBook. The first line in the tutorial fails: from datasets import load_dataset dataset = load_dataset('huggan/smithsonian_butterflies_subset', split="train"). w...
true
2025-01-04T18:30:17Z
2025-01-08T02:20:58Z
2025-01-08T02:20:58Z
jamessdixon
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7354
false
[ "recreated .venv and run this: pip install diffusers[training]==0.11.1" ]
2,768,484,726
7,353
changes to MappedExamplesIterable to resolve #7345
closed
modified `MappedExamplesIterable` and `test_iterable_dataset.py::test_mapped_examples_iterable_with_indices` fix #7345 @lhoestq
true
2025-01-04T06:01:15Z
2025-01-07T11:56:41Z
2025-01-07T11:56:41Z
vttrifonov
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/7353
2025-01-07T11:56:41Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7353
true
[ "I noticed that `Dataset.map` has a more complex output depending on `remove_columns`. In particular [this](https://github.com/huggingface/datasets/blob/6457be66e2ef88411281eddc4e7698866a3977f1/src/datasets/arrow_dataset.py#L3371) line removes columns from output if the input is being modified in place (i.e. `input...
2,767,763,850
7,352
fsspec 2024.12.0
closed
true
2025-01-03T15:32:25Z
2025-01-03T15:34:54Z
2025-01-03T15:34:11Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7352
2025-01-03T15:34:11Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7352
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7352). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,767,731,707
7,350
Bump hfh to 0.24 to fix ci
closed
true
2025-01-03T15:09:40Z
2025-01-03T15:12:17Z
2025-01-03T15:10:27Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7350
2025-01-03T15:10:27Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7350
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7350). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,767,670,454
7,349
Webdataset special columns in last position
closed
Place columns "__key__" and "__url__" in last position in the Dataset Viewer since they are not the main content before: <img width="1012" alt="image" src="https://github.com/user-attachments/assets/b556c1fe-2674-4ba0-9643-c074aa9716fd" />
true
2025-01-03T14:32:15Z
2025-01-03T14:34:39Z
2025-01-03T14:32:30Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7349
2025-01-03T14:32:30Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7349
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7349). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,766,128,230
7,348
Catch OSError for arrow
closed
fixes https://github.com/huggingface/datasets/issues/7346 (also updated `ruff` and applied style changes)
true
2025-01-02T14:30:00Z
2025-01-09T14:25:06Z
2025-01-09T14:25:04Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7348
2025-01-09T14:25:04Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7348
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7348). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,760,282,339
7,347
Converting Arrow to WebDataset TAR Format for Offline Use
closed
### Feature request Hi, I've downloaded an Arrow-formatted dataset offline using the hugggingface's datasets library by: ``` import json from datasets import load_dataset dataset = load_dataset("pixparse/cc3m-wds") dataset.save_to_disk("./cc3m_1") ``` now I need to convert it to WebDataset's TAR form...
true
2024-12-27T01:40:44Z
2024-12-31T17:38:00Z
2024-12-28T15:38:03Z
katie312
NONE
null
null
4
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7347
false
[ "Hi,\r\n\r\nI've downloaded an Arrow-formatted dataset offline using the hugggingface's datasets library by:\r\n\r\nimport json\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"pixparse/cc3m-wds\")\r\ndataset.save_to_disk(\"./cc3m_1\")\r\n\r\n\r\nnow I need to convert it to WebDataset's TAR form...
2,758,752,118
7,346
OSError: Invalid flatbuffers message.
closed
### Describe the bug When loading large 2D data (1000 × 1152) with a large number of samples (2,000 in this case) via `load_dataset`, the error message `OSError: Invalid flatbuffers message` is reported. When only 300 pieces of data of this size (1000 × 1152) are stored, they can be loaded correctly. When 2,00...
true
2024-12-25T11:38:52Z
2025-01-09T14:25:29Z
2025-01-09T14:25:05Z
antecede
NONE
null
null
3
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7346
false
[ "Thanks for reporting, it looks like an issue with `pyarrow.ipc.open_stream`\r\n\r\nCan you try installing `datasets` from this pull request and see if it helps ? https://github.com/huggingface/datasets/pull/7348", "> Thanks for reporting, it looks like an issue with `pyarrow.ipc.open_stream`\r\n> \r\n> Can you t...
2,758,585,709
7,345
Different behaviour of IterableDataset.map vs Dataset.map with remove_columns
closed
### Describe the bug The following code ```python import datasets as hf ds1 = hf.Dataset.from_list([{'i': i} for i in [0,1]]) #ds1 = ds1.to_iterable_dataset() ds2 = ds1.map( lambda i: {'i': i+1}, input_columns = ['i'], remove_columns = ['i'] ) list(ds2) ``` produces ```python [{'i': ...
true
2024-12-25T07:36:48Z
2025-01-07T11:56:42Z
2025-01-07T11:56:42Z
vttrifonov
CONTRIBUTOR
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7345
false
[ "Good catch ! Do you think you can open a PR to fix this issue ?" ]
2,754,735,951
7,344
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs
closed
### Describe the bug I am trying to run some trainings on Google's TPUs using Huggingface's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into `429 Client Error: Too Many Requests for URL` error when ...
true
2024-12-22T16:30:07Z
2025-01-15T05:32:00Z
2025-01-15T05:31:58Z
clankur
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7344
false
[ "Hi ! This is due to your old version of `datasets` which calls HF with `expand=True`, an option that is strongly rate limited.\r\n\r\nRecent versions of `datasets` don't rely on this anymore, you can fix your issue by upgrading `datasets` :)\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nYou can also get maxi...
2,750,525,823
7,343
[Bug] Inconsistent behavior of data_files and data_dir in load_dataset method.
closed
### Describe the bug Inconsistent behavior of data_files and data_dir in the load_dataset method. ### Steps to reproduce the bug # First I have three files, named 'train.json', 'val.json', 'test.json'. Each one has a simple dict `{text:'aaa'}`. Their paths are `/data/train.json`, `/data/val.json`, `/data/test.jso...
true
2024-12-19T14:31:27Z
2025-01-03T15:54:09Z
2025-01-03T15:54:09Z
JasonCZH4
NONE
null
null
4
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7343
false
[ "Hi ! `data_files` with a list is equivalent to `data_files={\"train\": data_files}` with a train test only.\r\n\r\nWhen no split are specified, they are inferred based on file names, and files with no apparent split are ignored", "Thanks for your reply!\r\n`files with no apparent split are ignored`. Is there a o...
2,749,572,310
7,342
Update LICENSE
closed
true
2024-12-19T08:17:50Z
2024-12-19T08:44:08Z
2024-12-19T08:44:08Z
eliebak
NONE
https://github.com/huggingface/datasets/pull/7342
null
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7342
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7342). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,745,658,561
7,341
minor video docs on how to install
closed
true
2024-12-17T18:06:17Z
2024-12-17T18:11:17Z
2024-12-17T18:11:15Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7341
2024-12-17T18:11:14Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7341
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7341). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,745,473,274
7,340
don't import soundfile in tests
closed
true
2024-12-17T16:49:55Z
2024-12-17T16:54:04Z
2024-12-17T16:50:24Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7340
2024-12-17T16:50:24Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7340
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7340). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,745,460,060
7,339
Update CONTRIBUTING.md
closed
true
2024-12-17T16:45:25Z
2024-12-17T16:51:36Z
2024-12-17T16:46:30Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/7339
2024-12-17T16:46:30Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/7339
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7339). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2,744,877,569
7,337
One or several metadata.jsonl were found, but not in the same directory or in a parent directory of
open
### Describe the bug ImageFolder with metadata.jsonl error. I downloaded liuhaotian/LLaVA-CC3M-Pretrain-595K locally from Hugging Face. According to the tutorial in https://huggingface.co/docs/datasets/image_dataset#image-captioning, only put images.zip and metadata.jsonl containing information in the same folder. How...
true
2024-12-17T12:58:43Z
2025-01-03T15:28:13Z
null
mst272
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/7337
false
[ "Hmmm I double checked in the source code and I found a contradiction: in the current implementation the metadata file is ignored if it's not in the same archive as the zip image somehow:\r\n\r\nhttps://github.com/huggingface/datasets/blob/caa705e8bf4bedf1a956f48b545283b2ca14170a/src/datasets/packaged_modules/folde...
2,744,746,456
7,336
Clarify documentation or Create DatasetCard
open
### Feature request I noticed that you can use a Model Card instead of a Dataset Card when pushing a dataset to the Hub, but this isn’t clearly mentioned in [the docs.](https://huggingface.co/docs/datasets/dataset_card) - Update the docs to clarify that a Model Card can work for datasets too. - It might be worth c...
true
2024-12-17T12:01:00Z
2024-12-17T12:01:00Z
null
August-murr
NONE
null
null
0
0
0
0
null
false
[ "enhancement" ]
https://github.com/huggingface/datasets/issues/7336
false
[]