Column schema (types and observed value ranges from the dataset viewer; ⌀ marks nullable columns):

- `id`: int64 (599M – 3.18B)
- `number`: int64 (1 – 7.65k)
- `title`: string (lengths 1–290)
- `state`: string (2 values)
- `body`: string (lengths 0–228k)
- `is_pull_request`: bool (1 class)
- `created_at`: string date (2020-04-14 10:18:02 – 2025-06-26 12:23:48)
- `updated_at`: string date (2020-04-27 16:04:17 – 2025-06-26 14:02:38)
- `closed_at`: string (length 20, ⌀)
- `user_login`: string (lengths 3–26)
- `author_association`: string (4 values)
- `pr_url`: string (lengths 46–49, ⌀)
- `pr_merged_at`: string (length 20, ⌀)
- `comments_count`: int64 (0–70)
- `reactions_total`: int64 (0–61)
- `reactions_plus1`: int64 (0–39)
- `reactions_heart`: int64 (0–22)
- `draft`: bool (2 classes)
- `locked`: bool (1 class)
- `labels`: list (lengths 0–4)
- `html_url`: string (lengths 46–51)
- `is_pr_url`: bool (2 classes)
- `comments`: list (lengths 0–30)

| id | number | title | state | body | is_pull_request | created_at | updated_at | closed_at | user_login | author_association | pr_url | pr_merged_at | comments_count | reactions_total | reactions_plus1 | reactions_heart | draft | locked | labels | html_url | is_pr_url | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,027,024,285 | 7,544 | Add try_original_type to DatasetDict.map | closed | This PR resolves #7472 for DatasetDict
The previously merged PR #7483 added `try_original_type` to ArrowDataset, but `DatasetDict` was still missing `try_original_type`
Cc: @lhoestq | true | 2025-04-29T04:39:44Z | 2025-05-05T14:42:49Z | 2025-05-05T14:42:49Z | yoshitomo-matsubara | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7544 | 2025-05-05T14:42:49Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7544 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7544). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Sure! I just committed the changes",
"@lhoestq \r\nLet me know if there are other thi... |
3,026,867,706 | 7,543 | The memory-disk mapping failure issue of the map function (resolved, but there are some suggestions) | closed | ### Describe the bug
## bug
When the map function processes a large dataset, it temporarily stores the data in a cache file on the disk. After the data is stored, the memory occupied by it is released. Therefore, when using the map function to process a large-scale dataset, only a dataset space of the size of `writer_... | true | 2025-04-29T03:04:59Z | 2025-04-30T02:22:17Z | 2025-04-30T02:22:17Z | jxma20 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7543 | false | [] |
3,025,054,630 | 7,542 | set dev version | closed | true | 2025-04-28T14:03:48Z | 2025-04-28T14:08:37Z | 2025-04-28T14:04:00Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7542 | 2025-04-28T14:04:00Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7542 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7542). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
3,025,045,919 | 7,541 | release: 3.5.1 | closed | true | 2025-04-28T14:00:59Z | 2025-04-28T14:03:38Z | 2025-04-28T14:01:54Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7541 | 2025-04-28T14:01:54Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7541 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7541). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
3,024,862,966 | 7,540 | support pyarrow 20 | closed | fix
```
TypeError: ArrayExtensionArray.to_pylist() got an unexpected keyword argument 'maps_as_pydicts'
``` | true | 2025-04-28T13:01:11Z | 2025-04-28T13:23:53Z | 2025-04-28T13:23:52Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7540 | 2025-04-28T13:23:52Z | 1 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/7540 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7540). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,023,311,163 | 7,539 | Fix IterableDataset state_dict shard_example_idx counting | closed | # Fix IterableDataset's state_dict shard_example_idx reporting
## Description
This PR fixes issue #7475 where the `shard_example_idx` value in `IterableDataset`'s `state_dict()` always equals the number of samples in a shard, even if only a few examples have been consumed.
The issue is in the `_iter_arrow` met... | true | 2025-04-27T20:41:18Z | 2025-05-06T14:24:25Z | 2025-05-06T14:24:24Z | Harry-Yang0518 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7539 | null | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7539 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7539). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi ! FYI I made a PR to fix https://github.com/huggingface/datasets/issues/7538 and it ... |
3,023,280,056 | 7,538 | `IterableDataset` drops samples when resuming from a checkpoint | closed | When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted.
In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one ... | true | 2025-04-27T19:34:49Z | 2025-05-06T14:04:05Z | 2025-05-06T14:03:42Z | mariosasko | COLLABORATOR | null | null | 1 | 1 | 1 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/7538 | false | [
"Thanks for reporting ! I fixed the issue using RebatchedArrowExamplesIterable before the formatted iterable"
] |
3,018,792,966 | 7,537 | `datasets.map(..., num_proc=4)` multi-processing fails | open | The following code fails in python 3.11+
```python
tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
```
Error log:
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap
self.ru... | true | 2025-04-25T01:53:47Z | 2025-05-06T13:12:08Z | null | faaany | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7537 | false | [
"related: https://github.com/huggingface/datasets/issues/7510\n\nwe need to do more tests to see if latest `dill` is deterministic"
] |
3,018,425,549 | 7,536 | [Errno 13] Permission denied: on `.incomplete` file | closed | ### Describe the bug
When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.
It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can somet... | true | 2025-04-24T20:52:45Z | 2025-05-06T13:05:01Z | 2025-05-06T13:05:01Z | ryan-clancy | CONTRIBUTOR | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7536 | false | [
"It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread-safe function to apply the umask (using filelock for example)",
"> It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread-safe function to apply the umask (usin... |
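The maintainers' suggestion above (serializing access to the process umask so concurrent downloads don't race) can be sketched in pure Python. This is an illustrative sketch, not the actual `datasets` fix; the helper name is hypothetical, and they mention `filelock` because cross-process safety would need a file lock rather than a thread lock.

```python
import os
import threading

# os.umask() can only be *read* by setting it, so unsynchronized readers
# can leave another thread with the wrong umask. Serialize access.
_umask_lock = threading.Lock()

def apply_umask(mode: int) -> int:
    """Return `mode` with the current process umask applied (hypothetical helper)."""
    with _umask_lock:
        current = os.umask(0o022)  # temporarily set a value to read the old one
        os.umask(current)          # restore it immediately
    return mode & ~current
```

A `threading.Lock` covers threads in one process; the `.incomplete` issue above involves multiple threads, so this is the relevant case, but a `filelock` would extend the same idea across processes.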
3,018,289,872 | 7,535 | Change dill version in requirements | open | Change dill version to >=0.3.9,<0.4.5 and check for errors | true | 2025-04-24T19:44:28Z | 2025-05-19T14:51:29Z | null | JGrel | NONE | https://github.com/huggingface/datasets/pull/7535 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7535 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7535). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,017,259,407 | 7,534 | TensorFlow RaggedTensor Support (batch-level) | open | ### Feature request
Hi,
Currently, datasets does not support RaggedTensor output at the batch level.
When building an Object Detection dataset (with TensorFlow) I need to enable RaggedTensors, as that's how BBoxes & classes are expected from the Keras Model POV.
Currently there's an error thrown saying that "Nested Data is ... | true | 2025-04-24T13:14:52Z | 2025-05-08T14:13:47Z | null | Lundez | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7534 | false | [
"Keras doesn't support inputs other than tf.data.Dataset objects? It's a bit painful to have to support and maintain this kind of integration\n\nIs there a way to use a `datasets.Dataset` with outputs formatted as tensors / ragged tensors instead? Like in https://huggingface.co/docs/datasets/use_with_tensor... |
3,015,075,086 | 7,533 | Add custom fingerprint support to `from_generator` | open | This PR adds `dataset_id_suffix` parameter to 'Dataset.from_generator' function.
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including generator function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount ... | true | 2025-04-23T19:31:35Z | 2025-06-10T10:13:00Z | null | simonreise | NONE | https://github.com/huggingface/datasets/pull/7533 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7533 | true | [
"This is great !\r\n\r\nWhat do you think of passing `config_id=` directly to the builder instead of just the suffix ? This would be a power user argument though, or for internal use. And in from_generator the new argument can be `fingerprint=` as in `Dataset.__init__()`\r\n\r\nThe `config_id` can be defined using ... |
3,009,546,204 | 7,532 | Document the HF_DATASETS_CACHE environment variable in the datasets cache documentation | closed | This pull request updates the Datasets documentation to include the `HF_DATASETS_CACHE` environment variable. While the current documentation only mentions `HF_HOME` for overriding the default cache directory, `HF_DATASETS_CACHE` is also a supported and useful option for specifying a custom cache location for datasets ... | true | 2025-04-22T00:23:13Z | 2025-05-06T15:54:38Z | 2025-05-06T15:54:38Z | Harry-Yang0518 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7532 | 2025-05-06T15:54:38Z | 3 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7532 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7532). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Your clarification in your comment at https://github.com/huggingface/datasets/issues/74... |
3,008,914,887 | 7,531 | Deepspeed reward training hangs at end of training with Dataset.from_list | open | There seems to be a weird interaction between Deepspeed, the Dataset.from_list method and trl's RewardTrainer. On a multi-GPU setup (10 A100s), training always hangs at the very end of training until it times out. The training itself works fine until the end of training and running the same script with Deepspeed on a s... | true | 2025-04-21T17:29:20Z | 2025-05-06T13:30:41Z | null | Matt00n | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7531 | false | [
"Hi ! How big is the dataset ? If you load it using `from_list`, the dataset lives in memory and has to be copied to every GPU process, which can be slow.\n\nIt's faster if you load it from JSON files on disk, because in that case the dataset is converted to Arrow and loaded from disk using memory mapping. Memory... |
3,007,452,499 | 7,530 | How to solve "Spaces stuck in Building" problems | closed | ### Describe the bug
Public spaces may get stuck in Building after restarting; error log as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401... | true | 2025-04-21T03:08:38Z | 2025-04-22T07:49:52Z | 2025-04-22T07:49:52Z | ghost | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7530 | false | [
"I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n",
"> I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n\nAlso see https://github.com/huggingface/huggingface_hub/issues/3019",
"I'm facing the same issu... |
3,007,118,969 | 7,529 | audio folder builder cannot detect custom split name | open | ### Describe the bug
when using the audio folder builder (`load_dataset("audiofolder", data_dir="/path/to/folder")`), it cannot detect custom split names other than train/validation/test
### Steps to reproduce the bug
I have the following folder structure
```
my_dataset/
├── train/
│ ├── lorem.wav
│ ├── …
│ └── met... | true | 2025-04-20T16:53:21Z | 2025-04-20T16:53:21Z | null | phineas-pta | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7529 | false | [] |
3,006,433,485 | 7,528 | Data Studio Error: Convert JSONL incorrectly | open | ### Describe the bug
Hi there,
I uploaded a dataset here https://huggingface.co/datasets/V-STaR-Bench/V-STaR, but I found that Data Studio incorrectly converts the "bboxes" value for the whole dataset. Therefore, anyone who downloaded the dataset via the API would get the wrong "bboxes" value in the data file.
Could ... | true | 2025-04-19T13:21:44Z | 2025-05-06T13:18:38Z | null | zxccade | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7528 | false | [
"Hi ! Your JSONL file is incompatible with Arrow / Parquet. Indeed in Arrow / Parquet every dict should have the same keys, while in your dataset the bboxes have varying keys.\n\nThis causes the Data Studio to treat the bboxes as if each row was missing the keys from other rows.\n\nFeel free to take a look at the d... |
3,005,242,422 | 7,527 | Auto-merge option for `convert-to-parquet` | open | ### Feature request
Add a command-line option, e.g. `--auto-merge-pull-request`, that enables automatic merging of the commits created by the `convert-to-parquet` tool.
### Motivation
Large datasets may result in dozens of PRs due to the splitting mechanism. Each of these has to be manually accepted via the website.
... | true | 2025-04-18T16:03:22Z | 2025-05-07T12:47:02Z | null | klamike | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7527 | false | [
"Alternatively, there could be an option to switch from submitting PRs to just committing changes directly to `main`.",
"Why not, I'd be in favor of `--merge-pull-request` to call `HfApi().merge_pull_request()` at the end of the conversion :) feel free to open a PR if you'd like",
"#self-assign"
] |
3,005,107,536 | 7,526 | Faster downloads/uploads with Xet storage | open | 
## Xet is out !
Over the past few weeks, Hugging Face’s [Xet Team](https://huggingface.co/xet-team) took a major step forward by [migrating the first Model and Dataset repositories off LFS and to Xet storage](https://huggingface... | true | 2025-04-18T14:46:42Z | 2025-05-12T12:09:09Z | null | lhoestq | MEMBER | null | null | 0 | 5 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7526 | false | [] |
3,003,032,248 | 7,525 | Fix indexing in split commit messages | closed | When a large commit is split up, it seems the commit index in the message is zero-based while the total number is one-based. I came across this running `convert-to-parquet` and was wondering why there was no `6-of-6` commit. This PR fixes that by adding one to the commit index, so both are one-based.
Current behavio... | true | 2025-04-17T17:06:26Z | 2025-04-28T14:26:27Z | 2025-04-28T14:26:27Z | klamike | NONE | https://github.com/huggingface/datasets/pull/7525 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7525 | true | [
"Hi ! this is expected and is coherent with other naming conventions in `datasets` such as parquet shards naming"
] |
3,002,067,826 | 7,524 | correct use with polars example | closed | true | 2025-04-17T10:19:19Z | 2025-04-28T13:48:34Z | 2025-04-28T13:48:33Z | SiQube | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7524 | 2025-04-28T13:48:33Z | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7524 | true | [] | |
2,999,616,692 | 7,523 | mention av in video docs | closed | true | 2025-04-16T13:11:12Z | 2025-04-16T13:13:45Z | 2025-04-16T13:11:42Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7523 | 2025-04-16T13:11:42Z | 1 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7523 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7523). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,998,169,017 | 7,522 | Preserve formatting in concatenated IterableDataset | closed | Fixes #7515 | true | 2025-04-16T02:37:33Z | 2025-05-19T15:07:38Z | 2025-05-19T15:07:37Z | francescorubbo | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7522 | 2025-05-19T15:07:37Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7522 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7522). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,997,666,366 | 7,521 | fix: Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames (#7517) | closed | ## Task
Support bytes-like objects (bytes and bytearray) in Features classes
### Description
The `Features` classes only accept `bytes` objects for binary data, but not `bytearray`. This leads to errors when using `IterableDataset.from_spark()` with Spark DataFrames as they contain `bytearray` objects, even though... | true | 2025-04-15T21:23:58Z | 2025-05-07T14:17:29Z | 2025-05-07T14:17:29Z | giraffacarp | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7521 | 2025-05-07T14:17:29Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7521 | true | [
"@lhoestq let me know if you prefer to change the spark iterator so it outputs `bytes`",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7521). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
... |
2,997,422,044 | 7,520 | Update items in the dataset without `map` | open | ### Feature request
I would like to be able to update items in my dataset without affecting all rows. At least if there was a range option, I would be able to process those items, save the dataset, and then continue.
If I am supposed to split the dataset first, that is not clear, since the docs suggest that any of th... | true | 2025-04-15T19:39:01Z | 2025-04-19T18:47:46Z | null | mashdragon | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7520 | false | [
"Hello!\n\nHave you looked at `Dataset.shard`? [Docs](https://huggingface.co/docs/datasets/en/process#shard)\n\nUsing this method you could break your dataset in N shards. Apply `map` on each shard and concatenate them back."
] |
2,996,458,961 | 7,519 | pdf docs fixes | closed | close https://github.com/huggingface/datasets/issues/7494 | true | 2025-04-15T13:35:56Z | 2025-04-15T13:38:31Z | 2025-04-15T13:36:03Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7519 | 2025-04-15T13:36:03Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7519 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7519). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,996,141,825 | 7,518 | num_proc parallelization works only for first ~10s. | open | ### Describe the bug
When I try to load an already downloaded dataset with num_proc=64, the speed is very high for the first 10-20 seconds, achieving 30-40K samples/s and 100% utilization for all cores, but it soon drops to <= 1000 with almost 0% utilization for most cores.
### Steps to reproduce the bug
```
// do... | true | 2025-04-15T11:44:03Z | 2025-04-15T13:12:13Z | null | pshishodiaa | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7518 | false | [
"Hi, can you check if the processes are still alive ? It's a bit weird because `datasets` does check if processes crash and return an error in that case",
"Thank you for reverting quickly. I dug a bit and realized my disk's IOPS is also limited, which is causing this. Will check further and report if it's an... |
2,996,106,077 | 7,517 | Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames | closed | ### Describe the bug
When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'`
### Steps to reproduce the bug
1. Create a Spark DataFrame with a col... | true | 2025-04-15T11:29:17Z | 2025-05-07T14:17:30Z | 2025-05-07T14:17:30Z | giraffacarp | CONTRIBUTOR | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7517 | false | [
"Hi ! The `Image()` type accepts either\n- a `bytes` object containing the image bytes\n- a `str` object containing the image path\n- a `PIL.Image` object\n\nbut it doesn't support `bytearray`, maybe you can convert to `bytes` beforehand ?",
"Hi @lhoestq, \nconverting to bytes is certainly possible and would work... |
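The conversion suggested above (turning `bytearray` values into `bytes` before rows reach the `Image()` feature) can be done with a small recursive helper. A pure-Python sketch; the helper name and the example row layout are illustrative, not Spark's actual row format.

```python
def coerce_bytearrays(value):
    """Recursively convert bytearray values to bytes so Image() accepts them."""
    if isinstance(value, bytearray):
        return bytes(value)
    if isinstance(value, dict):
        return {k: coerce_bytearrays(v) for k, v in value.items()}
    if isinstance(value, list):
        return [coerce_bytearrays(v) for v in value]
    return value

# Example row shaped like image data coming out of a Spark DataFrame (illustrative):
row = {"label": 1, "image": {"bytes": bytearray(b"\x89PNG"), "path": None}}
clean = coerce_bytearrays(row)
```

Applying such a helper to each yielded row would sidestep the `AttributeError` until `bytearray` is supported natively.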
2,995,780,283 | 7,516 | unsloth/DeepSeek-R1-Distill-Qwen-32B server error | closed | ### Describe the bug
hfhubhttperror: 500 server error: internal server error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919) internal error - we're working hard to fix this ... | true | 2025-04-15T09:26:53Z | 2025-04-15T09:57:26Z | 2025-04-15T09:57:26Z | Editor-1 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7516 | false | [] |
2,995,082,418 | 7,515 | `concatenate_datasets` does not preserve Pytorch format for IterableDataset | closed | ### Describe the bug
When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming it's consistent). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `con... | true | 2025-04-15T04:36:34Z | 2025-05-19T15:07:38Z | 2025-05-19T15:07:38Z | francescorubbo | CONTRIBUTOR | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7515 | false | [
"Hi ! Oh indeed it would be cool to return the same format in that case. Would you like to submit a PR ? The function that does the concatenation is here:\n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/iterable_dataset.py#L3375-L3380",
"Thank you for the poin... |
2,994,714,923 | 7,514 | Do not hash `generator` in `BuilderConfig.create_config_id` | closed | `Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including generator function itself. `BuilderConfig.create_config_id` function tries to hash all the args, and hashing a `generator` can take a large amount of time or even cause MemoryError if the dataset processed in a ... | true | 2025-04-15T01:26:43Z | 2025-04-23T11:55:55Z | 2025-04-15T16:27:51Z | simonreise | NONE | https://github.com/huggingface/datasets/pull/7514 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7514 | true | [] |
2,994,678,437 | 7,513 | MemoryError while creating dataset from generator | open | ### Describe the bug
# TL;DR
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including `generator` function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount of time or even cause MemoryError if the dataset pr... | true | 2025-04-15T01:02:02Z | 2025-04-23T19:37:08Z | null | simonreise | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7513 | false | [
"Upd: created a PR that can probably solve the problem: #7514",
"Hi ! We need to take the generator into account for the cache. The generator is hashed to make the dataset fingerprint used by the cache. This way you can reload the Dataset from the cache without regenerating in subsequent `from_generator` calls.\n... |
2,994,043,544 | 7,512 | .map() fails if function uses pyvista | open | ### Describe the bug
Using PyVista inside a .map() produces a crash with `objc[78796]: +[NSResponder initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to ... | true | 2025-04-14T19:43:02Z | 2025-04-14T20:01:53Z | null | el-hult | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7512 | false | [
"I found a similar (?) issue in https://github.com/huggingface/datasets/issues/6435, where someone had issues with forks and CUDA. According to https://huggingface.co/docs/datasets/main/en/process#multiprocessing we should do \n\n```\nfrom multiprocess import set_start_method\nset_start_method(\"spawn\")\n```\n\nto... |
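The advice quoted above boils down to using the "spawn" start method so worker processes do not inherit fork-unsafe state (Objective-C runtime locks, GPU contexts, and the like). A minimal sketch with the standard library's `multiprocessing`; the docs snippet quoted above uses the third-party `multiprocess` package that `datasets` relies on, which has the same API.

```python
import multiprocessing as mp

# "spawn" starts a fresh interpreter for each worker instead of fork()ing
# the parent, so fork-unsafe libraries (PyVista/VTK, CUDA, macOS GUI
# frameworks) are initialized cleanly in every child process.
ctx = mp.get_context("spawn")

# Pools and processes created from `ctx` use the spawn method without
# changing the global default for the rest of the program.
```

Per the quoted docs, calling `set_start_method("spawn")` globally before `.map(..., num_proc=...)` is the equivalent fix for `datasets` itself.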
2,992,131,117 | 7,510 | Incompatible dill version (0.3.9) in datasets 2.18.0 - 3.5.0 | open | ### Describe the bug
Datasets 2.18.0 - 3.5.0 has a dependency on dill < 0.3.9. This causes errors with dill >= 0.3.9.
Could you please take a look into it and make it compatible?
### Steps to reproduce the bug
1. Install datasets >= 2.18.0
2. Install dill >=0.3.9
3. Run pip check
4. Output:
ERROR: pip's dependenc... | true | 2025-04-14T07:22:44Z | 2025-05-19T14:54:04Z | null | JGrel | NONE | null | null | 6 | 2 | 2 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7510 | false | [
"Hi ! We can bump `dill` to 0.3.9 if we make sure it's deterministic and doesn't break the caching mechanism in `datasets`.\n\nWould you be interested in opening a PR ? Then we can run the CI to see if it works",
"Hi!. Yeah I can do it. Should I make any changes besides dill versions?",
"There are probably some... |
2,991,484,542 | 7,509 | Dataset uses excessive memory when loading files | open | ### Describe the bug
Hi
I am having an issue when loading a dataset.
I have about 200 JSON files, each about 1GB (about 215GB in total). Each row has a few features, each of which is a list of ints.
I am trying to load the dataset using `load_dataset`.
The dataset is about 1.5M samples
I use `num_proc=32` and a node with 378GB of... | true | 2025-04-13T21:09:49Z | 2025-04-28T15:18:55Z | null | avishaiElmakies | NONE | null | null | 12 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7509 | false | [
"Small update: I converted the JSON files to Parquet and it now works well with 32 procs and the same node.\nI still think this needs to be understood, since JSON is a very popular and easy-to-use format.",
"Hi ! The JSON loader loads full files in memory, unless they are JSON Lines. In this case it iterates on the J... |
2,986,612,934 | 7,508 | Iterating over Image feature columns is extremely slow | open | We are trying to load datasets where the image column stores `PIL.PngImagePlugin.PngImageFile` images. However, iterating over these datasets is extremely slow.
What I have found:
1. It is the presence of the image column that causes the slowdown. Removing the column from the dataset results in blazingly fast (as expe... | true | 2025-04-10T19:00:54Z | 2025-04-15T17:57:08Z | null | sohamparikh | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7508 | false | [
"Hi ! Could it be because the `Image()` type in dataset does `image = Image.open(image_path)` and also `image.load()` which actually loads the image data in memory ? This is needed to avoid too many open files issues, see https://github.com/huggingface/datasets/issues/3985",
"Yes, that seems to be it. For my pur... |
2,984,309,806 | 7,507 | Front-end statistical data quantity deviation | open | ### Describe the bug
While browsing the dataset at https://huggingface.co/datasets/NeuML/wikipedia-20250123, I noticed that a dataset with nearly 7M entries was estimated to be only 4M in size—almost half the actual amount. According to the post-download loading and the dataset_info (https://huggingface.co/datasets/Ne... | true | 2025-04-10T02:51:38Z | 2025-04-15T12:54:51Z | null | rangehow | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7507 | false | [
"Hi ! the format of this dataset is not supported by the Dataset Viewer. It looks like this dataset was saved using `save_to_disk()` which is meant for local storage / easy reload without compression, not for sharing online."
] |
2,981,687,450 | 7,506 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM | open | ### Describe the bug
I am trying to run some finetunings on 4 A100 GPUs using SLURM using axolotl training framework which in turn uses Huggingface's Trainer and Accelerate on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into 429 Client Error: Too Many Requests for URL er... | true | 2025-04-09T06:32:04Z | 2025-04-15T13:04:31Z | null | calvintanama | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7506 | false | [
"Hi ! make sure to be logged in with your HF account (e.g. using `huggingface-cli login` or passing `token=` to `load_dataset()`), otherwise you'll get rate limited at one point"
] |
2,979,926,156 | 7,505 | HfHubHTTPError: 403 Forbidden: None. Cannot access content at: https://hf.co/api/s3proxy | open | I have already logged in Huggingface using CLI with my valid token. Now trying to download the datasets using following code:
from transformers import WhisperProcessor, WhisperForConditionalGeneration, WhisperTokenizer, Trainer, TrainingArguments, DataCollatorForSeq2Seq
from datasets import load_dataset, Data... | true | 2025-04-08T14:08:40Z | 2025-04-08T14:08:40Z | null | hissain | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7505 | false | [] |
2,979,410,641 | 7,504 | BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key. | open | ### Describe the bug
Trying to run the following fine-tuning script (based on this page [here](https://github.com/huggingface/instruction-tuned-sd)):
```
! accelerate launch /content/instruction-tuned-sd/finetune_instruct_pix2pix.py \
--pretrained_model_name_or_path=${MODEL_ID} \
--dataset_name=${DATASET_NAME... | true | 2025-04-08T10:55:03Z | 2025-04-15T12:36:28Z | null | tteguayco | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7504 | false | [
"I encountered the same error, have you resolved it?",
"Hi ! `use_auth_token` has been deprecated and removed some time ago. You should use `token` instead in `load_dataset()`"
] |
2,978,512,625 | 7,503 | Inconsistency between load_dataset and load_from_disk functionality | open | ## Issue Description
I've encountered confusion when using `load_dataset` and `load_from_disk` in the datasets library. Specifically, when working offline with the gsm8k dataset, I can load it using a local path:
```python
import datasets
ds = datasets.load_dataset('/root/xxx/datasets/gsm8k', 'main')
```
output:
```t... | true | 2025-04-08T03:46:22Z | 2025-04-15T12:39:53Z | null | zzzzzec | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7503 | false | [
"Hi ! you can find more info here: https://github.com/huggingface/datasets/issues/5044#issuecomment-1263714347\n\n> What's the recommended approach for this use case? Should I manually process my gsm8k-new dataset to make it compatible with load_dataset? Is there a standard way to convert between these formats?\n\n... |
2,977,453,814 | 7,502 | `load_dataset` of size 40GB creates a cache of >720GB | closed | Hi there,
I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:
```python
ds = DatasetDict(
... | true | 2025-04-07T16:52:34Z | 2025-04-15T15:22:12Z | 2025-04-15T15:22:11Z | pietrolesci | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7502 | false | [
"Hi ! Parquet is a compressed format. When you load a dataset, it uncompresses the Parquet data into Arrow data on your disk. That's why you can indeed end up with 720GB of uncompressed data on disk. The uncompression is needed to enable performant dataset objects (especially for random access).\n\nTo save some sto... |
2,976,721,014 | 7,501 | Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct | closed | ### Describe the bug
`datasets.Features` seems to be unable to handle json file that contains fields of `list[dict]`.
### Steps to reproduce the bug
```json
// test.json
{"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]}
{"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]}
```
```python
import json
from datasets i... | true | 2025-04-07T12:35:39Z | 2025-04-07T12:43:04Z | 2025-04-07T12:43:03Z | yaner-here | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7501 | false | [
"Solved by the default `load_dataset(features)` parameters. Do not use `Sequence` for the `list` in `list[any]` json schema, just simply use `[]`. For example, `\"b\": Sequence({...})` fails but `\"b\": [{...}]` works fine."
] |
2,974,841,921 | 7,500 | Make `with_format` correctly indicate that a `Dataset` is compatible with PyTorch's `Dataset` class | open | ### Feature request
Currently `datasets` does not correctly indicate to the Python type-checker (e.g. `pyright` / `Pylance`) that the output of `with_format` is compatible with PyTorch's `Dataloader` since it does not indicate that the HuggingFace `Dataset` is compatible with the PyTorch `Dataset` class. It would be g... | true | 2025-04-06T09:56:09Z | 2025-04-15T12:57:39Z | null | benglewis | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7500 | false | [
"Does the torch `DataLoader` really require the dataset to be a subclass of `torch.utils.data.Dataset` ? Or is there a simpler type we could use ?\n\nPS: also note that a dataset without `with_format()` can also be used in a torch `DataLoader` . Calling `with_format(\"torch\")` simply makes the output of the datase... |
2,973,489,126 | 7,499 | Added cache dirs to load and file_utils | closed | When adding "cache_dir" to datasets.load_dataset, the cache_dir gets lost in the function calls, changing the cache dir to the default path. This fixes a few of these instances. | true | 2025-04-04T22:36:04Z | 2025-05-07T14:07:34Z | 2025-05-07T14:07:34Z | gmongaras | NONE | https://github.com/huggingface/datasets/pull/7499 | null | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7499 | true | [
"hi ! the `hf_hub_download` cache_dir is a different cache directory than the one for `datasets`.\r\n\r\n`hf_hub_download` uses the `huggingface_hub` cache which is located in by default in `~/.cache/huggingface/hub`, while `datasets` uses a different cache for Arrow files and map() results `~/.cache/huggingface/da... |
2,969,218,273 | 7,498 | Extreme memory bandwidth. | open | ### Describe the bug
When I use hf datasets on 4 GPU with 40 workers I get some extreme memory bandwidth of constant ~3GB/s.
However, if I wrap the dataset in `IterableDataset`, this issue is gone and the data also loads way faster (4x faster training on 1 worker).
It seems like the workers don't share memory and b... | true | 2025-04-03T11:09:08Z | 2025-04-03T11:11:22Z | null | J0SZ | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7498 | false | [] |
2,968,553,693 | 7,497 | How to convert videos to images? | open | ### Feature request
Does someone know how to return the images from videos?
### Motivation
I am trying to use openpi(https://github.com/Physical-Intelligence/openpi) to finetune my Lerobot dataset(V2.0 and V2.1). I find that although the codedaset is v2.0, they are different. It seems like Lerobot V2.0 has two versi... | true | 2025-04-03T07:08:39Z | 2025-04-15T12:35:15Z | null | Loki-Lu | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7497 | false | [
"Hi ! there is some documentation here on how to read video frames: https://huggingface.co/docs/datasets/video_load"
] |
2,967,345,522 | 7,496 | Json builder: Allow features to override problematic Arrow types | open | ### Feature request
In the JSON builder, use explicitly requested feature types before or while converting to Arrow.
### Motivation
Working with JSON datasets is really hard because of Arrow. At the very least, it seems like it should be possible to work-around these problems by explicitly setting problematic colum... | true | 2025-04-02T19:27:16Z | 2025-04-15T13:06:09Z | null | edmcman | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7496 | false | [
"Hi ! It would be cool indeed, currently the JSON data are generally loaded here: \n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/packaged_modules/json/json.py#L137-L140\n\nMaybe we can pass a Arrow `schema` to avoid errors ?"
] |
2,967,034,060 | 7,495 | Columns in the dataset obtained though load_dataset do not correspond to the one in the dataset viewer since 3.4.0 | open | ### Describe the bug
I have noticed that on my dataset named [BrunoHays/Accueil_UBS](https://huggingface.co/datasets/BrunoHays/Accueil_UBS), since the version 3.4.0, every column except audio is missing when I load the dataset.
Interestingly, the dataset viewer still shows the correct columns
### Steps to reproduce ... | true | 2025-04-02T17:01:11Z | 2025-05-19T13:54:16Z | null | bruno-hays | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7495 | false | [
"Hi, the dataset viewer shows all the possible columns and their types, but `load_dataset()` iterates through all the columns that you defined. It seems that you only have one column (‘audio’) defined in your dataset because when I ran `print(ds.column_names)`, the only name I got was “audio”. You need to clearly d... |
2,965,347,685 | 7,494 | Broken links in pdf loading documentation | closed | ### Describe the bug
Hi, just a couple of small issues I ran into while reading the docs for [loading pdf data](https://huggingface.co/docs/datasets/main/en/document_load):
1. The link for the [`Create a pdf dataset`](https://huggingface.co/docs/datasets/main/en/document_load#pdffolder) points to https://huggingface.... | true | 2025-04-02T06:45:22Z | 2025-04-15T13:36:25Z | 2025-04-15T13:36:04Z | VyoJ | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7494 | false | [
"thanks for reporting ! I fixed the links, the docs will be updated in the next release"
] |
2,964,025,179 | 7,493 | push_to_hub does not upload videos | open | ### Describe the bug
Hello,
I would like to upload a video dataset (some .mp4 files and some segments within them), i.e. rows correspond to subsequences from videos. Videos might be referenced by several rows.
I created a dataset locally and it references the videos and the video readers can read them correctly. I u... | true | 2025-04-01T17:00:20Z | 2025-04-15T12:34:23Z | null | DominikVincent | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7493 | false | [
"Hi ! the `Video` type is still experimental, and in particular `push_to_hub` doesn't upload videos at the moment (only the paths).\n\nThere is an open question to either upload the videos inside the Parquet files, or rather have them as separate files (which is great to enable remote seeking/streaming)"
] |
2,959,088,568 | 7,492 | Closes #7457 | closed | This PR updates the documentation to include the HF_DATASETS_CACHE environment variable, which allows users to customize the cache location for datasets—similar to HF_HUB_CACHE for models. | true | 2025-03-30T20:41:20Z | 2025-04-13T22:05:07Z | 2025-04-13T22:05:07Z | Harry-Yang0518 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7492 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7492 | true | [
"This PR fixes issue #7457"
] |
2,959,085,647 | 7,491 | docs: update cache.mdx to include HF_DATASETS_CACHE documentation | closed | true | 2025-03-30T20:35:03Z | 2025-03-30T20:36:40Z | 2025-03-30T20:36:40Z | Harry-Yang0518 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7491 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7491 | true | [
"Already included HF_DATASETS_CACHE"
] | |
2,958,826,222 | 7,490 | (refactor) remove redundant logic in _check_valid_index_key | open | This PR contributes a minor refactor, in a small function in `src/datasets/formatting/formatting.py`. No change in logic.
In the original code, there are separate if-conditionals for `isinstance(key, range)` and `isinstance(key, Iterable)`, with essentially the same logic.
This PR combines these two using a sin... | true | 2025-03-30T11:45:42Z | 2025-03-30T11:50:22Z | null | suzyahyah | NONE | https://github.com/huggingface/datasets/pull/7490 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7490 | true | [] |
2,958,204,763 | 7,489 | fix: loading of datasets from Disk(#7373) | open | Fixes dataset loading from disk by ensuring that memory maps and streams are properly closed.
For more details, see https://github.com/huggingface/datasets/issues/7373. | true | 2025-03-29T16:22:58Z | 2025-04-24T16:36:36Z | null | sam-hey | NONE | https://github.com/huggingface/datasets/pull/7489 | null | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7489 | true | [
"@nepfaff Could you confirm if this fixes the issue for you? I checked Memray, and everything looked good on my end.\r\n\r\nInstall: `pip install git+https://github.com/sam-hey/datasets.git@fix/concatenate_datasets`\r\n",
"Will aim to get to this soon. I don't have a rapid testing pipeline setup but need to wait ... |
2,956,559,358 | 7,488 | Support underscore int read instruction | closed | close https://github.com/huggingface/datasets/issues/7481 | true | 2025-03-28T16:01:15Z | 2025-03-28T16:20:44Z | 2025-03-28T16:20:43Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7488 | 2025-03-28T16:20:43Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7488 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7488). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"you rock, Quentin - thank you!"
] |
2,956,533,448 | 7,487 | Write pdf in map | closed | Fix this error when mapping a PDF dataset
```
pyarrow.lib.ArrowInvalid: Could not convert <pdfplumber.pdf.PDF object at 0x13498ee40> with type PDF: did not recognize Python value type when inferring an Arrow data type
```
and also let map() outputs be lists of images or pdfs | true | 2025-03-28T15:49:25Z | 2025-03-28T17:09:53Z | 2025-03-28T17:09:51Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7487 | 2025-03-28T17:09:51Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7487 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,954,042,179 | 7,486 | `shared_datadir` fixture is missing | closed | ### Describe the bug
Running the tests for the latest release fails due to missing `shared_datadir` fixture.
### Steps to reproduce the bug
Running `pytest` while building a package for Arch Linux leads to these errors:
```
==================================== ERRORS ====================================
_________ E... | true | 2025-03-27T18:17:12Z | 2025-03-27T19:49:11Z | 2025-03-27T19:49:10Z | lahwaacz | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7486 | false | [
"OK I was missing the `pytest-datadir` package. Sorry for the noise!"
] |
2,953,696,519 | 7,485 | set dev version | closed | true | 2025-03-27T16:39:34Z | 2025-03-27T16:41:59Z | 2025-03-27T16:39:42Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7485 | 2025-03-27T16:39:42Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7485 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7485). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,953,677,168 | 7,484 | release: 3.5.0 | closed | true | 2025-03-27T16:33:27Z | 2025-03-27T16:35:44Z | 2025-03-27T16:34:22Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7484 | 2025-03-27T16:34:22Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7484 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7484). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,951,856,468 | 7,483 | Support skip_trying_type | closed | This PR addresses Issue #7472
cc: @lhoestq | true | 2025-03-27T07:07:20Z | 2025-04-29T04:14:57Z | 2025-04-09T09:53:10Z | yoshitomo-matsubara | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7483 | 2025-04-09T09:53:10Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7483 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7483). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Cool ! Can you run `make style` to fix code formatting ?\r\n\r\nI was also thinking of ... |
2,950,890,368 | 7,482 | Implement capability to restore non-nullability in Features | closed | This PR attempts to keep track of non_nullable pyarrow fields when converting a `pa.Schema` to `Features`. At the same time, when outputting the `arrow_schema`, the original non-nullable fields are restored. This allows for more consistent behavior and avoids breaking behavior as illustrated in #7479.
I am by no mea... | true | 2025-03-26T22:16:09Z | 2025-05-15T15:00:59Z | 2025-05-15T15:00:59Z | BramVanroy | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7482 | null | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7482 | true | [
"Interestingly, this does not close #7479. The Features are not correctly maintained when calling `from_dict` with the custom Features.",
"Unfortunately this PR does not fix the reported issue. After more digging:\r\n\r\n- when the dataset is created, nullability information is lost in Features;\r\n- even with th... |
2,950,692,971 | 7,481 | deal with python `10_000` legal number in slice syntax | closed | ### Feature request
```
In [6]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1000]")
In [7]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1_000]")
[dozens of frames skipped]
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py:444, in _s... | true | 2025-03-26T20:10:54Z | 2025-03-28T16:20:44Z | 2025-03-28T16:20:44Z | sfc-gh-sbekman | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7481 | false | [
"should be an easy fix, I opened a PR"
] |
2,950,315,214 | 7,480 | HF_DATASETS_CACHE ignored? | open | ### Describe the bug
I'm struggling to get things to respect HF_DATASETS_CACHE.
Rationale: I'm on a system that uses NFS for homedir, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Instead, it seems to rely mostly on HF_HUB_CACHE.
Current version: 3.2.1dev. In the process... | true | 2025-03-26T17:19:34Z | 2025-04-28T10:16:16Z | null | stephenroller | NONE | null | null | 6 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7480 | false | [
"FWIW, it does eventually write to /tmp/roller/datasets when generating the final version.",
"Hey, I’d love to work on this issue but I am a beginner, can I work it with you?",
"Hi @lhoestq,\nI'd like to look into this issue but I'm still learning. Could you share any quick pointers on the HF_DATASETS_CACHE beh... |
2,950,235,396 | 7,479 | Features.from_arrow_schema is destructive | open | ### Describe the bug
I came across this, perhaps niche, bug where `Features` does not/cannot account for pyarrow's `nullable=False` option in Fields. Interestingly, I found that in regular "flat" fields this does not necessarily lead to conflicts, but when a non-nullable field is in a struct, an incompatibility arises... | true | 2025-03-26T16:46:43Z | 2025-03-26T16:46:58Z | null | BramVanroy | CONTRIBUTOR | null | null | 0 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7479 | false | [] |
2,948,993,461 | 7,478 | update fsspec 2025.3.0 | closed | It appears there have been two releases of fsspec since this dependency was last updated, it would be great if Datasets could be updated so that it didn't hold back the usage of newer fsspec versions in consuming projects.
PR based on https://github.com/huggingface/datasets/pull/7352 | true | 2025-03-26T09:53:05Z | 2025-03-28T19:15:54Z | 2025-03-28T15:51:55Z | peteski22 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7478 | 2025-03-28T15:51:54Z | 2 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7478 | true | [
"Sorry for tagging you @lhoestq but since you merged the linked PR, I wondered if you might be able to help me get this triaged so it can be reviewed/rejected etc. 🙏🏼 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7478). All of your documentation changes will be reflec... |
2,947,169,460 | 7,477 | What is the canonical way to compress a Dataset? | open | Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset?
Parquet would be the obvious answer except that there is no native support for writing sharded, parquet datasets concurrently [[1](https:... | true | 2025-03-25T16:47:51Z | 2025-04-03T09:13:11Z | null | eric-czech | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7477 | false | [
"I saw this post by @lhoestq: https://discuss.huggingface.co/t/increased-arrow-table-size-by-factor-of-2/26561/4 suggesting that there is at least some internal code for writing sharded parquet datasets non-concurrently. This appears to be that code: https://github.com/huggingface/datasets/blob/94ccd1b4fada8a92cea... |
2,946,997,924 | 7,476 | Priotitize json | closed | `datasets` should load the JSON data in https://huggingface.co/datasets/facebook/natural_reasoning, not the PDF | true | 2025-03-25T15:44:31Z | 2025-03-25T15:47:00Z | 2025-03-25T15:45:00Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7476 | 2025-03-25T15:45:00Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7476 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7476). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,946,640,570 | 7,475 | IterableDataset's state_dict shard_example_idx is always equal to the number of samples in a shard | closed | ### Describe the bug
I've noticed a strange behaviour with Iterable state_dict: the value of shard_example_idx is always equal to the amount of samples in a shard.
### Steps to reproduce the bug
I am reusing the example from the doc
```python
from datasets import Dataset
ds = Dataset.from_dict({"a": range(6)}).to_... | true | 2025-03-25T13:58:07Z | 2025-05-06T14:22:19Z | 2025-05-06T14:05:07Z | bruno-hays | CONTRIBUTOR | null | null | 8 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7475 | false | [
"Hey, I’d love to work on this issue but I am a beginner, can I work it with you?",
"Hello. I'm sorry but I don't have much time to get in the details for now.\nHave you managed to reproduce the issue with the code provided ?\nIf you want to work on it, you can self-assign and ask @lhoestq for directions",
"Hi ... |
2,945,066,258 | 7,474 | Remove conditions for Python < 3.9 | closed | This PR remove conditions for Python < 3.9. | true | 2025-03-25T03:08:04Z | 2025-04-16T00:11:06Z | 2025-04-15T16:07:55Z | cyyever | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7474 | 2025-04-15T16:07:54Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7474 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7474). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks ! can you run `make style` to fix code formatting ? then we can merge",
"@lhoe... |
2,939,034,643 | 7,473 | Webdataset data format problem | closed | ### Describe the bug
Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1
Error code: FileFormatMismatchBetweenSplitsError
All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted ... | true | 2025-03-21T17:23:52Z | 2025-03-21T19:19:58Z | 2025-03-21T19:19:58Z | edmcman | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7473 | false | [
"I was able to work around it"
] |
2,937,607,272 | 7,472 | Label casting during `map` process is canceled after the `map` process | closed | ### Describe the bug
When preprocessing a multi-label dataset, I introduced a step to convert int labels to float labels as [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) expects float labels and forward function of models in transformers package internally use `BCEWithL... | true | 2025-03-21T07:56:22Z | 2025-04-10T05:11:15Z | 2025-04-10T05:11:14Z | yoshitomo-matsubara | CONTRIBUTOR | null | null | 6 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7472 | false | [
"Hi ! By default `map()` tries to keep the types of each column of the dataset, so here it reuses the int type since all your float values can be converted to integers. But I agree it would be nice to store float values as float values and don't try to reuse the same type in this case.\n\nIn the meantime, you can e... |
2,937,530,069 | 7,471 | Adding argument to `_get_data_files_patterns` | closed | ### Feature request
How about adding if the user already know about the pattern?
https://github.com/huggingface/datasets/blob/a256b85cbc67aa3f0e75d32d6586afc507cf535b/src/datasets/data_files.py#L252
### Motivation
While using this load_dataset people might use 10M of images for the local files.
However, due to sear... | true | 2025-03-21T07:17:53Z | 2025-03-27T12:30:52Z | 2025-03-26T07:26:27Z | SangbumChoi | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7471 | false | [
"Hi ! The pattern can be specified in advance in YAML in the README.md of the dataset :)\n\nFor example\n\n```\n---\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path: \"train/*\"\n - split: test\n path: \"test/*\"\n---\n```\n\nSee the docs at https://huggingface.co/docs/hub/en/dataset... |
2,937,236,323 | 7,470 | Is it possible to shard a single-sharded IterableDataset? | closed | I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not.
Say we have a process, eg. a database query, that can return data in slightly different order each time. So, the initial query needs to be run by a single thread (not to mention running multiple times incurs mo... | true | 2025-03-21T04:33:37Z | 2025-05-09T22:51:46Z | 2025-03-26T06:49:28Z | jonathanasdf | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7470 | false | [
"Hi ! Maybe you can look for an option in your dataset to partition your data based on a deterministic filter ? For example each worker could stream the data based on `row.id % num_shards` or something like that ?",
"So the recommendation is to start out with multiple shards initially and re-sharding after is not... |
2,936,606,080 | 7,469 | Custom split name with the web interface | closed | ### Describe the bug
According the doc here: https://huggingface.co/docs/hub/datasets-file-names-and-splits#custom-split-name
it should infer the split name from the subdir of data or the beginning of the name of the files in data.
When doing this manually through web upload it does not work. it uses "train" as a unique spl... | true | 2025-03-20T20:45:59Z | 2025-03-21T07:20:37Z | 2025-03-21T07:20:37Z | vince62s | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7469 | false | [] |
2,934,094,103 | 7,468 | function `load_dataset` can't solve folder path with regex characters like "[]" | open | ### Describe the bug
When using the `load_dataset` function with a folder path containing regex special characters (such as "[]"), the issue occurs due to how the path is handled in the `resolve_pattern` function. This function passes the unprocessed path directly to `AbstractFileSystem.glob`, which supports regular e... | true | 2025-03-20T05:21:59Z | 2025-03-25T10:18:12Z | null | Hpeox | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7468 | false | [
"Hi ! Have you tried escaping the glob special characters `[` and `]` ?\n\nbtw note that`AbstractFileSystem.glob` doesn't support regex, instead it supports glob patterns as in the python library [glob](https://docs.python.org/3/library/glob.html)\n"
] |
2,930,067,107 | 7,467 | load_dataset with streaming hangs on parquet datasets | open | ### Describe the bug
When I try to load a dataset with parquet files (e.g. "bigcode/the-stack") the dataset loads, but python interpreter can't exit and hangs
### Steps to reproduce the bug
```python3
import datasets
print('Start')
dataset = datasets.load_dataset("bigcode/the-stack", data_dir="data/yaml", streaming... | true | 2025-03-18T23:33:54Z | 2025-03-25T10:28:04Z | null | The0nix | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7467 | false | [
"Hi ! The issue comes from `pyarrow`, I reported it here: https://github.com/apache/arrow/issues/45214 (feel free to comment / thumb up).\n\nAlternatively we can try to find something else than `ParquetFileFragment.to_batches()` to iterate on Parquet data and keep the option the pass `filters=`..."
] |
2,928,661,327 | 7,466 | Fix local pdf loading | closed | fix this error when accessing a local pdf
```
File ~/.pyenv/versions/3.12.2/envs/hf-datasets/lib/python3.12/site-packages/pdfminer/psparser.py:220, in PSBaseParser.seek(self, pos)
218 """Seeks the parser to the given position."""
219 log.debug("seek: %r", pos)
--> 220 self.fp.seek(pos)
221 # reset t... | true | 2025-03-18T14:09:06Z | 2025-03-18T14:11:52Z | 2025-03-18T14:09:21Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7466 | 2025-03-18T14:09:21Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7466 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7466). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,926,478,838 | 7,464 | Minor fix for metadata files in extension counter | closed | true | 2025-03-17T21:57:11Z | 2025-03-18T15:21:43Z | 2025-03-18T15:21:41Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7464 | 2025-03-18T15:21:41Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7464 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7464). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,925,924,452 | 7,463 | Adds EXR format to store depth images in float32 | open | This PR adds the EXR feature to store depth images (or can be normals, etc) in float32.
It relies on [openexr_numpy](https://github.com/martinResearch/openexr_numpy/tree/main) to manipulate EXR images. | true | 2025-03-17T17:42:40Z | 2025-04-02T12:33:39Z | null | ducha-aiki | NONE | https://github.com/huggingface/datasets/pull/7463 | null | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7463 | true | [
"Hi ! I'm wondering if this shouldn't be an `Image()` type and decoded as a `PIL.Image` ?\r\n\r\nThis would make it easier to integrate with the rest of the HF ecosystem, and you could still get a numpy array using `ds = ds.with_format(\"numpy\")` which sets all the images to be formatted as numpy arrays",
... |
2,925,612,945 | 7,462 | set dev version | closed | true | 2025-03-17T16:00:53Z | 2025-03-17T16:03:31Z | 2025-03-17T16:01:08Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7462 | 2025-03-17T16:01:08Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7462 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7462). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,925,608,123 | 7,461 | List of images behave differently on IterableDataset and Dataset | closed | ### Describe the bug
This code:
```python
def train_iterable_gen():
images = np.array(load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg").resize((128, 128)))
yield {
"images": np.expand_dims(images, axis=0),
"messages": [
... | true | 2025-03-17T15:59:23Z | 2025-03-18T08:57:17Z | 2025-03-18T08:57:16Z | FredrikNoren | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7461 | false | [
"Hi ! Can you try with `datasets` ^3.4 released recently ? on my side it works with IterableDataset on the recent version :)\n\n```python\nIn [20]: def train_iterable_gen():\n ...: images = np.array(load_image(\"https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg\")... |
2,925,605,865 | 7,460 | release: 3.4.1 | closed | true | 2025-03-17T15:58:31Z | 2025-03-17T16:01:14Z | 2025-03-17T15:59:19Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7460 | 2025-03-17T15:59:19Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7460 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7460). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,925,491,766 | 7,459 | Fix data_files filtering | closed | close https://github.com/huggingface/datasets/issues/7458 | true | 2025-03-17T15:20:21Z | 2025-03-17T15:25:56Z | 2025-03-17T15:25:54Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7459 | 2025-03-17T15:25:53Z | 1 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7459 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7459). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,925,403,528 | 7,458 | Loading the `laion/filtered-wit` dataset in streaming mode fails on v3.4.0 | closed | ### Describe the bug
Loading https://huggingface.co/datasets/laion/filtered-wit in streaming mode fails after update to `datasets==3.4.0`. The dataset loads fine on v3.3.2.
### Steps to reproduce the bug
Steps to reproduce:
```
pip install datasets==3.4.0
python -c "from datasets import load_dataset; load_dataset('l... | true | 2025-03-17T14:54:02Z | 2025-03-17T16:02:04Z | 2025-03-17T15:25:55Z | nikita-savelyevv | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7458 | false | [
"thanks for reporting, I released 3.4.1 with a fix"
] |
2,924,886,467 | 7,457 | Document the HF_DATASETS_CACHE env variable | closed | ### Feature request
Hello,
I have a use case where my team is sharing models and datasets in a shared directory to avoid duplication.
I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mentions the `HF_HOME` environment variable but never the `HF_DATASETS_CACHE`...
"enhancement"
] | https://github.com/huggingface/datasets/issues/7457 | false | [
"Strongly agree with this; in addition, I am also struggling to change the cache location, similar to other issues (since I changed the environment variables).\nhttps://github.com/huggingface/datasets/issues/6886",
"`HF_DATASETS_CACHE` should be documented there indeed, feel free to open a PR :) ",
"Hey, I’d love... |
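A minimal illustration of the request above — pointing the datasets cache at a shared team directory via `HF_DATASETS_CACHE`. The path is illustrative, and the variable must be set before `datasets` is imported, since the library reads it at import time:

```python
import os

# Set the cache location before importing `datasets`; "/shared/hf/datasets"
# is an illustrative path for a team-wide shared directory.
os.environ["HF_DATASETS_CACHE"] = "/shared/hf/datasets"

# `datasets` reads this variable when it is imported to decide where
# downloaded and processed datasets are cached.
cache_dir = os.environ["HF_DATASETS_CACHE"]
```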
2,922,676,278 | 7,456 | .add_faiss_index and .add_elasticsearch_index returns ImportError at Google Colab | open | ### Describe the bug
At Google Colab
```!pip install faiss-cpu``` works
```import faiss``` no error
but
```embeddings_dataset.add_faiss_index(column='embeddings')```
returns
```
[/usr/local/lib/python3.11/dist-packages/datasets/search.py](https://localhost:8080/#) in init(self, device, string_factory, metric_type, cus... | true | 2025-03-16T00:51:49Z | 2025-03-17T15:57:19Z | null | MapleBloom | NONE | null | null | 6 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7456 | false | [
"I can fix this.\nIt's mainly because faiss-gpu requires python<=3.10 but the default python version in colab is 3.11. We just have to downgrade the CPython version to 3.10 and it should work fine.\n",
"I think I just had no chance to meet with faiss-cpu.\nIt could be import problem? \n_has_faiss gets its va... |
2,921,933,250 | 7,455 | Problems with local dataset after upgrade from 3.3.2 to 3.4.0 | open | ### Describe the bug
After yesterday's upgrade from datasets 3.3.2 to 3.4.0, I was no longer able to open a locally saved dataset that had been created with an older datasets version.
The traceback is
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/arrow/... | true | 2025-03-15T09:22:50Z | 2025-03-17T16:20:43Z | null | andjoer | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7455 | false | [
"Hi ! I just released 3.4.1 with a fix, let me know if it's working now !"
] |
2,920,760,793 | 7,454 | set dev version | closed | true | 2025-03-14T16:48:19Z | 2025-03-14T16:50:31Z | 2025-03-14T16:48:28Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7454 | 2025-03-14T16:48:28Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7454 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7454). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,920,719,503 | 7,453 | release: 3.4.0 | closed | true | 2025-03-14T16:30:45Z | 2025-03-14T16:38:10Z | 2025-03-14T16:38:08Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7453 | 2025-03-14T16:38:08Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7453 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7453). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,920,354,783 | 7,452 | minor docs changes | closed | before the release | true | 2025-03-14T14:14:04Z | 2025-03-14T14:16:38Z | 2025-03-14T14:14:20Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7452 | 2025-03-14T14:14:20Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7452 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7452). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,919,835,663 | 7,451 | Fix resuming after `ds.set_epoch(new_epoch)` | closed | close https://github.com/huggingface/datasets/issues/7447 | true | 2025-03-14T10:31:25Z | 2025-03-14T10:50:11Z | 2025-03-14T10:50:09Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7451 | 2025-03-14T10:50:09Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7451 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7451). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,916,681,414 | 7,450 | Add IterableDataset.decode with multithreading | closed | Useful for dataset streaming for multimodal datasets, and especially for lerobot.
It speeds up streaming up to 20 times.
When decoding is enabled (default), media types are decoded:
* audio -> dict of "array" and "sampling_rate" and "path"
* image -> PIL.Image
* video -> torchvision.io.VideoReader
You can e... | true | 2025-03-13T10:41:35Z | 2025-03-14T10:35:37Z | 2025-03-14T10:35:35Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7450 | 2025-03-14T10:35:35Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7450 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7450). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
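The PR above parallelizes media decoding with threads. A rough, illustrative sketch of the idea — the helper name and structure below are hypothetical, not the actual `datasets` internals:

```python
from concurrent.futures import ThreadPoolExecutor

def decode_with_threads(examples, decode_fn, num_threads=8):
    # ThreadPoolExecutor.map preserves input order, so the stream order
    # is unchanged even though decoding happens concurrently.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        yield from pool.map(decode_fn, examples)

# A stand-in for an I/O-bound decode step (e.g. opening an image file);
# real decoding would turn raw bytes into a PIL.Image or audio array.
decoded = list(decode_with_threads(range(5), lambda x: x * 2, num_threads=4))
```

Because `map` keeps ordering, this composes cleanly with resumable streaming.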
2,916,235,092 | 7,449 | Cannot load data with different schemas from different parquet files | closed | ### Describe the bug
Cannot load samples with optional fields from different files. The schema cannot be correctly derived.
### Steps to reproduce the bug
When I place two samples with an optional field `some_extra_field` within a single parquet file, it can be loaded via `load_dataset`.
```python
import pandas as ... | true | 2025-03-13T08:14:49Z | 2025-03-17T07:27:48Z | 2025-03-17T07:27:46Z | li-plus | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7449 | false | [
"Hi ! `load_dataset` expects all the data_files to have the same schema.\n\nMaybe you can try enforcing certain `features` using:\n\n```python\nfeatures = Features({\"conversations\": {'content': Value('string'), 'role': Value('string',)}})\nds = load_dataset(..., features=features)\n```",
"Thanks! It works if I ... |
2,916,025,762 | 7,448 | `datasets.disable_caching` doesn't work | open | When I use `Dataset.from_generator(my_gen)` to load my dataset, it simply skips my changes to the generator function.
I tried `datasets.disable_caching`, but it doesn't work! | true | 2025-03-13T06:40:12Z | 2025-03-22T04:37:07Z | null | UCC-team | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7448 | false | [
"cc",
"Yes I have the same issue. It's a confusingly named function. See [here](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L115-L130)\n\n```\n...\nIf disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.\n More precisely... |
2,915,233,248 | 7,447 | Epochs shortened after resuming mid-epoch with Iterable dataset+StatefulDataloader(persistent_workers=True) | closed | ### Describe the bug
When `torchdata.stateful_dataloader.StatefulDataloader(persistent_workers=True)` the epochs after resuming only iterate through the examples that were left in the epoch when the training was interrupted. For example, in the script below training is interrupted on step 124 (epoch 1) when 3 batches ... | true | 2025-03-12T21:41:05Z | 2025-03-14T17:26:59Z | 2025-03-14T10:50:10Z | dhruvdcoder | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7447 | false | [
"Thanks for reporting ! Maybe we should store the epoch in the state_dict, and then when the dataset is iterated on again after setting a new epoch it should restart from scratch instead of resuming ? wdyt ?",
"But why does this only happen when `persistent_workers=True`? I would expect it to work correctly even ... |
2,913,050,552 | 7,446 | pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int' | open | ### Describe the bug
A dict with its keys are all str but get following error
```python
from collections import Counter
import datasets

test_data = [{'input_ids': [1, 2, 3], 'labels': [[Counter({2: 1})]]}]
dataset = datasets.Dataset.from_list(test_data)
```
```bash
pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int'
```
### Steps to reproduce the... | true | 2025-03-12T07:48:37Z | 2025-03-12T07:48:37Z | null | rangehow | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7446 | false | [] |
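Arrow requires struct/map keys to be strings, which is why the `Counter` with integer keys above fails even though the top-level dict keys are all `str`. One hedged workaround is to stringify nested keys before building the dataset — the helper below is illustrative, not part of `datasets`:

```python
from collections import Counter

def stringify_keys(obj):
    # Recursively convert dict keys (including Counter keys, since Counter
    # is a dict subclass) to str so the nested structure is Arrow-compatible.
    if isinstance(obj, dict):
        return {str(k): stringify_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [stringify_keys(v) for v in obj]
    return obj

test_data = [{"input_ids": [1, 2, 3], "labels": [[Counter({2: 1})]]}]
cleaned = [stringify_keys(row) for row in test_data]
```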
2,911,507,923 | 7,445 | Fix small bugs with async map | closed | helpful for the next PR to enable parallel image/audio/video decoding and make multimodal datasets go brr (e.g. for lerobot)
- fix with_indices
- fix resuming with save_state_dict() / load_state_dict() - omg that wasn't easy
- remove unnecessary decoding in map() to enable parallelism in FormattedExampleIterable l... | true | 2025-03-11T18:30:57Z | 2025-03-13T10:38:03Z | 2025-03-13T10:37:58Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7445 | 2025-03-13T10:37:58Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7445 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7445). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
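The PR above fixes resuming via `save_state_dict()` / `load_state_dict()`. A toy sketch of that resume pattern — purely illustrative, as the real `IterableDataset` state tracks shards, shuffling, and epochs:

```python
class ResumableRange:
    # Minimal iterator exposing the state_dict / load_state_dict
    # checkpointing pattern used for resumable streaming.
    def __init__(self, n):
        self.n = n
        self.pos = 0

    def __iter__(self):
        while self.pos < self.n:
            self.pos += 1
            yield self.pos - 1

    def state_dict(self):
        return {"pos": self.pos}

    def load_state_dict(self, state):
        self.pos = state["pos"]

it = ResumableRange(5)
gen = iter(it)
consumed = [next(gen) for _ in range(2)]  # consume the first two examples
state = it.state_dict()                   # checkpoint mid-stream

it2 = ResumableRange(5)
it2.load_state_dict(state)                # resume in a fresh instance
resumed = list(it2)                       # yields only the remaining examples
```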
2,911,202,445 | 7,444 | Excessive warnings when resuming an IterableDataset+buffered shuffle+DDP. | open | ### Describe the bug
I have a large dataset that I sharded into 1024 shards and saved to disk during pre-processing. During training, I load the dataset using load_from_disk(), convert it into an iterable dataset, shuffle it, and split the shards across the DDP nodes using the recommended method.
However, when ... | true | 2025-03-11T16:34:39Z | 2025-05-13T09:41:03Z | null | dhruvdcoder | NONE | null | null | 1 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7444 | false | [
"I had a similar issue when loading the saved iterable dataset state to fast-forward to the mid-train location before resuming. This happened when I shuffled a concatenated dataset. A `iterable_data_state_dict.json` file was saved during checkpointing in the Trainer with:\n```\ndef _save_rng_state(self, output_dir)... |
2,908,585,656 | 7,443 | index error when num_shards > len(dataset) | open | In `ds.push_to_hub()` and `ds.save_to_disk()`, `num_shards` must be smaller than or equal to the number of rows in the dataset, but currently this is not checked anywhere inside these functions. Attempting to invoke these functions with `num_shards > len(dataset)` should raise an informative `ValueError`.
I frequently... | true | 2025-03-10T22:40:59Z | 2025-03-10T23:43:08Z | null | eminorhan | NONE | null | null | 1 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7443 | false | [
"Actually, looking at the code a bit more carefully, maybe an even better solution is to explicitly set `num_shards=len(self)` somewhere inside both `push_to_hub()` and `save_to_disk()` when these functions are invoked with `num_shards > len(dataset)`."
] |
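The informative error the issue asks for could look something like this — a hypothetical guard, not the actual `datasets` code:

```python
def validate_num_shards(num_rows, num_shards):
    # Hypothetical check mirroring what the issue proposes for
    # push_to_hub() / save_to_disk(): each shard needs at least one row.
    if num_shards > num_rows:
        raise ValueError(
            f"num_shards ({num_shards}) must be <= the number of rows "
            f"({num_rows}); consider num_shards = min(num_shards, num_rows)."
        )
    return num_shards

ok = validate_num_shards(1024, 64)  # a valid configuration passes through
```

The comment at the end of the row suggests the alternative: silently clamping `num_shards` to `len(self)` instead of raising.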