| id | number | title | state | body | is_pull_request | created_at | updated_at | closed_at | user_login | author_association | pr_url | pr_merged_at | comments_count | reactions_total | reactions_plus1 | reactions_heart | draft | locked | labels | html_url | is_pr_url | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,743,437,260 | 7,335 | Too many open files: '/root/.cache/huggingface/token' | open | ### Describe the bug
I ran this code:
```
from datasets import load_dataset
dataset = load_dataset("common-canvas/commoncatalog-cc-by", cache_dir="/datadrive/datasets/cc", num_proc=1000)
```
And got this error.
Before it was some other file though (like something...incomplete)
Running
```
ulimit -n 8192
... | true | 2024-12-16T21:30:24Z | 2024-12-16T21:30:24Z | null | kopyl | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7335 | false | [] |
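The `ulimit -n 8192` workaround quoted above raises the shell's open-file limit. The same limit can be raised from inside the Python process with the stdlib `resource` module (POSIX only) — a minimal sketch, not part of `datasets` itself:

```python
import resource

def raise_open_file_limit(target: int) -> int:
    """Raise the soft RLIMIT_NOFILE toward `target`, never above the hard limit."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    new_soft = target if hard == resource.RLIM_INFINITY else min(target, hard)
    if new_soft > soft:
        resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
    return new_soft

# e.g. before a load_dataset(..., num_proc=...) call that opens many files:
effective = raise_open_file_limit(8192)
```

Note that the soft limit can only be raised up to the hard limit without extra privileges, so a very large `num_proc` may still need the hard limit raised by an administrator.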
2,740,266,503 | 7,334 | TypeError: Value.__init__() missing 1 required positional argument: 'dtype' | open | ### Describe the bug
ds = load_dataset(
"./xxx.py",
name="default",
split="train",
)
The `datasets` library does not support debugging locally anymore...
### Steps to reproduce the bug
```
from datasets import load_dataset
ds = load_dataset(
"./repo.py",
name="default",
split="train",
)
... | true | 2024-12-15T04:08:46Z | 2025-04-14T10:25:12Z | null | ghost | NONE | null | null | 2 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7334 | false | [
"same error \n```\ndata = load_dataset('/opt/deepseek_R1_finetune/hf_datasets/openai/gsm8k', 'main')[split] \n```",
"> same error\n> \n> ```\n> data = load_dataset('/opt/deepseek_R1_finetune/hf_datasets/openai/gsm8k', 'main')[split] \n> ```\n\nhttps://github.com/huggingface/open-r1/issues/204 this help me"
] |
2,738,626,593 | 7,328 | Fix typo in arrow_dataset | closed | true | 2024-12-13T15:17:09Z | 2024-12-19T17:10:27Z | 2024-12-19T17:10:25Z | AndreaFrancis | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7328 | 2024-12-19T17:10:25Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7328 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7328). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,738,514,909 | 7,327 | .map() is not caching and ram goes OOM | open | ### Describe the bug
I'm trying to run a fairly simple map that converts a dataset into numpy arrays. However, it just piles up in memory and doesn't write to disk. I've tried multiple cache techniques, such as specifying the cache dir, setting max mem, +++ but none seem to work. What am I missing here?
### Steps to... | true | 2024-12-13T14:22:56Z | 2025-02-10T10:42:38Z | null | simeneide | NONE | null | null | 1 | 2 | 2 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7327 | false | [
"I have the same issue - any update on this?"
] |
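Independent of the caching question, the memory behaviour in `.map` comes down to the difference between materializing every transformed row at once and streaming rows through in batches. A pure-Python sketch of the batched, lazy pattern (an illustration of the idea, not the `datasets` internals):

```python
def batched(iterable, batch_size):
    """Yield lists of up to `batch_size` items, lazily."""
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

# Transform lazily: only one batch is alive in memory at a time.
squares = (x * x for batch in batched(range(10), 4) for x in batch)
result = list(squares)
```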
2,738,188,902 | 7,326 | Remove upper bound for fsspec | open | ### Describe the bug
As also raised by @cyyever in https://github.com/huggingface/datasets/pull/7296 and @NeilGirdhar in https://github.com/huggingface/datasets/commit/d5468836fe94e8be1ae093397dd43d4a2503b926#commitcomment-140952162 , `datasets` has a problematic version constraint on `fsspec`.
In our case this c... | true | 2024-12-13T11:35:12Z | 2025-01-03T15:34:37Z | null | fellhorn | NONE | null | null | 1 | 3 | 0 | 3 | null | false | [] | https://github.com/huggingface/datasets/issues/7326 | false | [
"Unfortunately `fsspec` versioning allows breaking changes across versions and there is no way we can keep it without constraints at the moment. It already broke `datasets` once in the past. Maybe one day once `fsspec` decides on a stable and future-proof API, but I don't think this will happen anytime soon\r\n\r\nedi... |
2,736,618,054 | 7,325 | Introduce pdf support (#7318) | closed | First implementation of the Pdf feature to support pdfs (#7318) . Using [pdfplumber](https://github.com/jsvine/pdfplumber?tab=readme-ov-file#python-library) as the default library to work with pdfs.
@lhoestq and @AndreaFrancis | true | 2024-12-12T18:31:18Z | 2025-03-18T14:00:36Z | 2025-03-18T14:00:36Z | yabramuvdi | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7325 | 2025-03-18T14:00:36Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7325 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7325). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi @AndreaFrancis and @lhoestq ! Thanks for looking at the code and for all the changes... |
2,736,008,698 | 7,323 | Unexpected cache behaviour using load_dataset | closed | ### Describe the bug
Following the [Cache management](https://huggingface.co/docs/datasets/en/cache) docs and the previous behaviour from datasets version 2.18.0, one is able to change the cache directory. Previously, all downloaded/extracted/etc. files were found in this folder. As I have recently updated to the latest v...
"Hi ! Since `datasets` 3.x, the `datasets` specific files are in `cache_dir=` and the HF files are cached using `huggingface_hub` and you can set its cache directory using the `HF_HOME` environment variable.\r\n\r\nThey are independent, for example you can delete the Hub cache (containing downloaded files) but sti... |
2,732,254,868 | 7,322 | ArrowInvalid: JSON parse error: Column() changed from object to array in row 0 | open | ### Describe the bug
Encountering an error while loading the ```liuhaotian/LLaVA-Instruct-150K dataset```.
### Steps to reproduce the bug
```
from datasets import load_dataset
fw =load_dataset("liuhaotian/LLaVA-Instruct-150K")
```
Error:
```
ArrowInvalid Traceback (most recen... | true | 2024-12-11T08:41:39Z | 2025-01-03T15:48:55Z | null | Polarisamoon | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7322 | false | [
"Hi ! `datasets` uses Arrow under the hood which expects each column and array to have fixed types that don't change across rows of a dataset, which is why we get this error. This dataset in particular doesn't have a format compatible with Arrow unfortunately. Don't hesitate to open a discussion or PR on HF to fix ... |
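For context on this class of error: Arrow requires each column to keep one type across all rows, so a JSON field that is an object in one row and an array in another cannot be loaded. A stdlib sketch (with a hypothetical `conversations` field) that spots such inconsistencies before loading:

```python
import json

def inconsistent_fields(records):
    """Return field names whose JSON type differs across records."""
    seen = {}      # field name -> name of the first type seen
    bad = set()
    for rec in records:
        for key, value in rec.items():
            t = type(value).__name__
            if key in seen and seen[key] != t:
                bad.add(key)
            seen.setdefault(key, t)
    return bad

rows = [json.loads(line) for line in [
    '{"id": 1, "conversations": {"from": "human"}}',
    '{"id": 2, "conversations": [{"from": "human"}]}',  # object -> array: breaks Arrow
]]
problems = inconsistent_fields(rows)
```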
2,731,626,760 | 7,321 | ImportError: cannot import name 'set_caching_enabled' from 'datasets' | open | ### Describe the bug
Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "... | true | 2024-12-11T01:58:46Z | 2024-12-11T13:32:15Z | null | sankexin | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7321 | false | [
"pip install datasets==2.18.0",
"Hi ! I think you need to update axolotl"
] |
2,731,112,100 | 7,320 | ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label'] | closed | ### Describe the bug
I am trying to create a PEFT model from DISTILBERT model, and run a training loop. However, the trainer.train() is giving me this error: ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']
Here is my code:
### St... | true | 2024-12-10T20:23:11Z | 2024-12-10T23:22:23Z | 2024-12-10T23:22:23Z | atrompeterog | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7320 | false | [
"Now i have other error"
] |
2,730,679,980 | 7,319 | set dev version | closed | true | 2024-12-10T17:01:34Z | 2024-12-10T17:04:04Z | 2024-12-10T17:01:45Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7319 | 2024-12-10T17:01:45Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7319 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7319). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,730,676,278 | 7,318 | Introduce support for PDFs | open | ### Feature request
The idea (discussed in the Discord server with @lhoestq ) is to have a Pdf type like Image/Audio/Video. For example [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and contains how to decode a video file encoded in a dictionary like {"pat... | true | 2024-12-10T16:59:48Z | 2024-12-12T18:38:13Z | null | yabramuvdi | CONTRIBUTOR | null | null | 6 | 1 | 0 | 1 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7318 | false | [
"#self-assign",
"Awesome ! Let me know if you have any question or if I can help :)\r\n\r\ncc @AndreaFrancis as well for viz",
"Other candidates libraries for the Pdf type: PyMuPDF pypdf and pdfplumber\r\n\r\nEDIT: Pymupdf looks like a good choice when it comes to maturity + performance + versatility BUT the li... |
2,730,661,237 | 7,317 | Release: 3.2.0 | closed | true | 2024-12-10T16:53:20Z | 2024-12-10T16:56:58Z | 2024-12-10T16:56:56Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7317 | 2024-12-10T16:56:56Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7317 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7317). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,730,196,085 | 7,316 | More docs to from_dict to mention that the result lives in RAM | closed | following discussions at https://discuss.huggingface.co/t/how-to-load-this-simple-audio-data-set-and-use-dataset-map-without-memory-issues/17722/14 | true | 2024-12-10T13:56:01Z | 2024-12-10T13:58:32Z | 2024-12-10T13:57:02Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7316 | 2024-12-10T13:57:02Z | 1 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/7316 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7316). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,727,502,630 | 7,314 | Resolved for empty datafiles | open | Resolved for Issue#6152 | true | 2024-12-09T15:47:22Z | 2024-12-27T18:20:21Z | null | sahillihas | NONE | https://github.com/huggingface/datasets/pull/7314 | null | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7314 | true | [
"Closes #6152 ",
"@mariosasko I hope this resolves #6152 "
] |
2,726,240,634 | 7,313 | Cannot create a dataset with relative audio path | open | ### Describe the bug
Hello! I want to create a dataset of parquet files, with audios stored as separate .mp3 files. However, it says "No such file or directory" (see the reproducing code).
### Steps to reproduce the bug
Creating a dataset
```
from pathlib import Path
from datasets import Dataset, load_datas... | true | 2024-12-09T07:34:20Z | 2025-04-19T07:13:08Z | null | sedol1339 | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7313 | false | [
"Hello ! when you `cast_column` you need the paths to be absolute paths or relative paths to your working directory, not the original dataset directory.\r\n\r\nThough I'd recommend structuring your dataset as an AudioFolder which automatically links a metadata.jsonl or csv to the audio files via relative paths **wi... |
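Following the advice in that comment, one way to avoid the error is to rewrite dataset-relative paths into absolute ones before calling `cast_column`. A small `pathlib` sketch with hypothetical paths and directory names:

```python
from pathlib import Path

def absolutize(paths, root):
    """Resolve dataset-relative audio paths against the dataset root directory."""
    root = Path(root)
    return [
        str(p) if Path(p).is_absolute() else str((root / p).resolve())
        for p in paths
    ]

# Hypothetical layout: the parquet file stores paths like "audio/clip_0.mp3".
resolved = absolutize(["audio/clip_0.mp3", "/abs/clip_1.mp3"], "/datasets/my_audio")
```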
2,725,103,094 | 7,312 | [Audio Features - DO NOT MERGE] PoC for adding an offset+sliced reading to audio file. | open | This is a proof of concept for #7310 . The idea is to enable the access to others column of the dataset row when loading an audio file into a table. This is to allow sliced reading. As stated in the issue, many people have very long audio files and use start and stop slicing in this audio file.
Right now, this code ... | true | 2024-12-08T10:27:31Z | 2024-12-08T10:27:31Z | null | TParcollet | NONE | https://github.com/huggingface/datasets/pull/7312 | null | 0 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/7312 | true | [] |
2,725,002,630 | 7,311 | How to get the original dataset name with username? | open | ### Feature request
The issue is related to Ray Data https://github.com/ray-project/ray/issues/49008, which requires checking whether the dataset is the original one just after `load_dataset` and whether the parquet files are already available on the HF hub.
The solution used now is to get the dataset name, config and split, then `... | true | 2024-12-08T07:18:14Z | 2025-01-09T10:48:02Z | null | npuichigo | CONTRIBUTOR | null | null | 2 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7311 | false | [
"Hi ! why not pass the dataset id to Ray and let it check the parquet files ? Or pass the parquet files lists directly ?",
"I'm not sure why ray design an API like this to accept a `Dataset` object, so they need to verify the `Dataset` is the original one and use the `DatasetInfo` to query the huggingface hub. I'... |
2,724,830,603 | 7,310 | Enable the Audio Feature to decode / read with an offset + duration | open | ### Feature request
For most large speech dataset, we do not wish to generate hundreds of millions of small audio samples. Instead, it is quite common to provide larger audio files with frame offset (soundfile start and stop arguments). We should be able to pass these arguments to Audio() (column ID corresponding in t... | true | 2024-12-07T22:01:44Z | 2024-12-09T21:09:46Z | null | TParcollet | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7310 | false | [
"Hi ! What about having audio + start + duration columns and enable something like this ?\r\n\r\n```python\r\nfor example in ds:\r\n array = example[\"audio\"].read(start=example[\"start\"], frames=example[\"duration\"])\r\n```",
"Hi @lhoestq, this would work with a file-based dataset but would be terrible for... |
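The offset+duration idea boils down to seeking into the file and reading a bounded number of frames instead of decoding the whole file. A stdlib sketch of that access pattern on raw bytes (a real implementation would go through soundfile's `start`/`stop` arguments rather than raw reads):

```python
import io

def read_slice(f, start, length, frame_size=2):
    """Read `length` frames starting at frame `start` (frame_size bytes per frame)."""
    f.seek(start * frame_size)
    return f.read(length * frame_size)

buf = io.BytesIO(bytes(range(20)))   # stand-in for a long audio file
chunk = read_slice(buf, start=3, length=4)
```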
2,729,738,963 | 7,315 | Allow manual configuration of Dataset Viewer for datasets not created with the `datasets` library | open | #### **Problem Description**
Currently, the Hugging Face Dataset Viewer automatically interprets dataset fields for datasets created with the `datasets` library. However, for datasets pushed directly via `git`, the Viewer:
- Defaults to generic columns like `label` with `null` values if no explicit mapping is provide... | true | 2024-12-07T16:37:12Z | 2024-12-11T11:05:22Z | null | diarray-hub | NONE | null | null | 13 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7315 | false | [
"Hi @diarray-hub , thanks for opening the issue :) Let me ping @lhoestq and @severo from the dataset viewer team :hugs: ",
"amazing :)",
"Hi ! why not modify the manifest.json file directly ? this way users see in the viewer the dataset as is instead which makes it easier to use using e.g. the `datasets` librar... |
2,723,636,931 | 7,309 | Faster parquet streaming + filters with predicate pushdown | closed | ParquetFragment.to_batches uses a buffered stream to read parquet data, which makes streaming faster (x2 on my laptop).
I also added the `filters` config parameter to support filtering with predicate pushdown, e.g.
```python
from datasets import load_dataset
filters = [('problem_source', '==', 'math')]
ds = ... | true | 2024-12-06T18:01:54Z | 2024-12-07T23:32:30Z | 2024-12-07T23:32:28Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7309 | 2024-12-07T23:32:28Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7309 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7309). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,720,244,889 | 7,307 | refactor: remove unnecessary else | open | true | 2024-12-05T12:11:09Z | 2024-12-06T15:11:33Z | null | HarikrishnanBalagopal | NONE | https://github.com/huggingface/datasets/pull/7307 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7307 | true | [] | |
2,719,807,464 | 7,306 | Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values). | open | ### Describe the bug
When creating a dataset from a list of datapoints, information about the individual items is lost.
Specifically, when creating a dataset from a list of datapoints (from another dataset), either the datatype is lost or the values are lost. See the examples below.
-> What is the best way to create... | true | 2024-12-05T09:07:53Z | 2024-12-05T09:09:38Z | null | ai-nikolai | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7306 | false | [] |
2,715,907,267 | 7,305 | Build Documentation Test Fails Due to "Bad Credentials" Error | open | ### Describe the bug
The `Build documentation / build / build_main_documentation (push)` job is consistently failing during the "Syncing repository" step. The error occurs when attempting to determine the default branch name, resulting in "Bad credentials" errors.
### Steps to reproduce the bug
1. Trigger the `build... | true | 2024-12-03T20:22:54Z | 2025-01-08T22:38:14Z | null | ruidazeng | CONTRIBUTOR | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7305 | false | [
"how were you able to fix this please?",
"> how were you able to fix this please?\r\n\r\nI was not able to fix this."
] |
2,715,179,811 | 7,304 | Update iterable_dataset.py | closed | close https://github.com/huggingface/datasets/issues/7297 | true | 2024-12-03T14:25:42Z | 2024-12-03T14:28:10Z | 2024-12-03T14:27:02Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7304 | 2024-12-03T14:27:02Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7304 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7304). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,705,729,696 | 7,303 | DataFilesNotFoundError for datasets LM1B | closed | ### Describe the bug
Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b
### Steps to reproduce the bug
`dataset = datasets.load_dataset('lm1b', split=split)`
### Expected behavior
`Traceback (most recent call last):
File "/home/hml/projects/DeepLearning/Generative_model/Diffusio... | true | 2024-11-29T17:27:45Z | 2024-12-11T13:22:47Z | 2024-12-11T13:22:47Z | hml1996-fight | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7303 | false | [
"Hi ! Can you try with a more recent version of `datasets` ? Also you might need to pass trust_remote_code=True since it's a script based dataset"
] |
2,702,626,386 | 7,302 | Let server decide default repo visibility | closed | Until now, all repos were public by default when created without passing the `private` argument. This meant that passing `private=False` or `private=None` was strictly the same. This is not the case anymore. Enterprise Hub offers organizations to set a default visibility setting for new repos. This is useful for organi... | true | 2024-11-28T16:01:13Z | 2024-11-29T17:00:40Z | 2024-11-29T17:00:38Z | Wauplin | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7302 | 2024-11-29T17:00:38Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7302 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7302). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"No need for a specific version of huggingface_hub to avoid a breaking change no (it's a... |
2,701,813,922 | 7,301 | update load_dataset doctring | closed | - remove canonical dataset name
- remove dataset script logic
- add streaming info
- clearer download and prepare steps | true | 2024-11-28T11:19:20Z | 2024-11-29T10:31:43Z | 2024-11-29T10:31:40Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7301 | 2024-11-29T10:31:40Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7301 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7301). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,701,424,320 | 7,300 | fix: update elasticsearch version | closed | This should fix the `test_py311 (windows latest, deps-latest` errors.
```
=========================== short test summary info ===========================
ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead.
ERROR tests/test_search.py - AttributeE... | true | 2024-11-28T09:14:21Z | 2024-12-03T14:36:56Z | 2024-12-03T14:24:42Z | ruidazeng | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7300 | 2024-12-03T14:24:42Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7300 | true | [
"May I request a review @lhoestq",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7300). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,695,378,251 | 7,299 | Efficient Image Augmentation in Hugging Face Datasets | open | ### Describe the bug
I'm using the Hugging Face datasets library to load images in batches and would like to apply a torchvision transform to fix the inconsistent image sizes in the dataset and apply some on-the-fly image augmentation. The only option I can think of is using the collate_fn, but that seems quite inefficient.
... | true | 2024-11-26T16:50:32Z | 2024-11-26T16:53:53Z | null | fabiozappo | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7299 | false | [] |
2,694,196,968 | 7,298 | loading dataset issue with load_dataset() when training controlnet | open | ### Describe the bug
I'm unable to load my dataset for [controlnet training](https://github.com/huggingface/diffusers/blob/074e12358bc17e7dbe111ea4f62f05dbae8a49d5/examples/controlnet/train_controlnet.py#L606) using load_dataset(). However, load_from_disk() seems to work?
I would appreciate it if someone can explain why ... | true | 2024-11-26T10:50:18Z | 2024-11-26T10:50:18Z | null | sarahahtee | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7298 | false | [] |
2,683,977,430 | 7,297 | wrong return type for `IterableDataset.shard()` | closed | ### Describe the bug
`IterableDataset.shard()` has the wrong typing for its return as `"Dataset"`. It should be `"IterableDataset"`. Makes my IDE unhappy.
### Steps to reproduce the bug
look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)?
### Expected ... | true | 2024-11-22T17:25:46Z | 2024-12-03T14:27:27Z | 2024-12-03T14:27:03Z | ysngshn | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7297 | false | [
"Oops my bad ! thanks for reporting"
] |
2,675,573,974 | 7,296 | Remove upper version limit of fsspec[http] | closed | true | 2024-11-20T11:29:16Z | 2025-03-06T04:47:04Z | 2025-03-06T04:47:01Z | cyyever | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7296 | null | 0 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7296 | true | [] | |
2,672,003,384 | 7,295 | [BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'` | open | ### Describe the bug
Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions.
Analysis of what's happening:
1. `datasets` passes the `client_kw... | true | 2024-11-19T12:23:36Z | 2024-11-19T13:01:53Z | null | casper-hansen | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7295 | false | [] |
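A generic defensive pattern for this kind of incompatibility is to drop keyword arguments the target callable does not accept. A stdlib sketch with a hypothetical client factory — this is an illustration of the pattern, not the actual fsspec/aiohttp fix:

```python
import inspect

def filter_kwargs(func, kwargs):
    """Drop kwargs not accepted by `func` (unless it takes **kwargs)."""
    params = inspect.signature(func).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in params}

def make_client(endpoint_url=None, region=None):   # stand-in for an fsspec client factory
    return (endpoint_url, region)

safe = filter_kwargs(make_client, {"endpoint_url": "http://s3", "requote_redirect_url": False})
client = make_client(**safe)
```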
2,668,663,130 | 7,294 | Remove `aiohttp` from direct dependencies | closed | The dependency is only used for catching an exception from other code. That can be done with an import guard. | true | 2024-11-18T14:00:59Z | 2025-05-07T14:27:18Z | 2025-05-07T14:27:17Z | akx | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7294 | 2025-05-07T14:27:17Z | 0 | 1 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7294 | true | [] |
2,664,592,054 | 7,293 | Updated inconsistent output in documentation examples for `ClassLabel` | closed | fix #7129
@stevhliu | true | 2024-11-16T16:20:57Z | 2024-12-06T11:33:33Z | 2024-12-06T11:32:01Z | sergiopaniego | MEMBER | https://github.com/huggingface/datasets/pull/7293 | 2024-12-06T11:32:01Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7293 | true | [
"Updated! 😄 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7293). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq, can you help with this failing test please? 🙏 "
] |
2,664,250,855 | 7,292 | DataFilesNotFoundError for datasets `OpenMol/PubChemSFT` | closed | ### Describe the bug
Cannot load the dataset https://huggingface.co/datasets/OpenMol/PubChemSFT
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('OpenMol/PubChemSFT')
```
### Expected behavior
```
-----------------------------------------------------------------------... | true | 2024-11-16T11:54:31Z | 2024-11-19T00:53:00Z | 2024-11-19T00:52:59Z | xnuohz | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7292 | false | [
"Hi ! If the dataset owner uses `push_to_hub()` instead of `save_to_disk()` and upload the local files it will fix the issue.\r\nRight now `datasets` sees the train/test/valid pickle files but they are not supported file formats.",
"Alternatively you can load the arrow file instead:\r\n\r\n```python\r\nfrom datas... |
2,662,244,643 | 7,291 | Why return_tensors='pt' doesn't work? | open | ### Describe the bug
I tried to add input_ids to a dataset with map(), and I used return_tensors='pt', but why did I get back a result of type List?

### Steps to reproduce the bug
`",
"> Hi ! `datasets` uses Arrow as storage backend which is agnostic to deep learning frameworks like torch. If ... |
2,657,620,816 | 7,290 | `Dataset.save_to_disk` hangs when using num_proc > 1 | open | ### Describe the bug
Hi, I've encountered a small issue when saving datasets that can lead to saving taking up to multiple hours.
Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than... | true | 2024-11-14T05:25:13Z | 2025-06-20T06:10:26Z | null | JohannesAck | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7290 | false | [
"I've met the same situations.\r\n\r\nHere's my logs:\r\nnum_proc = 64, I stop it early as it cost **too** much time.\r\n```\r\nSaving the dataset (1540/4775 shards): 32%|███▏ | 47752224/147853764 [15:32:54<132:28:34, 209.89 examples/s]\r\nSaving the dataset (1540/4775 shards): 32%|███▏ | 47754224/14785... |
2,648,019,507 | 7,289 | Dataset viewer displays wrong statistics | closed | ### Describe the bug
In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2`, and there are 94 different classes in total, but the viewer says there are 83 values only. This issue only arises in the `train` split. The total number of values is also 94 in the `test`... | true | 2024-11-11T03:29:27Z | 2024-11-13T13:02:25Z | 2024-11-13T13:02:25Z | speedcell4 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7289 | false | [
"i think this issue is more for https://github.com/huggingface/dataset-viewer"
] |
2,647,052,280 | 7,288 | Release v3.1.1 | closed | true | 2024-11-10T09:38:15Z | 2024-11-10T09:38:48Z | 2024-11-10T09:38:48Z | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7288 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7288 | true | [] | |
2,646,958,393 | 7,287 | Support for identifier-based automated split construction | open | ### Feature request
As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co/docs/datasets/en/repository_structure))
It would seem to be pretty useful to also allow splits to be based on ide... | true | 2024-11-10T07:45:19Z | 2024-11-19T14:37:02Z | null | alex-hh | CONTRIBUTOR | null | null | 3 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7287 | false | [
"Hi ! You can already configure the README.md to have multiple sets of splits, e.g.\r\n\r\n```yaml\r\nconfigs:\r\n- config_name: my_first_set_of_split\r\n data_files:\r\n - split: train\r\n path: *.csv\r\n- config_name: my_second_set_of_split\r\n data_files:\r\n - split: train\r\n path: train-*.csv\r\n -... |
2,645,350,151 | 7,286 | Concurrent loading in `load_from_disk` - `num_proc` as a param | closed | ### Feature request
https://github.com/huggingface/datasets/pull/6464 mentions a `num_proc` param when loading a dataset from disk, but I can't find it anywhere in the documentation or code
### Motivation
Make loading large datasets from disk faster
### Your contribution
Happy to contribute if given pointers | true | 2024-11-08T23:21:40Z | 2024-11-09T16:14:37Z | 2024-11-09T16:14:37Z | unography | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7286 | false | [] |
2,644,488,598 | 7,285 | Release v3.1.0 | closed | true | 2024-11-08T16:17:58Z | 2024-11-08T16:18:05Z | 2024-11-08T16:18:05Z | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7285 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7285 | true | [] | |
2,644,302,386 | 7,284 | support for custom feature encoding/decoding | closed | Fix for https://github.com/huggingface/datasets/issues/7220 as suggested in discussion, in preference to #7221
(my only concern would be the effect on type checking with custom feature types that aren't covered by FeatureType?) | true | 2024-11-08T15:04:08Z | 2024-11-21T16:09:47Z | 2024-11-21T16:09:47Z | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7284 | 2024-11-21T16:09:47Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7284 | true | [
"@lhoestq ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7284). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,642,537,708 | 7,283 | Allow for variation in metadata file names as per issue #7123 | open | Allow metadata files to have an identifying preface. Specifically, it will recognize files with `-metadata.csv` or `_metadata.csv` as metadata files for the purposes of the dataset viewer functionality.
Resolves #7123. | true | 2024-11-08T00:44:47Z | 2024-11-08T00:44:47Z | null | egrace479 | NONE | https://github.com/huggingface/datasets/pull/7283 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7283 | true | [] |
2,642,075,491 | 7,282 | Faulty datasets.exceptions.ExpectedMoreSplitsError | open | ### Describe the bug
Trying to download only the 'validation' split of my dataset; instead hit the error `datasets.exceptions.ExpectedMoreSplitsError`.
Appears to be the same undesired behavior as reported in [#6939](https://github.com/huggingface/datasets/issues/6939), but with `data_files`, not `data_dir`.
Her... | true | 2024-11-07T20:15:01Z | 2024-11-07T20:15:42Z | null | meg-huggingface | CONTRIBUTOR | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7282 | false | [] |
2,640,346,339 | 7,281 | File not found error | open | ### Describe the bug
I get a FileNotFoundError:
<img width="944" alt="image" src="https://github.com/user-attachments/assets/1336bc08-06f6-4682-a3c0-071ff65efa87">
### Steps to reproduce the bug
See screenshot.
### Expected behavior
I want to load one audio file from the dataset.
### Environmen... | true | 2024-11-07T09:04:49Z | 2024-11-07T09:22:43Z | null | MichielBontenbal | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7281 | false | [
"Link to the dataset: https://huggingface.co/datasets/MichielBontenbal/UrbanSounds "
] |
2,639,977,077 | 7,280 | Add filename in error message when ReadError or similar occur | open | Please update error messages to include relevant information for debugging when loading datasets with `load_dataset()` that may have a few corrupted files.
Whenever downloading a full dataset, some files might be corrupted (either at the source or from downloading corruption).
However the errors often only let me k... | true | 2024-11-07T06:00:53Z | 2024-11-20T13:23:12Z | null | elisa-aleman | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7280 | false | [
"Hi Elisa, please share the error traceback here, and if you manage to find the location in the `datasets` code where the error occurs, feel free to open a PR to add the necessary logging / improve the error message.",
"> please share the error traceback\n\nI don't have access to it but it should be during [this ... |
2,635,813,932 | 7,279 | Feature proposal: Stacking, potentially heterogeneous, datasets | open | ### Introduction
Hello there,
I noticed that there are two ways to combine multiple datasets: Either through `datasets.concatenate_datasets` or `datasets.interleave_datasets`. However, to my knowledge (please correct me if I am wrong) both approaches require the datasets that are combined to have the same features.... | true | 2024-11-05T15:40:50Z | 2024-11-05T15:40:50Z | null | TimCares | NONE | https://github.com/huggingface/datasets/pull/7279 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7279 | true | [] |
2,633,436,151 | 7,278 | Let soundfile directly read local audio files | open | - [x] Fixes #7276 | true | 2024-11-04T17:41:13Z | 2024-11-18T14:01:25Z | null | fawazahmed0 | NONE | https://github.com/huggingface/datasets/pull/7278 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7278 | true | [] |
2,632,459,184 | 7,277 | Add link to video dataset | closed | This PR updates https://huggingface.co/docs/datasets/loading to also link to the new video loading docs.
cc @mfarre | true | 2024-11-04T10:45:12Z | 2024-11-04T17:05:06Z | 2024-11-04T17:05:06Z | NielsRogge | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7277 | 2024-11-04T17:05:06Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7277 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7277). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,631,917,431 | 7,276 | Accessing audio dataset value throws Format not recognised error | open | ### Describe the bug
Accessing audio dataset value throws `Format not recognised error`
### Steps to reproduce the bug
**code:**
```py
from datasets import load_dataset
dataset = load_dataset("fawazahmed0/bug-audio")
for data in dataset["train"]:
print(data)
```
**output:**
```bash
(mypy) ... | true | 2024-11-04T05:59:13Z | 2024-11-09T18:51:52Z | null | fawazahmed0 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7276 | false | [
"Hi ! can you try if this works ?\r\n\r\n```python\r\nimport soundfile as sf\r\n\r\nwith open('C:\\\\Users\\\\Nawaz-Server\\\\.cache\\\\huggingface\\\\hub\\\\datasets--fawazahmed0--bug-audio\\\\snapshots\\\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\\\data\\\\Ghamadi\\\\037136.mp3', 'rb') as f:\r\n print(sf.read(... |
2,631,713,397 | 7,275 | load_dataset | open | ### Describe the bug
I am performing two operations I saw in a Hugging Face tutorial (Fine-tune a language model), and I am defining every aspect inside the mapped functions, including some imports of the library, because it doesn't identify anything not defined outside that function where the dataset elements are being mapp... | true | 2024-11-04T03:01:44Z | 2024-11-04T03:01:44Z | null | santiagobp99 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7275 | false | [] |
2,629,882,821 | 7,274 | [MINOR:TYPO] Fix typo in exception text | closed | true | 2024-11-01T21:15:29Z | 2025-05-21T13:17:20Z | 2025-05-21T13:17:20Z | cakiki | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7274 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7274 | true | [] | |
2,628,896,492 | 7,273 | Raise error for incorrect JSON serialization | closed | Raise error when `lines = False` and `batch_size < Dataset.num_rows` in `Dataset.to_json()`.
Issue: #7037
Related PRs:
#7039 #7181 | true | 2024-11-01T11:54:35Z | 2024-11-18T11:25:01Z | 2024-11-18T11:25:01Z | varadhbhatnagar | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7273 | 2024-11-18T11:25:01Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7273 | true | [
"PTAL @lhoestq @albertvillanova ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7273). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,627,223,390 | 7,272 | fix conda release worlflow | closed | true | 2024-10-31T15:56:19Z | 2024-10-31T15:58:35Z | 2024-10-31T15:57:29Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7272 | 2024-10-31T15:57:29Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7272 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7272). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,627,135,540 | 7,271 | Set dev version | closed | true | 2024-10-31T15:22:51Z | 2024-10-31T15:25:27Z | 2024-10-31T15:22:59Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7271 | 2024-10-31T15:22:59Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7271 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7271). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,627,107,016 | 7,270 | Release: 3.1.0 | closed | true | 2024-10-31T15:10:01Z | 2024-10-31T15:14:23Z | 2024-10-31T15:14:20Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7270 | 2024-10-31T15:14:20Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7270 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7270). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,626,873,843 | 7,269 | Memory leak when streaming | open | ### Describe the bug
I try to use a dataset with streaming=True; the issue I have is that the RAM usage grows higher and higher until it is no longer sustainable.
I understand that Hugging Face stores data in RAM during streaming, and the more workers there are in the dataloader, the more shards will be stored in ... | true | 2024-10-31T13:33:52Z | 2024-11-18T11:46:07Z | null | Jourdelune | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7269 | false | [
"I seem to have encountered the same problem when loading non streaming datasets. load_from_disk. Causing hundreds of GB of memory, but the dataset actually only has 50GB",
"FYI when streaming parquet data, only one row group per worker is loaded in memory at a time.\r\n\r\nBtw for datasets of embeddings you can ... |
2,626,664,687 | 7,268 | load_from_disk | open | ### Describe the bug
I have data saved with save_to_disk. The data is big (700 GB). When I try to load it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that?
### Steps to reproduce the bug
when trying ... | true | 2024-10-31T11:51:56Z | 2024-10-31T14:43:47Z | null | ghaith-mq | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7268 | false | [
"Hello, It's an interesting issue here. I have the same problem, I have a local dataset and I want to push the dataset to the hub but huggingface does a copy of it.\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"webdataset\", data_files=\"/media/works/data/*.tar\") # copy here\r\n... |
2,626,490,029 | 7,267 | Source installation fails on Macintosh with python 3.10 | open | ### Describe the bug
Hi,
Decord is a dev dependency that has not been maintained for a couple of years.
It does not have an ARM package available, rendering it uninstallable on non-Intel-based Macs.
The suggestion is to move to eva-decord (https://github.com/georgia-tech-db/eva-decord), which doesn't have this problem.
Happy to... | true | 2024-10-31T10:18:45Z | 2024-11-04T22:18:06Z | null | mayankagarwals | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7267 | false | [
"I encountered the same problem on M1, a workaround I did was to simply comment out the dependency:\r\n\r\n```python\r\n...\r\n \"zstandard\",\r\n \"polars[timezone]>=0.20.0\",\r\n # \"decord==0.6.0\",\r\n]\r\n```\r\n\r\nThis worked for me as the adjustments I did to the code do not use the dependency, but... |
2,624,666,087 | 7,266 | The dataset viewer should be available soon. Please retry later. | closed | ### Describe the bug
After waiting for 2 hours, it still presents ``The dataset viewer should be available soon. Please retry later.''
### Steps to reproduce the bug
dataset link: https://huggingface.co/datasets/BryanW/HI_EDIT
### Expected behavior
Present the dataset viewer.
### Environment info
NA | true | 2024-10-30T16:32:00Z | 2024-10-31T03:48:11Z | 2024-10-31T03:48:10Z | viiika | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7266 | false | [
"Waiting is all you need. 10 hours later, it works."
] |
2,624,090,418 | 7,265 | Disallow video push_to_hub | closed | true | 2024-10-30T13:21:55Z | 2024-10-30T13:36:05Z | 2024-10-30T13:36:02Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7265 | 2024-10-30T13:36:02Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7265 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7265). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,624,047,640 | 7,264 | fix docs relative links | closed | true | 2024-10-30T13:07:34Z | 2024-10-30T13:10:13Z | 2024-10-30T13:09:02Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7264 | 2024-10-30T13:09:02Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7264 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7264). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,621,844,054 | 7,263 | Small addition to video docs | closed | true | 2024-10-29T16:58:37Z | 2024-10-29T17:01:05Z | 2024-10-29T16:59:10Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7263 | 2024-10-29T16:59:10Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7263 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7263). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,620,879,059 | 7,262 | Allow video with disabeld decoding without decord | closed | for the viewer, this way it can use Video(decode=False) and doesn't need decord (which causes segfaults) | true | 2024-10-29T10:54:04Z | 2024-10-29T10:56:19Z | 2024-10-29T10:55:37Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7262 | 2024-10-29T10:55:37Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7262 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7262). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,620,510,840 | 7,261 | Cannot load the cache when mapping the dataset | open | ### Describe the bug
I'm training the Flux ControlNet. The train_dataset.map() call takes a long time to finish. However, when I kill one training process and want to restart training with the same dataset, I can't reuse the mapped result even though I defined the cache dir for the dataset.
with accelerator.main_process_... | true | 2024-10-29T08:29:40Z | 2025-03-24T13:27:55Z | null | zhangn77 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7261 | false | [
"@zhangn77 Hi ,have you solved this problem? I encountered the same issue during training. Could we discuss it?",
"I also encountered the same problem, why is that?"
] |
2,620,014,285 | 7,260 | cache can't cleaned or disabled | open | ### Describe the bug
I tried the following ways; the cache can't be disabled.
I have 2 TB of data, but more than 2 TB of cache files as well, which puts pressure on storage. I need to disable the cache or have it cleaned immediately after processing. None of the following approaches work; please give some help!
```python
from datasets import ... | true | 2024-10-29T03:15:28Z | 2024-12-11T09:04:52Z | null | charliedream1 | NONE | null | null | 1 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7260 | false | [
"Hey I have a similar problem and found a workaround using [temporary directories](https://docs.python.org/3/library/tempfile.html):\r\n\r\n```python\r\nfrom tempfile import TemporaryDirectory\r\n\r\nwith TemporaryDirectory() as cache_dir:\r\n data = load_dataset('json', data_files=save_local_path, split='train'... |
2,618,909,241 | 7,259 | Don't embed videos | closed | don't include video bytes when running download_and_prepare(format="parquet")
this also affects push_to_hub which will just upload the local paths of the videos though | true | 2024-10-28T16:25:10Z | 2024-10-28T16:27:34Z | 2024-10-28T16:26:01Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7259 | 2024-10-28T16:26:01Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7259 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7259). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,618,758,399 | 7,258 | Always set non-null writer batch size | closed | bug introduced in #7230, it was preventing the Viewer limit writes to work | true | 2024-10-28T15:26:14Z | 2024-10-28T15:28:41Z | 2024-10-28T15:26:29Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7258 | 2024-10-28T15:26:29Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7258 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7258). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,618,602,173 | 7,257 | fix ci for pyarrow 18 | closed | true | 2024-10-28T14:31:34Z | 2024-10-28T14:34:05Z | 2024-10-28T14:31:44Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7257 | 2024-10-28T14:31:44Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7257 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7257). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,618,580,188 | 7,256 | Retry all requests timeouts | closed | as reported in https://github.com/huggingface/datasets/issues/6843 | true | 2024-10-28T14:23:16Z | 2024-10-28T14:56:28Z | 2024-10-28T14:56:26Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7256 | 2024-10-28T14:56:26Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7256 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7256). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,618,540,355 | 7,255 | fix decord import | closed | delay the import until Video() is instantiated + also import duckdb first (otherwise importing duckdb later causes a segfault) | true | 2024-10-28T14:08:19Z | 2024-10-28T14:10:43Z | 2024-10-28T14:09:14Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7255 | 2024-10-28T14:09:14Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7255 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7255). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,616,174,996 | 7,254 | mismatch for datatypes when providing `Features` with `Array2D` and user specified `dtype` and using with_format("numpy") | open | ### Describe the bug
If the user provides a `Features` type value to `datasets.Dataset` with members having `Array2D` with a value for `dtype`, it is not respected during `with_format("numpy")` which should return a `np.array` with `dtype` that the user provided for `Array2D`. It seems for floats, it will be set to `f... | true | 2024-10-26T22:06:27Z | 2024-10-26T22:07:37Z | null | Akhil-CM | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7254 | false | [
"It seems that https://github.com/huggingface/datasets/issues/5517 is exactly the same issue.\r\n\r\nIt was mentioned there that this would be fixed in version 3.x"
] |
2,615,862,202 | 7,253 | Unable to upload a large dataset zip either from command line or UI | open | ### Describe the bug
Unable to upload a large dataset zip from either the command line or the UI. The UI simply says error. I am trying to upload a 17 GB tar.gz file.
<img width="550" alt="image" src="https://github.com/user-attachments/assets/f9d29024-06c8-49c4-a109-0492cff79d34">
<img width="755" alt="image" src="https://githu... | true | 2024-10-26T13:17:06Z | 2024-10-26T13:17:06Z | null | vakyansh | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7253 | false | [] |
2,613,795,544 | 7,252 | Add IterableDataset.shard() | closed | Will be useful to distribute a dataset across workers (other than pytorch) like spark
I also renamed `.n_shards` -> `.num_shards` for consistency and kept the old name for backward compatibility. And a few changes in internal functions for consistency as well (rank, world_size -> num_shards, index)
Breaking chang... | true | 2024-10-25T11:07:12Z | 2025-03-21T03:58:43Z | 2024-10-25T15:45:22Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7252 | 2024-10-25T15:45:21Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7252 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7252). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Is there some way to get this to work for pytorch dataloader workers?\r\n\r\neg. start ... |
2,612,097,435 | 7,251 | Missing video docs | closed | true | 2024-10-24T16:45:12Z | 2024-10-24T16:48:29Z | 2024-10-24T16:48:27Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7251 | 2024-10-24T16:48:27Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7251 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7251). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,612,041,969 | 7,250 | Basic XML support (mostly copy pasted from text) | closed | enable the viewer for datasets like https://huggingface.co/datasets/FrancophonIA/e-calm (there will be more and more apparently) | true | 2024-10-24T16:14:50Z | 2024-10-24T16:19:18Z | 2024-10-24T16:19:16Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7250 | 2024-10-24T16:19:16Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7250 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7250). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,610,136,636 | 7,249 | How to debug | open | ### Describe the bug
I wanted to use my own script to handle the processing, and followed the tutorial documentation by rewriting the MyDatasetConfig and MyDatasetBuilder (which contain the _info, _split_generators and _generate_examples methods) classes. Testing with simple data, I was able to output the results of the ... | true | 2024-10-24T01:03:51Z | 2024-10-24T01:03:51Z | null | ShDdu | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7249 | false | [] |
2,609,926,089 | 7,248 | ModuleNotFoundError: No module named 'datasets.tasks' | open | ### Describe the bug
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
[<ipython-input-9-13b5f31bd391>](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_R... | true | 2024-10-23T21:58:25Z | 2024-10-24T17:00:19Z | null | shoowadoo | NONE | null | null | 2 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7248 | false | [
"tasks was removed in v3: #6999 \r\n\r\nI also don't see why TextClassification is imported, since it's not used after. So the fix is simple: delete this line.",
"I opened https://huggingface.co/datasets/knowledgator/events_classification_biotech/discussions/7 to remove the line, hopefully the dataset owner will ... |
2,606,230,029 | 7,247 | Adding a column with a dict structure when mapping leads to wrong order | open | ### Describe the bug
In the `map()` function, I want to add a new column with a dict structure.
```
def map_fn(example):
example['text'] = {'user': ..., 'assistant': ...}
return example
```
However this leads to a wrong order `{'assistant':..., 'user':...}` in the dataset.
Thus I can't concatenate two datasets ... | true | 2024-10-22T18:55:11Z | 2024-10-22T18:55:23Z | null | chchch0109 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7247 | false | [] |
2,605,734,447 | 7,246 | Set dev version | closed | true | 2024-10-22T15:04:47Z | 2024-10-22T15:07:31Z | 2024-10-22T15:04:58Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7246 | 2024-10-22T15:04:58Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7246 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7246). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,605,701,235 | 7,245 | Release: 3.0.2 | closed | true | 2024-10-22T14:53:34Z | 2024-10-22T15:01:50Z | 2024-10-22T15:01:47Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7245 | 2024-10-22T15:01:47Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7245 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7245). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,605,461,515 | 7,244 | use huggingface_hub offline mode | closed | and better handling of LocalEntryNotfoundError cc @Wauplin
follow up to #7234 | true | 2024-10-22T13:27:16Z | 2024-10-22T14:10:45Z | 2024-10-22T14:10:20Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7244 | 2024-10-22T14:10:20Z | 1 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7244 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7244). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,602,853,172 | 7,243 | ArrayXD with None as leading dim incompatible with DatasetCardData | open | ### Describe the bug
Creating a dataset with ArrayXD features leads to errors when downloading from hub due to DatasetCardData removing the Nones
@lhoestq
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Array2D, Dataset, Features, load_dataset
def examples_generator():... | true | 2024-10-21T15:08:13Z | 2024-10-22T14:18:10Z | null | alex-hh | CONTRIBUTOR | null | null | 5 | 1 | 0 | 1 | null | false | [] | https://github.com/huggingface/datasets/issues/7243 | false | [
"It looks like `CardData` in `huggingface_hub` removes None values where it shouldn't. Indeed it calls `_remove_none` on the return of `to_dict()`:\r\n\r\n```python\r\n def to_dict(self) -> Dict[str, Any]:\r\n \"\"\"Converts CardData to a dict.\r\n\r\n Returns:\r\n `dict`: CardData repre... |
2,599,899,156 | 7,241 | `push_to_hub` overwrite argument | closed | ### Feature request
Add an `overwrite` argument to the `push_to_hub` method.
### Motivation
I want to overwrite a repo without deleting it on Hugging Face. Is this possible? I couldn't find anything in the documentation or tutorials.
### Your contribution
I can create a PR. | true | 2024-10-20T03:23:26Z | 2024-10-24T17:39:08Z | 2024-10-24T17:39:08Z | ceferisbarov | NONE | null | null | 9 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7241 | false | [
"Hi ! Do you mean deleting all the files ? or erasing the repository git history before push_to_hub ?",
"Hi! I meant the latter.",
"I don't think there is a `huggingface_hub` utility to erase the git history, cc @Wauplin maybe ?",
"What is the goal exactly of deleting all the git history without deleting the ... |
2,598,980,027 | 7,240 | Feature Request: Add functionality to pass split types like train, test in DatasetDict.map | closed | Hello datasets!
We often encounter situations where we need to preprocess data differently depending on split types such as train, valid, and test.
However, while DatasetDict.map has features to pass rank or index, there's no functionality to pass split types.
Therefore, I propose adding a 'with_splits' parame... | true | 2024-10-19T09:59:12Z | 2025-01-06T08:04:08Z | 2025-01-06T08:04:08Z | jp1924 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7240 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7240 | true | [] |
2,598,409,993 | 7,238 | incompatibility issue when using load_dataset with datasets==3.0.1 | open | ### Describe the bug
There is a bug when using load_dataset with datasets version 3.0.1.
Please see below in the "steps to reproduce the bug".
To resolve the bug, I had to downgrade to version 2.21.0
OS: Ubuntu 24 (AWS instance)
Python: same bug under 3.12 and 3.10
The error I had was:
Traceback (most rec... | true | 2024-10-18T21:25:23Z | 2024-12-09T09:49:32Z | null | jupiterMJM | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7238 | false | [
"Hi! I'm also getting the same issue - have you been able to find a solution to this? ",
"From what I remember, I stayed at the \"downgraded\" version of dataset (2.21.0)"
] |
2,597,358,525 | 7,236 | [MINOR:TYPO] Update arrow_dataset.py | closed | Fix wrong link.
The CSV kwargs docstring link was pointing to the pandas JSON docs. | true | 2024-10-18T12:10:03Z | 2024-10-24T15:06:43Z | 2024-10-24T15:06:43Z | cakiki | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7236 | 2024-10-24T15:06:43Z | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7236 | true | [] |
2,594,220,624 | 7,234 | No need for dataset_info | closed | save a useless call to /api/datasets/repo_id | true | 2024-10-17T09:54:03Z | 2024-10-22T12:30:40Z | 2024-10-21T16:44:34Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7234 | 2024-10-21T16:44:34Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7234 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7234). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"merging this one for now, let me know if you'd like to see additional changes for error... |
2,593,903,113 | 7,233 | Dataset count issue | open | ### Describe the bug
I am fine-tuning a large model here. When the dataset has 718 entries, the model fine-tunes normally, but when I add an entry that is already among the first 718, or add a brand-new entry, an error is raised.
### Steps to reproduce the bug
1.
The last two entries of my dataset that fine-tune successfully are:
{
"messages": [
{
"role": "user",
"content": "What work needs to be done after completing the compensation device design?"
},
{
"role": "assistant",
"content": "Once the compensation device design is complete, the actual system tuning needs to be performed... | true | 2024-10-17T07:41:44Z | 2024-10-17T07:41:44Z | null | want-well | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7233 | false | [] |
2,593,720,548 | 7,232 | (Super tiny doc update) Mention to_polars | closed | polars is also quite popular now, thus this tiny update can tell users polars is supported | true | 2024-10-17T06:08:53Z | 2024-10-24T23:11:05Z | 2024-10-24T15:06:16Z | fzyzcjy | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7232 | 2024-10-24T15:06:16Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7232 | true | [
"You are welcome!"
] |
2,592,011,737 | 7,231 | Fix typo in image dataset docs | closed | Fix typo in image dataset docs.
Typo reported by @datavistics. | true | 2024-10-16T14:05:46Z | 2024-10-16T17:06:21Z | 2024-10-16T17:06:19Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7231 | 2024-10-16T17:06:19Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7231 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7231). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,589,531,942 | 7,230 | Video support | closed | (wip and experimental)
adding the `Video` type based on `VideoReader` from `decord`
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("path/to/videos", split="train").with_format("torch")
>>> print(ds[0]["video"])
<decord.video_reader.VideoReader object at 0x337a47910>
>>> print(ds[0]["vid... | true | 2024-10-15T18:17:29Z | 2024-10-24T16:39:51Z | 2024-10-24T16:39:50Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7230 | 2024-10-24T16:39:50Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7230 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7230). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
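The snippet in the PR above returns a `decord.video_reader.VideoReader` object rather than decoded frames. A hedged, decord-free sketch of that lazy-access pattern (the `LazyVideo` class is purely illustrative and not part of `datasets` or `decord`):

```python
class LazyVideo:
    """Mimics a video reader: frames are produced on indexing, not at load time."""

    def __init__(self, n_frames: int):
        self._n = n_frames

    def __len__(self) -> int:
        return self._n

    def __getitem__(self, i: int) -> str:
        # A real reader would decode frame i from disk here; we return a placeholder.
        if not 0 <= i < self._n:
            raise IndexError(i)
        return f"frame-{i}"

vr = LazyVideo(3)
print(len(vr), vr[0])  # 3 frame-0
```

The design point is that a dataset row can hold a cheap handle and defer expensive decoding until a frame is actually requested.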
2,588,847,398 | 7,229 | handle config_name=None in push_to_hub | closed | This caught me out - thought it might be better to explicitly handle None? | true | 2024-10-15T13:48:57Z | 2024-10-24T17:51:52Z | 2024-10-24T17:51:52Z | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7229 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7229 | true | [
"not sure it's a good idea, we always need a config name so better have the correct default and not support None (which could lead to think it doesn't have a config name, while it does)"
] |
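A minimal sketch of the `None` handling this PR discusses; the comment thread suggests always having a correct default rather than supporting `None`. The function name below is hypothetical, not the actual `push_to_hub` internals:

```python
def resolve_config_name(config_name=None) -> str:
    """Fall back to the conventional 'default' config when None is passed."""
    return "default" if config_name is None else config_name

print(resolve_config_name())      # default
print(resolve_config_name("en"))  # en
```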
2,587,310,094 | 7,228 | Composite (multi-column) features | open | ### Feature request
Structured data types (graphs etc.) might often be most efficiently stored as multiple columns, which then need to be combined during feature decoding
Although it is currently possible to nest features as structs, my impression is that in particular when dealing with e.g. a feature composed of... | true | 2024-10-14T23:59:19Z | 2024-10-15T11:17:15Z | null | alex-hh | CONTRIBUTOR | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7228 | false | [] |
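To make the feature request above concrete: a hedged sketch of combining several stored columns into one decoded composite value. The column names `nodes`/`edges` and the function are illustrative only, not a `datasets` API:

```python
def decode_graph(example: dict) -> dict:
    """Assemble a composite 'graph' feature from two separately stored columns."""
    return {"graph": {"nodes": example["nodes"], "edges": example["edges"]}}

row = {"nodes": [0, 1, 2], "edges": [(0, 1), (1, 2)]}
print(decode_graph(row)["graph"]["nodes"])  # [0, 1, 2]
```

Storing the columns separately keeps each one in an efficient flat layout, while decoding reassembles them into the structured object the user works with.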
2,587,048,312 | 7,227 | fast array extraction | open | Implements #7210 using method suggested in https://github.com/huggingface/datasets/pull/7207#issuecomment-2411789307
```python
import numpy as np
from datasets import Dataset, Features, Array3D
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float3... | true | 2024-10-14T20:51:32Z | 2025-01-28T09:39:26Z | null | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7227 | null | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7227 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7227). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I've updated the most straightforward failing test cases - lmk if you agree with those.... |
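The `Array3D((None, 10, 10))` features in the PR snippet above describe columns whose first axis varies per row while the trailing shape stays fixed. A small NumPy illustration of what such a column contains (shapes chosen arbitrarily; this does not exercise the `datasets` fast path itself):

```python
import numpy as np

# Two rows of an Array3D((None, 10, 10), dtype="float32")-style column:
# the first axis may differ per row, the trailing (10, 10) is fixed.
rows = [np.zeros((n, 10, 10), dtype="float32") for n in (3, 5)]

assert all(a.shape[1:] == (10, 10) for a in rows)
print([a.shape[0] for a in rows])  # [3, 5]
```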
2,586,920,351 | 7,226 | Add R as a How to use from the Polars (R) Library as an option | open | ### Feature request
The boilerplate code to access a dataset via the Hugging Face file system is very useful. Please add:
## Add Polars (R) option
The equivalent code works because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has Hugging Face functionality as well.
```r
library(polars)
... | true | 2024-10-14T19:56:07Z | 2024-10-14T19:57:13Z | null | ran-codes | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7226 | false | [] |
2,586,229,216 | 7,225 | Huggingface GIT returns null as Content-Type instead of application/x-git-receive-pack-result | open | ### Describe the bug
We push changes to our datasets programmatically. Our git client jGit reports that the hf git server returns null as Content-Type after a push.
### Steps to reproduce the bug
A basic Kotlin application:
```
val person = PersonIdent(
"padmalcom",
"padmalcom@sth.com"
)
... | true | 2024-10-14T14:33:06Z | 2024-10-14T14:33:06Z | null | padmalcom | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7225 | false | [] |
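Per the issue title above, smart-HTTP git clients such as JGit expect `application/x-git-receive-pack-result` as the Content-Type after a push. A tiny, hypothetical validator for that response header (the helper name is ours, not from any git library):

```python
EXPECTED = "application/x-git-receive-pack-result"

def is_valid_receive_pack_response(content_type) -> bool:
    """Return True only for the exact Content-Type smart-HTTP git clients require."""
    return content_type == EXPECTED

print(is_valid_receive_pack_response(EXPECTED))  # True
print(is_valid_receive_pack_response(None))      # False, the value the issue reports
```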