Dataset schema (column, type, and value/length range):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.29B |
| url | string (length) | 58 | 61 |
| html_url | string (length) | 46 | 51 |
| number | int64 | 1 | 7.72k |
| title | string (length) | 1 | 290 |
| state | string (2 values) | | |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | string (length) | 3 | 26 |
| labels | list (length) | 0 | 4 |
| body | string (length) | 0 | 228k |
| is_pull_request | bool (2 classes) | | |
2,454,418,130
https://api.github.com/repos/huggingface/datasets/issues/7094
https://github.com/huggingface/datasets/pull/7094
7,094
Add Arabic Docs to Datasets
open
0
2024-08-07T21:53:06
2024-08-07T21:53:06
null
AhmedAlmaghz
[]
Translate Docs into Arabic issue-number : #7093 [Arabic Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) [English Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/en/index.mdx) @stevhliu
true
2,454,413,074
https://api.github.com/repos/huggingface/datasets/issues/7093
https://github.com/huggingface/datasets/issues/7093
7,093
Add Arabic Docs to datasets
open
0
2024-08-07T21:48:05
2024-08-07T21:48:05
null
AhmedAlmaghz
[ "enhancement" ]
### Feature request Add Arabic Docs to datasets [Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) ### Motivation @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx ### Your contribution @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx
false
2,451,393,658
https://api.github.com/repos/huggingface/datasets/issues/7092
https://github.com/huggingface/datasets/issues/7092
7,092
load_dataset with multiple jsonlines files interprets datastructure too early
open
5
2024-08-06T17:42:55
2024-08-08T16:35:01
null
Vipitis
[]
### Describe the bug likely related to #6460 using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data. ### Steps to reproduce the bug real world example: data is available in this [PR-branch](https://github.com/Vipitis/shadertoys-dataset/pull/3/commits/cb1e7157814f74acb09d5dc2f1be3c0a868a9933). Because my files are chunked by months, some months contain all empty data for some columns, just by chance - these are `[]`. Otherwise it's all the same structure. ```python from datasets import load_dataset ds = load_dataset("json", data_dir="./data/annotated/api") ``` you get a long error trace, where in the middle it says something like ```cs TypeError: Couldn't cast array of type struct<id: int64, src: string, ctype: string, channel: int64, sampler: struct<filter: string, wrap: string, vflip: string, srgb: string, internal: string>, published: int64> to null ``` toy example: (on request) ### Expected behavior Some suggestions 1. give a better error message to the user 2. consider all files before deciding on a data structure for a given column. 3. if you encounter a new structure, and can't cast that to null, replace the null-hypothesis. (maybe something for pyarrow) as a workaround I have lazily implemented the following (essentially step 2) ```python import os import jsonlines import datasets api_files = os.listdir("./data/annotated/api") api_files = [f"./data/annotated/api/{f}" for f in api_files] api_file_contents = [] for f in api_files: with jsonlines.open(f) as reader: for obj in reader: api_file_contents.append(obj) ds = datasets.Dataset.from_list(api_file_contents) ``` this works fine for my usecase, but is potentially slower and less memory efficient for really large datasets (where this is unlikely to happen in the first place). ### Environment info - `datasets` version: 2.20.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.4 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.10.0
false
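As an additional hedged workaround for issue 7092 above (not part of the original report): `load_dataset` accepts an explicit `features=` schema, which avoids a column being inferred as `null` from a file that only contains empty lists. The column names and types below are illustrative placeholders, not the reporter's actual schema, which would need to list every field in the files.

```python
from datasets import Features, Sequence, Value, load_dataset

# Hypothetical schema: declaring the nested column explicitly means a chunk
# whose values are all [] cannot be inferred as type `null` on its own.
features = Features(
    {
        "id": Value("int64"),
        "renderpass": Sequence({"src": Value("string"), "ctype": Value("string")}),
    }
)

# All .jsonl files are cast to the same declared types instead of each file
# being type-inferred independently.
ds = load_dataset("json", data_dir="./data/annotated/api", features=features)
```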
2,449,699,490
https://api.github.com/repos/huggingface/datasets/issues/7090
https://github.com/huggingface/datasets/issues/7090
7,090
The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name
open
0
2024-08-06T00:35:05
2024-08-06T00:35:05
null
yurivict
[]
### Describe the bug Tests should use the same Python path as they are launched with, which in the case of FreeBSD is /usr/local/bin/python3.11 Failure: ``` if err_filename is not None: > raise child_exception_type(errno_num, err_msg, err_filename) E FileNotFoundError: [Errno 2] No such file or directory: 'python' ``` ### Steps to reproduce the bug regular test run using PyTest ### Expected behavior n/a ### Environment info FreeBSD 14.1
false
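A minimal sketch of the kind of fix issue 7090 above implies: resolving the interpreter via `sys.executable` instead of hard-coding the literal command name `python`. The function and command are illustrative, not the actual datasets test.

```python
import subprocess
import sys

def run_with_current_interpreter() -> str:
    # sys.executable is the interpreter running the tests (e.g.
    # /usr/local/bin/python3.11 on FreeBSD), so the subprocess starts even
    # when no bare "python" binary exists on PATH.
    result = subprocess.run(
        [sys.executable, "-c", "print('ok')"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

print(run_with_current_interpreter())  # -> ok
```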
2,449,479,500
https://api.github.com/repos/huggingface/datasets/issues/7089
https://github.com/huggingface/datasets/issues/7089
7,089
Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped
open
0
2024-08-05T21:05:11
2024-08-05T21:05:11
null
yurivict
[]
### Describe the bug see the subject ### Steps to reproduce the bug regular tests ### Expected behavior n/a ### Environment info version 2.20.0
false
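For issue 7089 above, the usual pytest idiom for turning a missing optional dependency into skipped tests rather than collection errors is `pytest.importorskip`; the test below is a generic illustration and is not taken from the datasets test suite.

```python
import pytest

# Skips the tests in this module when pyspark is not installed, instead of
# letting the ImportError fail the whole test session.
pyspark_sql = pytest.importorskip("pyspark.sql")

def test_spark_session_can_be_created():
    spark = pyspark_sql.SparkSession.builder.master("local[1]").getOrCreate()
    assert spark is not None
    spark.stop()
```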
2,447,383,940
https://api.github.com/repos/huggingface/datasets/issues/7088
https://github.com/huggingface/datasets/issues/7088
7,088
Disable warning when using with_format format on tensors
open
0
2024-08-05T00:45:50
2024-08-05T00:45:50
null
Haislich
[ "enhancement" ]
### Feature request If we write this code: ```python """Get data and define datasets.""" from enum import StrEnum from datasets import load_dataset from torch.utils.data import DataLoader from torchvision import transforms class Split(StrEnum): """Describes what type of split to use in the dataloader""" TRAIN = "train" TEST = "test" VAL = "validation" class ImageNetDataLoader(DataLoader): """Create an ImageNetDataloader""" _preprocess_transform = transforms.Compose( [ transforms.Resize(256), transforms.CenterCrop(224), ] ) def __init__(self, batch_size: int = 4, split: Split = Split.TRAIN): dataset = ( load_dataset( "imagenet-1k", split=split, trust_remote_code=True, streaming=True, ) .with_format("torch") .map(self._preprocess) ) super().__init__(dataset=dataset, batch_size=batch_size) def _preprocess(self, data): if data["image"].shape[0] < 3: data["image"] = data["image"].repeat(3, 1, 1) data["image"] = self._preprocess_transform(data["image"].float()) return data if __name__ == "__main__": dataloader = ImageNetDataLoader(batch_size=2) for batch in dataloader: print(batch["image"]) break ``` This will trigger an user warning : ```bash datasets\formatting\torch_formatter.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ``` ### Motivation This happens because the the way the formatted tensor is returned in `TorchFormatter._tensorize`. This function handle values of different types, according to some tests it seems that possible value types are `int`, `numpy.ndarray` and `torch.Tensor`. In particular this warning is triggered when the value type is `torch.Tensor`, because is not the suggested Pytorch way of doing it: - https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor - https://discuss.pytorch.org/t/it-is-recommended-to-use-source-tensor-clone-detach-or-sourcetensor-clone-detach-requires-grad-true/101218#:~:text=The%20warning%20points%20to%20wrapping%20a%20tensor%20in%20torch.tensor%2C%20which%20is%20not%20recommended.%0AInstead%20of%20torch.tensor(outputs)%20use%20outputs.clone().detach()%20or%20the%20same%20with%20.requires_grad_(True)%2C%20if%20necessary. ### Your contribution A solution that I found to be working is to change the current way of doing it: ```python return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ``` To: ```python if (isinstance(value, torch.Tensor)): tensor = value.clone().detach() if self.torch_tensor_kwargs.get('requires_grad', False): tensor.requires_grad_() return tensor else: return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ```
false
2,447,158,643
https://api.github.com/repos/huggingface/datasets/issues/7087
https://github.com/huggingface/datasets/issues/7087
7,087
Unable to create dataset card for Lushootseed language
closed
2
2024-08-04T14:27:04
2024-08-06T06:59:23
2024-08-06T06:59:22
vaishnavsudarshan
[ "enhancement" ]
### Feature request While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options? ### Motivation I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned with collecting Lushootseed documents. ### Your contribution I can submit a pull request
false
2,445,516,829
https://api.github.com/repos/huggingface/datasets/issues/7086
https://github.com/huggingface/datasets/issues/7086
7,086
load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors
open
1
2024-08-02T18:12:23
2025-06-16T18:43:29
null
tginart
[]
### Describe the bug I have been running lm-eval-harness a lot, which has resulted in an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this. ### Steps to reproduce the bug 1. Be Me 2. Run `load_dataset("TAUR-Lab/MuSR")` 3. Hit rate limit error 4. Dataset is in .cache/huggingface/datasets 5. ??? ### Expected behavior We should not run into API rate limits if we have cached the dataset ### Environment info datasets 2.16.0 python 3.10.4
false
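Not part of report 7086 above, but a commonly suggested mitigation sketch: forcing offline mode so `load_dataset` resolves from the local cache without contacting the Hub. The `HF_DATASETS_OFFLINE`/`HF_HUB_OFFLINE` variables are documented switches; whether they fully avoid the rate limit in this scenario is an assumption.

```python
import os

# Must be set before importing datasets, since its config is read at import time.
os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

from datasets import load_dataset

# With offline mode enabled, this should be served from
# ~/.cache/huggingface/datasets without any Hub API calls.
ds = load_dataset("TAUR-Lab/MuSR")
```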
2,440,008,618
https://api.github.com/repos/huggingface/datasets/issues/7085
https://github.com/huggingface/datasets/issues/7085
7,085
[Regression] IterableDataset is broken on 2.20.0
closed
3
2024-07-31T13:01:59
2024-08-22T14:49:37
2024-08-22T14:49:07
AjayP13
[]
### Describe the bug In the latest version of datasets there is a major regression, after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times. The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't. ### Steps to reproduce the bug Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`) ``` #!/bin/bash # List of dataset versions to test versions=("2.17.0" "2.20.0") # Loop through each version for version in "${versions[@]}"; do # Install the specific version of the datasets library pip3 install -q datasets=="$version" 2>/dev/null # Run the Python script python3 - <<EOF from datasets import IterableDataset from datasets.features.features import Features, Value def test_gen(): yield from [{"foo": i} for i in range(10)] features = Features([("foo", Value("int64"))]) d = IterableDataset.from_generator(test_gen, features=features) mapped = d.map(lambda row: {"foo": row["foo"] * 2}) column = mapped.select_columns(["foo"]) print("Version $version - Iterate Once:", list(column)) print("Version $version - Iterate Twice:", list(column)) EOF done ``` The output looks like this: ``` Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.20.0 - Iterate Twice: [] ``` ### Expected behavior The expected behavior is it version 2.20.0 should behave the same as 2.17.0. ### Environment info `datasets==2.20.0` on any platform.
false
2,439,519,534
https://api.github.com/repos/huggingface/datasets/issues/7084
https://github.com/huggingface/datasets/issues/7084
7,084
More easily support streaming local files
open
0
2024-07-31T09:03:15
2024-07-31T09:05:58
null
fschlatt
[ "enhancement" ]
### Feature request Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files. ### Motivation I have downloaded FineWeb-edu locally and currently trying to stream the dataset from the local files. I have both the raw parquet files using `hugginface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu` and the processed arrow files using `load_dataset("HuggingFaceFW/fineweb-edu")`. Streaming the files locally does not work well for both file types for two different reasons. **Arrow files** When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})` resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738) , all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue. **Parquet files** When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})` the paths do not get resolved because the parquet files are symlinked from the blobs (which contain all files in case there are different versions). This occurs because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks if the path is a file and does not check for symlinks. Symlinks (at least on my machine) are of type "other". ### Your contribution I have created a PR for fixing arrow file streaming and symlinks. However, I have not checked locally if the tests work or new tests need to be added. IMO, the easiest option would be to add a `streaming=download_first` option, but I'm afraid that exceeds my current knowledge of how the datasets library works. https://github.com/huggingface/datasets/pull/7083
false
2,439,518,466
https://api.github.com/repos/huggingface/datasets/issues/7083
https://github.com/huggingface/datasets/pull/7083
7,083
fix streaming from arrow files
closed
0
2024-07-31T09:02:42
2024-08-30T15:17:03
2024-08-30T15:17:03
fschlatt
[]
null
true
2,437,354,975
https://api.github.com/repos/huggingface/datasets/issues/7082
https://github.com/huggingface/datasets/pull/7082
7,082
Support HTTP authentication in non-streaming mode
closed
2
2024-07-30T09:25:49
2024-08-08T08:29:55
2024-08-08T08:24:06
albertvillanova
[]
Support HTTP authentication in non-streaming mode, by supporting the passing of HTTP storage_options in non-streaming mode. - Note that currently, HTTP authentication is supported only in streaming mode. For example, this is necessary if a remote HTTP host requires authentication to download the data.
true
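A hedged usage sketch of what PR 7082 above enables: passing HTTP client options through `storage_options` on a `DownloadConfig` in non-streaming mode. The URL and header are placeholders, and the `{"https": {...}}` nesting follows the `DownloadConfig` example shown in PR 7068 further down this list; treat the exact shape as an assumption.

```python
from datasets import DownloadConfig, load_dataset

# Hypothetical protected host; the header is forwarded to the aiohttp client
# used by the underlying HTTP filesystem.
download_config = DownloadConfig(
    storage_options={
        "https": {"client_kwargs": {"headers": {"Authorization": "Bearer <your-token>"}}}
    }
)

ds = load_dataset(
    "csv",
    data_files="https://example.com/private/data.csv",
    download_config=download_config,
)
```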
2,437,059,657
https://api.github.com/repos/huggingface/datasets/issues/7081
https://github.com/huggingface/datasets/pull/7081
7,081
Set load_from_disk path type as PathLike
closed
2
2024-07-30T07:00:38
2024-07-30T08:30:37
2024-07-30T08:21:50
albertvillanova
[]
Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`.
true
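A minimal illustration of the alignment PR 7081 above describes: a `pathlib.Path` accepted by both `save_to_disk` and `load_from_disk`. The dataset content and directory name are placeholders.

```python
from pathlib import Path

from datasets import Dataset, load_from_disk

out_dir = Path("./tmp_dataset")  # a PathLike, not a str

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
ds.save_to_disk(out_dir)            # already accepted PathLike
reloaded = load_from_disk(out_dir)  # now typed to accept PathLike as well
print(reloaded[0])                  # {'text': 'a'}
```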
2,434,275,664
https://api.github.com/repos/huggingface/datasets/issues/7080
https://github.com/huggingface/datasets/issues/7080
7,080
Generating train split takes a long time
open
2
2024-07-29T01:42:43
2024-10-02T15:31:22
null
alexanderswerdlow
[]
### Describe the bug Loading a simple webdataset takes ~45 minutes. ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M") ``` ### Expected behavior The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.14 - `huggingface_hub` version: 0.24.1 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
false
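Not from report 7080 above, but the workaround usually suggested for large WebDataset repositories is to stream instead of materializing the split; whether it fits the reporter's downstream use is an assumption.

```python
from datasets import load_dataset

# streaming=True skips the "Generating train split" step entirely and
# iterates over the shards on the fly.
ds = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M", split="train", streaming=True)

for example in ds.take(1):
    print(example.keys())
```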
2,433,363,298
https://api.github.com/repos/huggingface/datasets/issues/7079
https://github.com/huggingface/datasets/issues/7079
7,079
HfHubHTTPError: 500 Server Error: Internal Server Error for url:
closed
17
2024-07-27T08:21:03
2024-09-20T13:26:25
2024-07-27T19:52:30
neoneye
[]
### Describe the bug newly uploaded datasets, since yesterday, yields an error. old datasets, works fine. Seems like the datasets api server returns a 500 I'm getting the same error, when I invoke `load_dataset` with my dataset. Long discussion about it here, but I'm not sure anyone from huggingface have seen it. https://discuss.huggingface.co/t/hfhubhttperror-500-server-error-internal-server-error-for-url/99580/1 ### Steps to reproduce the bug this api url: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3 respond with: ``` {"error":"Internal Error - We're working hard to fix this as soon as possible!"} ``` ### Expected behavior return no error with newer datasets. With older datasets I can load the datasets fine. ### Environment info # Browser When I access the api in the browser: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3 ``` {"error":"Internal Error - We're working hard to fix this as soon as possible!"} ``` ### Request headers ``` Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8 Accept-Encoding gzip, deflate, br, zstd Accept-Language en-US,en;q=0.5 Connection keep-alive Host huggingface.co Priority u=1 Sec-Fetch-Dest document Sec-Fetch-Mode navigate Sec-Fetch-Site cross-site Upgrade-Insecure-Requests 1 User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:127.0) Gecko/20100101 Firefox/127.0 ``` ### Response headers ``` X-Firefox-Spdy h2 access-control-allow-origin https://huggingface.co access-control-expose-headers X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range content-length 80 content-type application/json; charset=utf-8 cross-origin-opener-policy same-origin date Fri, 26 Jul 2024 19:09:45 GMT etag W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c" referrer-policy strict-origin-when-cross-origin vary Origin via 1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront) x-amz-cf-id SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ== x-amz-cf-pop CPH50-C1 x-cache Error from cloudfront x-error-message Internal Error - We're working hard to fix this as soon as possible! x-powered-by huggingface-moon x-request-id Root=1-66a3f479-026417465ef42f49349fdca1 ```
false
2,433,270,271
https://api.github.com/repos/huggingface/datasets/issues/7078
https://github.com/huggingface/datasets/pull/7078
7,078
Fix CI test_convert_to_parquet
closed
2
2024-07-27T05:32:40
2024-07-27T05:50:57
2024-07-27T05:44:32
albertvillanova
[]
Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert temporary fix: - #7074
true
2,432,345,489
https://api.github.com/repos/huggingface/datasets/issues/7077
https://github.com/huggingface/datasets/issues/7077
7,077
column_names ignored by load_dataset() when loading CSV file
open
1
2024-07-26T14:18:04
2024-07-30T07:52:26
null
luismsgomes
[]
### Describe the bug load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file. ### Steps to reproduce the bug Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg. ### Expected behavior The resulting dataset should have the specified column names **and** the first line of the file should be considered as data values. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - `huggingface_hub` version: 0.24.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
false
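For reference, the call described in issue 7077 above looks like the sketch below; per the report, `column_names` is currently ignored and the file's first line is consumed as the header rather than kept as data. The file name and column names are placeholders.

```python
from datasets import load_dataset

# Expected: columns named col_a/col_b, with the file's first line treated as data.
# Reported: the first line of data.csv is used as the header instead.
ds = load_dataset(
    "csv",
    data_files="data.csv",
    column_names=["col_a", "col_b"],
    split="train",
)
print(ds.column_names)
```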
2,432,275,393
https://api.github.com/repos/huggingface/datasets/issues/7076
https://github.com/huggingface/datasets/pull/7076
7,076
🧪 Do not mock create_commit
closed
1
2024-07-26T13:44:42
2024-07-27T05:48:17
2024-07-27T05:48:17
coyotte508
[]
null
true
2,432,027,412
https://api.github.com/repos/huggingface/datasets/issues/7075
https://github.com/huggingface/datasets/pull/7075
7,075
Update required soxr version from pre-release to release
closed
2
2024-07-26T11:24:35
2024-07-26T11:46:52
2024-07-26T11:40:49
albertvillanova
[]
Update required `soxr` version from pre-release to release 0.4.0: https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0
true
2,431,772,703
https://api.github.com/repos/huggingface/datasets/issues/7074
https://github.com/huggingface/datasets/pull/7074
7,074
Fix CI by temporarily marking test_convert_to_parquet as expected to fail
closed
2
2024-07-26T09:03:33
2024-07-26T09:23:33
2024-07-26T09:16:12
albertvillanova
[]
As a hotfix for CI, temporarily mark test_convert_to_parquet as expected to fail. Fix #7073. Revert once root cause is fixed.
true
2,431,706,568
https://api.github.com/repos/huggingface/datasets/issues/7073
https://github.com/huggingface/datasets/issues/7073
7,073
CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError
closed
9
2024-07-26T08:27:41
2024-07-27T05:48:02
2024-07-26T09:16:13
albertvillanova
[]
See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756 ``` FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64) Revision Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1. Invalid rev id: refs/pr/1 ``` ``` /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/hub.py:86: in convert_to_parquet dataset.push_to_hub( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/dataset_dict.py:1722: in push_to_hub split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:5511: in _push_parquet_shards_to_hub api.preupload_lfs_files( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/hf_api.py:4231: in preupload_lfs_files _fetch_upload_modes( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118: in _inner_fn return fn(*args, **kwargs) /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:507: in _fetch_upload_modes hf_raise_for_status(resp) ```
false
2,430,577,916
https://api.github.com/repos/huggingface/datasets/issues/7072
https://github.com/huggingface/datasets/issues/7072
7,072
nm
closed
0
2024-07-25T17:03:24
2024-07-25T20:36:11
2024-07-25T20:36:11
brettdavies
[]
null
false
2,430,313,011
https://api.github.com/repos/huggingface/datasets/issues/7071
https://github.com/huggingface/datasets/issues/7071
7,071
Filter hangs
open
0
2024-07-25T15:29:05
2024-07-25T15:36:59
null
lucienwalewski
[]
### Describe the bug When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I have converted the data to the Parquet format. ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('lcolonn/patfig', split='test') ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y') ``` Eventually I ctrl+C and I obtain this stack trace: ``` >>> ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y') Filter: 0%| | 0/998 [00:00<?, ? examples/s]Filter: 0%| | 0/998 [00:35<?, ? examples/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/fingerprint.py", line 482, in wrapper out = func(dataset, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3714, in filter indices = self.map( ^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 602, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3161, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3552, in _map_single batch = apply_function_on_filtered_inputs( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 6478, in get_indices_from_mask_function num_examples = len(batch[next(iter(batch.keys()))]) ~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 273, in __getitem__ value = self.format(key) ^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 376, in format return self.formatter.format_column(self.pa_table.select([key])) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 443, in format_column column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 219, in decode_column return self.features.decode_column(column, column_name) if self.features else column ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in decode_column [decode_nested_example(self[column_name], value) if value is not None else None for value in column] File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in <listcomp> [decode_nested_example(self[column_name], value) if value is not None else None for value in column] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 1351, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/image.py", line 188, in decode_example image.load() # to avoid "Too many open files" errors ^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/PIL/ImageFile.py", line 293, in load n, err_code = decoder.decode(b) ^^^^^^^^^^^^^^^^^ KeyboardInterrupt ``` Warning! This can even seem to cause some computers to crash. ### Expected behavior Should return the filtered dataset ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - `huggingface_hub` version: 0.24.0 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
false
2,430,285,235
https://api.github.com/repos/huggingface/datasets/issues/7070
https://github.com/huggingface/datasets/issues/7070
7,070
how set_transform affects batch size?
open
0
2024-07-25T15:19:34
2024-07-25T15:19:34
null
VafaKnm
[]
### Describe the bug I am trying to fine-tune w2v-bert for ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So i change the preprocessing function to this: ``` def prepare_dataset(batch): input_features = processor(batch["audio"], sampling_rate=16000).input_features[0] input_length = len(input_features) labels = processor.tokenizer(batch["text"], padding=False).input_ids batch = { "input_features": [input_features], "input_length": [input_length], "labels": [labels] } return batch train_ds.set_transform(prepare_dataset) val_ds.set_transform(prepare_dataset) ``` After this, I also had to change the DataCollatorCTCWithPadding class like this: ``` @dataclass class DataCollatorCTCWithPadding: processor: Wav2Vec2BertProcessor padding: Union[bool, str] = True def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # Separate input_features and labels input_features = [{"input_features": feature["input_features"][0]} for feature in features] labels = [feature["labels"][0] for feature in features] # Pad input features batch = self.processor.pad( input_features, padding=self.padding, return_tensors="pt", ) # Pad and process labels label_features = self.processor.tokenizer.pad( {"input_ids": labels}, padding=self.padding, return_tensors="pt", ) labels = label_features["input_ids"] attention_mask = label_features["attention_mask"] # Replace padding with -100 to ignore these tokens during loss calculation labels = labels.masked_fill(attention_mask.ne(1), -100) batch["labels"] = labels return batch ``` But now a strange thing is happening, no matter how much I increase the batch size, the amount of V-RAM GPU usage does not change, while the number of total steps in the progress-bar (logging) changes. Is this normal or have I made a mistake? ### Steps to reproduce the bug i can share my code if needed ### Expected behavior Equal to the batch size value, the set_transform function is applied to the dataset and given to the model as a batch. ### Environment info all updated versions
false
2,429,281,339
https://api.github.com/repos/huggingface/datasets/issues/7069
https://github.com/huggingface/datasets/pull/7069
7,069
Fix push_to_hub by not calling create_branch if PR branch
closed
8
2024-07-25T07:50:04
2024-07-31T07:10:07
2024-07-30T10:51:01
albertvillanova
[]
Fix push_to_hub by not calling create_branch if PR branch (e.g. `refs/pr/1`). Note that currently create_branch raises a 400 Bad Request error if the user passes a PR branch (e.g. `refs/pr/1`). EDIT: ~~Fix push_to_hub by not calling create_branch if branch exists.~~ Note that currently create_branch raises a 403 Forbidden error even if all these conditions are met: - exist_ok is passed - the branch already exists - the user does not have WRITE permission Fix #7067. Related issue: - https://github.com/huggingface/huggingface_hub/issues/2419
true
2,426,657,434
https://api.github.com/repos/huggingface/datasets/issues/7068
https://github.com/huggingface/datasets/pull/7068
7,068
Fix prepare_single_hop_path_and_storage_options
closed
2
2024-07-24T05:52:34
2024-07-29T07:02:07
2024-07-29T06:56:15
albertvillanova
[]
Fix `_prepare_single_hop_path_and_storage_options`: - Do not pass HF authentication headers and HF user-agent to non-HF HTTP URLs - Do not overwrite passed `storage_options` nested values: - Before, when passed ```DownloadConfig(storage_options={"https": {"client_kwargs": {"raise_for_status": True}}})```, it was overwritten to ```{"https": {"client_kwargs": {"trust_env": True}}}``` - Now, the result combines both: ```{"https": {"client_kwargs": {"trust_env": True, "raise_for_status": True}}}```
true
2,425,460,168
https://api.github.com/repos/huggingface/datasets/issues/7067
https://github.com/huggingface/datasets/issues/7067
7,067
Convert_to_parquet fails for datasets with multiple configs
closed
3
2024-07-23T15:09:33
2024-07-30T10:51:02
2024-07-30T10:51:02
HuangZhen02
[]
If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error: ``` Traceback (most recent call last): File "/opt/anaconda3/envs/dl/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/datasets_cli.py", line 41, in main service.run() File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/convert_to_parquet.py", line 83, in run dataset.push_to_hub( File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/dataset_dict.py", line 1713, in push_to_hub api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 5503, in create_branch hf_raise_for_status(response) File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status raise BadRequestError(message, response=response) from e huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-669fc665-7c2e80d75f4337496ee95402;731fcdc7-0950-4eec-99cf-ce047b8d003f) Bad request: Invalid reference for a branch: refs/pr/1 ```
false
2,425,125,160
https://api.github.com/repos/huggingface/datasets/issues/7066
https://github.com/huggingface/datasets/issues/7066
7,066
One subset per file in repo ?
open
1
2024-07-23T12:43:59
2025-06-26T08:24:50
null
lhoestq
[]
Right now we consider all the files of a dataset to be the same data, e.g. ``` single_subset_dataset/ ├── train0.jsonl ├── train1.jsonl └── train2.jsonl ``` but in cases like this, each file is actually a different subset of the dataset and should be loaded separately ``` many_subsets_dataset/ ├── animals.jsonl ├── trees.jsonl └── metadata.jsonl ``` It would be nice to detect those subsets automatically using a simple heuristic. For example, we could group files together if their path names are the same except for some digits?
false
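A standalone sketch (not from issue 7066 above) of the digit-stripping heuristic it proposes: files whose names differ only by digits fall into one subset, while differently named files become separate subsets.

```python
import re
from collections import defaultdict
from typing import Dict, List

def group_into_subsets(filenames: List[str]) -> Dict[str, List[str]]:
    """Group files whose names are identical once digits are removed."""
    subsets: Dict[str, List[str]] = defaultdict(list)
    for name in filenames:
        key = re.sub(r"\d+", "", name)  # "train0.jsonl" -> "train.jsonl"
        subsets[key].append(name)
    return dict(subsets)

# One subset: the names only differ by digits.
print(group_into_subsets(["train0.jsonl", "train1.jsonl", "train2.jsonl"]))
# {'train.jsonl': ['train0.jsonl', 'train1.jsonl', 'train2.jsonl']}

# Three subsets: three distinct keys remain after stripping digits.
print(group_into_subsets(["animals.jsonl", "trees.jsonl", "metadata.jsonl"]))
```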
2,424,734,953
https://api.github.com/repos/huggingface/datasets/issues/7065
https://github.com/huggingface/datasets/issues/7065
7,065
Cannot get item after loading from disk and then converting to iterable.
open
0
2024-07-23T09:37:56
2024-07-23T09:37:56
null
happyTonakai
[]
### Describe the bug The dataset generated from local file works fine. ```py root = "/home/data/train" file_list1 = glob(os.path.join(root, "*part1.flac")) file_list2 = glob(os.path.join(root, "*part2.flac")) ds = ( Dataset.from_dict({"part1": file_list1, "part2": file_list2}) .cast_column("part1", Audio(sampling_rate=None, mono=False)) .cast_column("part2", Audio(sampling_rate=None, mono=False)) ) ids = ds.to_iterable_dataset(128) ids = ids.shuffle(buffer_size=10000, seed=42) dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True) for batch in dataloader: break ``` But after saving it to disk and then loading it from disk, I cannot get data as expected. ```py root = "/home/data/train" file_list1 = glob(os.path.join(root, "*part1.flac")) file_list2 = glob(os.path.join(root, "*part2.flac")) ds = ( Dataset.from_dict({"part1": file_list1, "part2": file_list2}) .cast_column("part1", Audio(sampling_rate=None, mono=False)) .cast_column("part2", Audio(sampling_rate=None, mono=False)) ) ds.save_to_disk("./train") ds = datasets.load_from_disk("./train") ids = ds.to_iterable_dataset(128) ids = ids.shuffle(buffer_size=10000, seed=42) dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True) for batch in dataloader: break ``` After a long time waiting, an error occurs: ``` Loading dataset from disk: 100%|█████████████████████████████████████████████████████████████████████████| 165/165 [00:00<00:00, 6422.18it/s] Traceback (most recent call last): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data data = self._data_queue.get(timeout=timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/queues.py", line 113, in get if not self._poll(timeout): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 257, in poll return self._poll(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 424, in _poll r = wait([self], timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 931, in wait ready = selector.select(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/selectors.py", line 416, in select fd_event_list = self._selector.poll(timeout) File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler _error_if_any_worker_fails() RuntimeError: DataLoader worker (pid 3490529) is killed by signal: Killed. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module> cli.main() File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main run() File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file runpy.run_path(target, run_name="__main__") File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path return _run_module_code(code, init_globals, run_name, File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code _run_code(code, mod_globals, init_globals, File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code exec(code, run_globals) File "/home/hanzerui/workspace/NetEase/test/test_datasets.py", line 60, in <module> for batch in dataloader: File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__ data = self._next_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data idx, data = self._get_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data success, data = self._try_get_data() File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data raise RuntimeError(f'DataLoader worker (pid(s) {pids_str}) exited unexpectedly') from e RuntimeError: DataLoader worker (pid(s) 3490529) exited unexpectedly ``` It seems that streaming is not supported by `laod_from_disk`, so does that mean I cannot convert it to iterable? ### Steps to reproduce the bug 1. Create a `Dataset` from local files with `from_dict` 2. Save it to disk with `save_to_disk` 3. Load it from disk with `load_from_disk` 4. Convert to iterable with `to_iterable_dataset` 5. Loop the dataset ### Expected behavior Get items faster than the original dataset generated from dict. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.10.14 - `huggingface_hub` version: 0.23.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
false
2,424,613,104
https://api.github.com/repos/huggingface/datasets/issues/7064
https://github.com/huggingface/datasets/pull/7064
7,064
Add `batch` method to `Dataset` class
closed
6
2024-07-23T08:40:43
2024-07-25T13:51:25
2024-07-25T13:45:20
lappemic
[]
This PR introduces a new `batch` method to the `Dataset` class, aligning its functionality with the `IterableDataset.batch()` method (implemented in #7054). The implementation likewise uses the existing `map` method for efficient batching of examples. Key changes: - Add `batch` method to `Dataset` class in `arrow_dataset.py` - Utilize `map` method for batching Closes #7063 Once the approach is approved, I will create the tests and update the documentation.
true
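A hedged sketch of the map-based batching idea PR 7064 above describes, written against the public `Dataset.map` API rather than the PR's actual code: each output row holds the list of up to `batch_size` original values per column.

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})
batch_size = 4

# With batched=True, the function receives a dict of lists of up to
# batch_size items; wrapping each list in another list collapses the whole
# batch into a single output row.
batched = ds.map(
    lambda batch: {k: [v] for k, v in batch.items()},
    batched=True,
    batch_size=batch_size,
)

print(batched[0])  # {'x': [0, 1, 2, 3]}
print(batched[2])  # {'x': [8, 9]}
```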
2,424,488,648
https://api.github.com/repos/huggingface/datasets/issues/7063
https://github.com/huggingface/datasets/issues/7063
7,063
Add `batch` method to `Dataset`
closed
0
2024-07-23T07:36:59
2024-07-25T13:45:21
2024-07-25T13:45:21
lappemic
[ "enhancement" ]
### Feature request Add a `batch` method to the Dataset class, similar to the one recently implemented for `IterableDataset` in PR #7054. ### Motivation A batched iteration speeds up data loading significantly (see e.g. #6279) ### Your contribution I plan to open a PR to implement this.
false
2,424,467,484
https://api.github.com/repos/huggingface/datasets/issues/7062
https://github.com/huggingface/datasets/pull/7062
7,062
Avoid calling http_head for non-HTTP URLs
closed
2
2024-07-23T07:25:09
2024-07-23T14:28:27
2024-07-23T14:21:08
albertvillanova
[]
Avoid calling `http_head` for non-HTTP URLs, by adding an `else` statement. Currently, it makes an unnecessary HTTP call (which adds latency) for non-HTTP protocols, like FTP, S3,... I discovered this while working on an unrelated issue.
true
2,423,786,881
https://api.github.com/repos/huggingface/datasets/issues/7061
https://github.com/huggingface/datasets/issues/7061
7,061
Custom Dataset | Still Raise Error while handling errors in _generate_examples
open
0
2024-07-22T21:18:12
2024-09-09T14:48:07
null
hahmad2008
[]
### Describe the bug I follow this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in custom dataset. I am writing a dataset script which read jsonl files and i need to handle errors and continue reading files without raising exception and exit the execution. ``` def _generate_examples(self, filepaths): errors=[] id_ = 0 for filepath in filepaths: try: with open(filepath, 'r') as f: for line in f: json_obj = json.loads(line) yield id_, json_obj id_ += 1 except Exception as exc: logger.error(f"error occur at filepath: {filepath}") errors.append(error) ``` seems the logger.error is printed but still exception is raised the the run is exit. ``` Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841 ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl Traceback (most recent call last): File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples json_obj = json.loads(line) File "myenv/lib/python3.8/json/__init__.py", line 357, in loads return _default_decoder.decode(s) File "myenv/lib/python3.8/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode obj, end = self.scan_once(s, idx) json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3) Generating train split: 0 examples [00:06, ? examples/s]> RemoteTraceback: """ Traceback (most recent call last): File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single num_examples, num_bytes = writer.finalize() File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize raise SchemaInferenceError("Please pass `features` or at least one example when writing data") datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data The above exception was the direct cause of the following exception: Traceback (most recent call last): File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset """ The above exception was the direct cause of the following exception: │ │ │ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. 
│ │ py:1377 in <listcomp> │ │ │ │ 1374 │ │ │ │ if all(async_result.ready() for async_result in async_results) and queue │ │ 1375 │ │ │ │ │ break │ │ 1376 │ │ # we get the result in case there's an error to raise │ │ ❱ 1377 │ │ [async_result.get() for async_result in async_results] │ │ 1378 │ │ │ │ ╭──────────────────────────────── locals ─────────────────────────────────╮ │ │ │ .0 = <list_iterator object at 0x7f2cc1f0ce20> │ │ │ │ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │ │ ╰─────────────────────────────────────────────────────────────────────────╯ │ │ │ │ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 │ │ in get │ │ │ │ 768 │ │ if self._success: │ │ 769 │ │ │ return self._value │ │ 770 │ │ else: │ │ ❱ 771 │ │ │ raise self._value │ │ 772 │ │ │ 773 │ def _set(self, i, obj): │ │ 774 │ │ self._success, self._value = obj │ │ │ │ ╭────────────────────────────── locals ──────────────────────────────╮ │ │ │ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │ │ │ timeout = None │ │ │ ╰────────────────────────────────────────────────────────────────────╯ │ DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug same as above ### Expected behavior should handle error and continue reading remaining files ### Environment info python 3.9
false
2,423,188,419
https://api.github.com/repos/huggingface/datasets/issues/7060
https://github.com/huggingface/datasets/pull/7060
7,060
WebDataset BuilderConfig
closed
1
2024-07-22T15:41:07
2024-07-23T13:28:44
2024-07-23T13:28:44
hlky
[]
This PR adds `WebDatasetConfig`. Closes #7055
true
2,422,827,892
https://api.github.com/repos/huggingface/datasets/issues/7059
https://github.com/huggingface/datasets/issues/7059
7,059
None values are skipped when reading jsonl in subobjects
open
0
2024-07-22T13:02:42
2024-07-22T13:02:53
null
PonteIneptique
[]
### Describe the bug I have been fighting against my machine since this morning only to find out this is some kind of a bug. When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around. E.g., let's take this example Here are two version of a same dataset: [not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz) [buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz) ### Steps to reproduce the bug 1. Load the `buggy.tar.gz` dataset 2. Print baseline of `dts = load_dataset("./data")["train"][0]["baselines]` 3. Load the `not-buggy.tar.gz` dataset 4. Print baseline of `dts = load_dataset("./data")["train"][0]["baselines]` ### Expected behavior Both should have 4 baseline entries: 1. Buggy should have None followed by three lists 2. Non-Buggy should have four lists, and the first one should be an empty list. One does not work, 2 works. Despite accepting None in another position than the first one. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.0 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
false
2,422,560,355
https://api.github.com/repos/huggingface/datasets/issues/7058
https://github.com/huggingface/datasets/issues/7058
7,058
New feature type: Document
open
0
2024-07-22T10:49:20
2024-07-22T10:49:20
null
severo
[]
It would be useful for PDF. https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069
false
2,422,498,520
https://api.github.com/repos/huggingface/datasets/issues/7057
https://github.com/huggingface/datasets/pull/7057
7,057
Update load_hub.mdx
closed
2
2024-07-22T10:17:46
2024-07-22T10:34:14
2024-07-22T10:28:10
severo
[]
null
true
2,422,192,257
https://api.github.com/repos/huggingface/datasets/issues/7056
https://github.com/huggingface/datasets/pull/7056
7,056
Make `BufferShuffledExamplesIterable` resumable
closed
8
2024-07-22T07:50:02
2025-01-31T05:34:20
2025-01-31T05:34:19
yzhangcs
[]
This PR aims to implement a resumable `BufferShuffledExamplesIterable`. Instead of saving the entire buffer content, which is very memory-intensive, the newly implemented `BufferShuffledExamplesIterable` saves only the minimal state necessary for recovery, e.g., the random generator states and the state of the first example in the buffer dict. The idea is that since the buffer size is limited, even if the entire buffer is discarded, we can rebuild it as long as the state of the oldest example is recorded. For buffer size $B$, the expected distance between when an example is pushed and when it is yielded is $d = \sum_{k=1}^{\infty} k\frac{1}{B} (1 - \frac{1}{B} )^{k-1} =B$. Simulation experiments support these claims: ```py from random import randint BUFFER_SIZE = 1024 dists = [] buffer = [] for i in range(10000000): if i < BUFFER_SIZE: buffer.append(i) else: index = randint(0, BUFFER_SIZE - 1) dists.append(i - buffer[index]) buffer[index] = i print(f"MIN DIST: {min(dists)}\nMAX DIST: {max(dists)}\nAVG DIST: {sum(dists) / len(dists):.2f}\n") ``` which produces the following output: ```py MIN DIST: 1 MAX DIST: 15136 AVG DIST: 1023.95 ``` The overall time for reconstructing the buffer and recovery should not be too long. The following code mimics the cases of resuming online tokenization by `datasets` and `StatefulDataLoader` under distributed scenarios, ```py import pickle import time from itertools import chain from typing import Any, Dict, List import torch from datasets import load_dataset from torchdata.stateful_dataloader import StatefulDataLoader from tqdm import tqdm from transformers import AutoTokenizer, DataCollatorForLanguageModeling tokenizer = AutoTokenizer.from_pretrained('fla-hub/gla-1.3B-100B') tokenizer.pad_token = tokenizer.eos_token data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) torch.manual_seed(42) def tokenize(examples: Dict[str, List[Any]]) -> Dict[str, List[List[int]]]: input_ids = tokenizer(examples['text'])['input_ids'] input_ids = list(chain(*input_ids)) total_length = len(input_ids) chunk_size = 2048 total_length = (total_length // chunk_size) * chunk_size # the last chunk smaller than chunk_size will be discarded return {'input_ids': [input_ids[i: i+chunk_size] for i in range(0, total_length, chunk_size)]} batch_size = 16 num_workers = 5 context_length = 2048 rank = 1 world_size = 32 prefetch_factor = 2 steps = 2048 path = 'fla-hub/slimpajama-test' dataset = load_dataset( path=path, split='train', streaming=True, trust_remote_code=True ) dataset = dataset.map(tokenize, batched=True, remove_columns=next(iter(dataset)).keys()) dataset = dataset.shuffle(seed=42) loader = StatefulDataLoader(dataset=dataset, batch_size=batch_size, collate_fn=data_collator, num_workers=num_workers, persistent_workers=False, prefetch_factor=prefetch_factor) start = time.time() for i, batch in tqdm(enumerate(loader)): if i == 0: print(f'{i}\n{batch["input_ids"]}') if i == steps - 1: print(f'{i}\n{batch["input_ids"]}') state_dict = loader.state_dict() if i == steps: print(f'{i}\n{batch["input_ids"]}') break print(f"{time.time() - start:.2f}s elapsed") print(f"{len(pickle.dumps(state_dict)) / 1024**2:.2f}MB states in total") for worker in state_dict['_snapshot']['_worker_snapshots'].keys(): print(f"{worker} {len(pickle.dumps(state_dict['_snapshot']['_worker_snapshots'][worker])) / 1024**2:.2f}MB") print(state_dict['_snapshot']['_worker_snapshots']['worker_0']['dataset_state']) loader = StatefulDataLoader(dataset=dataset, batch_size=batch_size, 
collate_fn=data_collator, num_workers=num_workers, persistent_workers=False, prefetch_factor=prefetch_factor) print("Loading state dict") loader.load_state_dict(state_dict) start = time.time() for batch in loader: print(batch['input_ids']) break print(f"{time.time() - start:.2f}s elapsed") ``` and the outputs are ```py 0 tensor([[ 909, 395, 19082, ..., 13088, 16232, 395], [ 601, 28705, 28770, ..., 28733, 923, 288], [21753, 15071, 13977, ..., 9369, 28723, 415], ..., [21763, 28751, 20300, ..., 28781, 28734, 4775], [ 354, 396, 10214, ..., 298, 429, 28770], [ 333, 6149, 28768, ..., 2773, 340, 351]]) 2047 tensor([[28723, 415, 3889, ..., 272, 3065, 2609], [ 403, 3214, 3629, ..., 403, 21163, 16434], [28723, 13, 28749, ..., 28705, 28750, 28734], ..., [ 2778, 2251, 28723, ..., 354, 684, 429], [ 5659, 298, 1038, ..., 5290, 297, 22153], [ 938, 28723, 1537, ..., 9123, 28733, 12154]]) 2048 tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739], [ 415, 23347, 622, ..., 3937, 2426, 28725], [28745, 4345, 28723, ..., 338, 28725, 583], ..., [ 1670, 28709, 5809, ..., 28734, 28760, 393], [ 340, 1277, 624, ..., 325, 28790, 1329], [ 523, 1144, 3409, ..., 359, 359, 17422]]) 65.97s elapsed 0.00MB states in total worker_0 0.00MB worker_1 0.00MB worker_2 0.00MB worker_3 0.00MB worker_4 0.00MB {'ex_iterable': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 14000}, 'num_examples_since_previous_state': 166, 'previous_state_example_idx': 7394, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 13000}}, 'num_taken': 6560, 'global_example_idx': 7560, 'buffer_state_dict': {'num_taken': 6560, 'global_example_idx': 356, 'index_offset': 0, 'first_state': {'ex_iterable': {'shard_idx': 0, 'shard_example_idx': 1000}, 'num_examples_since_previous_state': 356, 'previous_state_example_idx': 0, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0}}, 'bit_generator_state': {'state': {'state': 274674114334540486603088602300644985544, 'inc': 332724090758049132448979897138935081983}, 'bit_generator': 'PCG64', 'has_uint32': 0, 'uinteger': 0}}} Loading state dict tensor([[ 769, 278, 12531, ..., 28721, 19309, 28739], [ 415, 23347, 622, ..., 3937, 2426, 28725], [28745, 4345, 28723, ..., 338, 28725, 583], ..., [ 1670, 28709, 5809, ..., 28734, 28760, 393], [ 340, 1277, 624, ..., 325, 28790, 1329], [ 523, 1144, 3409, ..., 359, 359, 17422]]) 24.60s elapsed ``` Not sure if this PR complies with the `datasets` code style. Looking for your help @lhoestq, also very willing to further improve the code if any suggestions are given.
true
2,421,708,891
https://api.github.com/repos/huggingface/datasets/issues/7055
https://github.com/huggingface/datasets/issues/7055
7,055
WebDataset with different prefixes are unsupported
closed
8
2024-07-22T01:14:19
2024-07-24T13:26:30
2024-07-23T13:28:46
hlky
[]
### Describe the bug Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k) Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) an error is given. ``` The TAR archives of the dataset should be in WebDataset format, but the files in the archive don't share the same prefix or the same types. ``` The purpose of this check is unclear because PyArrow supports different keys. Removing the check allows the dataset to be loaded and there's no issue when iterating through the dataset. ``` >>> from datasets import load_dataset >>> path = "shards/*.tar" >>> dataset = load_dataset("webdataset", data_files={"train": path}, split="train", streaming=True) Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 152/152 [00:00<00:00, 56458.93it/s] >>> dataset IterableDataset({ features: ['__key__', '__url__', '1.jpg', '2.jpg', '3.jpg', '4.jpg', 'json'], n_shards: 152 }) ``` ### Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("bigdata-pw/fashion-150k") ``` ### Expected behavior Dataset loads without error ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.14.0-467.el9.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.19 - `huggingface_hub` version: 0.23.4 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
false
2,418,548,995
https://api.github.com/repos/huggingface/datasets/issues/7054
https://github.com/huggingface/datasets/pull/7054
7,054
Add batching to `IterableDataset`
closed
5
2024-07-19T10:11:47
2024-07-23T13:25:13
2024-07-23T10:34:28
lappemic
[]
I've taken a first pass at implementing a batched `IterableDataset` as requested in issue #6279. This PR adds a new `BatchedExamplesIterable` class and a `.batch()` method to the `IterableDataset` class. The main changes are: 1. A new `BatchedExamplesIterable` that groups examples into batches. 2. A `.batch()` method for `IterableDataset` to easily create batched versions. 3. Support for shuffling and sharding to work with PyTorch DataLoader and multiple workers. I'm not sure if this is exactly what you had in mind and I also have not fully tested it yet, so I'd really appreciate your feedback. Does this seem like it's heading in the right direction? I'm happy to make any changes or explore different approaches if needed. Pinging @lhoestq
true
2,416,423,791
https://api.github.com/repos/huggingface/datasets/issues/7053
https://github.com/huggingface/datasets/issues/7053
7,053
Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple`
closed
2
2024-07-18T13:42:35
2024-07-18T15:17:42
2024-07-18T15:16:18
MatthewYZhang
[]
### Describe the bug in data_files.py, line 332, `fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)` If we run the code on AWS, fs.protocol will be a tuple like `('file', 'local')`, so `isinstance(fs.protocol, str) == False` and `protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""` will raise `TypeError: can only concatenate tuple (not "str") to tuple`. ### Steps to reproduce the bug Steps to reproduce: 1. Run on a cloud server like AWS, 2. `import datasets.data_files as datafile` 3. datafile.resolve_pattern('path/to/dataset', '.') 4. `TypeError: can only concatenate tuple (not "str") to tuple` ### Expected behavior Should return the path of the dataset, with fs.protocol at the beginning ### Environment info - `datasets` version: 2.14.0 - Platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.19 - Huggingface_hub version: 0.23.5 - PyArrow version: 16.1.0 - Pandas version: 1.1.5
false
2,411,682,730
https://api.github.com/repos/huggingface/datasets/issues/7052
https://github.com/huggingface/datasets/pull/7052
7,052
Adding `Music` feature for symbolic music modality (MIDI, abc)
closed
0
2024-07-16T17:26:04
2024-07-29T06:47:55
2024-07-29T06:47:55
Natooz
[]
⚠️ (WIP) ⚠️ ### What this PR does This PR adds a `Music` feature for the symbolic music modality, in particular [MIDI](https://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface) and [abc](https://en.wikipedia.org/wiki/ABC_notation) files. ### Motivations These two file formats are widely used in the [Music Information Retrieval (MIR)](https://en.wikipedia.org/wiki/Music_information_retrieval) field for tasks such as music generation, music transcription or music synthesis. Having a dedicated feature in the datasets library would both encourage researchers to share datasets of this modality and make them more easily usable for end users, benefitting from the perks of the library. These file formats are supported by [symusic](https://github.com/Yikai-Liao/symusic), a lightweight Python library with C bindings (using nanobind) that allows efficiently reading, writing and manipulating them. The library is actively developed, and can in the future also implement other file formats such as [musicXML](https://en.wikipedia.org/wiki/MusicXML). As such, this PR relies on it. The music data can then easily be tokenized with appropriate tokenizers such as [MidiTok](https://github.com/Natooz/MidiTok) or converted to pianoroll matrices by symusic. **Jul 16th 2024:** * the tests for the `Music` feature are currently failing due to non-supported access to the LazyBatch in `test_dataset_with_music_feature_map` and `test_dataset_with_music_feature_map_resample_music` (see TODOs). I am a beginner with pyArrow, and I'll take any advice to make this work; * additional tests including the `Music` feature with parquet and WebDataset should be implemented. As of right now, I am waiting for your feedback before taking further steps; * a `MusicFolder` should also be implemented to comply with the usages of the `Image` and `Audio` features, waiting for your feedback too. CCing @lhoestq and @albertvillanova
true
2,409,353,929
https://api.github.com/repos/huggingface/datasets/issues/7051
https://github.com/huggingface/datasets/issues/7051
7,051
How to set_epoch with interleave_datasets?
closed
7
2024-07-15T18:24:52
2024-08-05T20:58:04
2024-08-05T20:58:04
jonathanasdf
[]
Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples. I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (eg. calling set_epoch) Of course I want to interleave as IterableDatasets / streaming mode so B doesn't have to get tokenized completely at the start. How could I achieve this? I was thinking something like, if I wrap dataset A in some new IterableDataset with from_generator() and manually call set_epoch before interleaving it? But I'm not sure how to keep the number of shards in that dataset... Something like ``` dataset_a = load_dataset(...) dataset_b = load_dataset(...) def epoch_shuffled_dataset(ds): # How to make this maintain the number of shards in ds?? for epoch in itertools.count(): ds.set_epoch(epoch) yield from iter(ds) shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a}) interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted') ```
false
2,409,048,733
https://api.github.com/repos/huggingface/datasets/issues/7050
https://github.com/huggingface/datasets/pull/7050
7,050
add checkpoint and resume title in docs
closed
2
2024-07-15T15:38:04
2024-07-15T16:06:15
2024-07-15T15:59:56
lhoestq
[]
(minor) just to make it more prominent in the docs page for the soon-to-be-released new torchdata
true
2,408,514,366
https://api.github.com/repos/huggingface/datasets/issues/7049
https://github.com/huggingface/datasets/issues/7049
7,049
Save nparray as list
closed
5
2024-07-15T11:36:11
2024-07-18T11:33:34
2024-07-18T11:33:34
Sakurakdx
[]
### Describe the bug When I use the `map` function to convert images into features, datasets saves the numpy array as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision? ### Steps to reproduce the bug the map function ```python def convert_image_to_features(inst, processor, image_dir): image_file = inst["image_url"] file = image_file.split("/")[-1] image_path = os.path.join(image_dir, file) image = Image.open(image_path) image = image.convert("RGBA") inst["pixel_values"] = processor(images=image, return_tensors="np")["pixel_values"] return inst ``` main function ```python map_fun = partial( convert_image_to_features, processor=processor, image_dir=image_dir ) ds = ds.map(map_fun, batched=False, num_proc=20) print(type(ds[0]["pixel_values"])) ``` ### Expected behavior (type < list>) ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.35 - Python version: 3.11.5 - `huggingface_hub` version: 0.23.4 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
false
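A minimal sketch of the `set_format`-style workaround mentioned in the report above, assuming a column named "pixel_values"; it asks datasets to hand the column back as numpy arrays rather than Python lists (whether any precision was lost is determined by the Arrow dtype the column was stored with, not by the formatting call):

```python
# Sketch only: tiny stand-in dataset with a hypothetical "pixel_values" column.
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"pixel_values": [np.zeros((1, 3, 4), dtype=np.float32)]})
# Return this column as numpy arrays instead of nested Python lists.
ds = ds.with_format("np", columns=["pixel_values"], output_all_columns=True)
print(type(ds[0]["pixel_values"]))  # <class 'numpy.ndarray'>
```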
2,408,487,547
https://api.github.com/repos/huggingface/datasets/issues/7048
https://github.com/huggingface/datasets/issues/7048
7,048
ImportError: numpy.core.multiarray when using `filter`
closed
4
2024-07-15T11:21:04
2024-07-16T10:11:25
2024-07-16T10:11:25
kamilakesbi
[]
### Describe the bug I can't apply the filter method on my dataset. ### Steps to reproduce the bug The following snippet generates a bug: ```python from datasets import load_dataset ami = load_dataset('kamilakesbi/ami', 'ihm') ami['train'].filter( lambda example: example["file_name"] == 'EN2001a' ) ``` I get the following error: `ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).` ### Expected behavior It should work properly! ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
false
2,406,495,084
https://api.github.com/repos/huggingface/datasets/issues/7047
https://github.com/huggingface/datasets/issues/7047
7,047
Save Dataset as Sharded Parquet
open
2
2024-07-12T23:47:51
2024-07-17T12:07:08
null
tom-p-reichel
[ "enhancement" ]
### Feature request `to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically. ### Motivation This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single shard parquet file* which pyarrow, apache spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* the reason why this is happening is because it is a single shard. Making sharding the default behavior puts datasets in parity with other frameworks, such as spark, which automatically shard when a large dataset is saved as parquet. ### Your contribution I could change the logic here https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158 to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than file handle.
false
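Until sharded output is built in, a rough workaround sketch (not the proposed `pyarrow.dataset.write_dataset` change) is to shard manually before writing; the dataset name, output directory and shard count below are placeholders:

```python
# Sketch: write N parquet shards by hand instead of one monolithic file.
import os
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")  # stand-in for the large dataset
num_shards = 8  # placeholder; pick so each shard stays comfortably small
os.makedirs("out", exist_ok=True)
for i in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=i, contiguous=True)
    shard.to_parquet(f"out/data-{i:05d}-of-{num_shards:05d}.parquet")
```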
2,405,485,582
https://api.github.com/repos/huggingface/datasets/issues/7046
https://github.com/huggingface/datasets/pull/7046
7,046
Support librosa and numpy 2.0 for Python 3.10
closed
2
2024-07-12T12:42:47
2024-07-12T13:04:40
2024-07-12T12:58:17
albertvillanova
[]
Support librosa and numpy 2.0 for Python 3.10 by installing soxr 0.4.0b1 pre-release: - https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0b1 - https://github.com/dofuuz/python-soxr/issues/28
true
2,405,447,858
https://api.github.com/repos/huggingface/datasets/issues/7045
https://github.com/huggingface/datasets/pull/7045
7,045
Fix tensorflow min version depending on Python version
closed
2
2024-07-12T12:20:23
2024-07-12T12:38:53
2024-07-12T12:33:00
albertvillanova
[]
Fix tensorflow min version depending on Python version. Related to: - #6991
true
2,405,002,987
https://api.github.com/repos/huggingface/datasets/issues/7044
https://github.com/huggingface/datasets/pull/7044
7,044
Mark tests that require librosa
closed
2
2024-07-12T08:06:59
2024-07-12T09:06:32
2024-07-12T09:00:09
albertvillanova
[]
Mark tests that require `librosa`. Note that `librosa` is an optional dependency (installed with `audio` option) and we should be able to test environments without that library installed. This is the case if we want to test Numpy 2.0, which is currently incompatible with `librosa` due to its dependency on `soxr`: - https://github.com/dofuuz/python-soxr/issues/28
true
2,404,951,714
https://api.github.com/repos/huggingface/datasets/issues/7043
https://github.com/huggingface/datasets/pull/7043
7,043
Add decorator as explicit test dependency
closed
2
2024-07-12T07:35:23
2024-07-12T08:12:55
2024-07-12T08:07:10
albertvillanova
[]
Add decorator as explicit test dependency. We use `decorator` library in our CI test since PR: - #4845 However we did not add it as an explicit test requirement, and we depended on it indirectly through other libraries' dependencies. I discovered this while testing Numpy 2.0 and removing incompatible libraries.
true
2,404,605,836
https://api.github.com/repos/huggingface/datasets/issues/7042
https://github.com/huggingface/datasets/pull/7042
7,042
Improved the tutorial by adding a link for loading datasets
closed
1
2024-07-12T03:49:54
2024-08-15T10:07:44
2024-08-15T10:01:59
AmboThom
[]
Improved the tutorial by letting readers know about loading datasets with common files and including a link. I left the local files section alone because the methods were already listed with code snippets.
true
2,404,576,038
https://api.github.com/repos/huggingface/datasets/issues/7041
https://github.com/huggingface/datasets/issues/7041
7,041
`sort` after `filter` unreasonably slow
closed
2
2024-07-12T03:29:27
2025-04-29T09:49:25
2025-04-29T09:49:25
Tobin-rgb
[]
### Describe the bug as the title says ... ### Steps to reproduce the bug `sort` seems to be normal. ```python from datasets import Dataset import random nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)] ds = Dataset.from_list(nums) print("start sort") ds = ds.sort("k") print("finish sort") ``` but `sort` after `filter` is extremely slow. ```python from datasets import Dataset import random nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)] ds = Dataset.from_list(nums) ds = ds.filter(lambda x:x > 100, input_columns="k") print("start sort") ds = ds.sort("k") print("finish sort") ``` ### Expected behavior Is this a bug, or is it a misuse of the `sort` function? ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.10.0
false
2,402,918,335
https://api.github.com/repos/huggingface/datasets/issues/7040
https://github.com/huggingface/datasets/issues/7040
7,040
load `streaming=True` dataset with downloaded cache
open
2
2024-07-11T11:14:13
2024-07-11T14:11:56
null
wanghaoyucn
[]
### Describe the bug We build a dataset which contains several hdf5 files and write a script using `h5py` to generate the dataset. The hdf5 files are large and the processed dataset cache takes more disk space. So we hope to try a streaming iterable dataset. Unfortunately, `h5py` can't convert a remote URL into an hdf5 file descriptor. So we use `fsspec` as an interface like below: ```python def _generate_examples(self, filepath, split): for file in filepath: with fsspec.open(file, "rb") as fs: with h5py.File(fs, "r") as fp: # for event_id in sorted(list(fp.keys())): event_ids = list(fp.keys()) ...... ``` ### Steps to reproduce the bug The `fsspec` approach works, but it takes 10+ min to print the first 10 examples, which is even longer than the downloading time. I'm not sure if it just caches the whole hdf5 file and then generates the examples. ### Expected behavior So does the following make sense so far? 1. download the files ```python dataset = datasets.load_dataset('path/to/myscripts', split="train", name="event", trust_remote_code=True) ``` 2. load the iterable dataset faster (using the raw file cache at path `.cache/huggingface/datasets/downloads`) ```python dataset = datasets.load_dataset('path/to/myscripts', split="train", name="event", trust_remote_code=True, streaming=True) ``` I ran some tests, but the code above doesn't get the expected result. I'm not sure if this is supported. I also found issue #6327 . It seemed similar to mine, but I couldn't find a solution. ### Environment info - `datasets` = 2.18.0 - `h5py` = 3.10.0 - `fsspec` = 2023.10.0
false
2,402,403,390
https://api.github.com/repos/huggingface/datasets/issues/7039
https://github.com/huggingface/datasets/pull/7039
7,039
Fix export to JSON when dataset larger than batch size
open
3
2024-07-11T06:52:22
2024-09-28T06:10:00
null
albertvillanova
[]
Fix export to JSON (`lines=False`) when dataset larger than batch size. Fix #7037.
true
2,400,192,419
https://api.github.com/repos/huggingface/datasets/issues/7037
https://github.com/huggingface/datasets/issues/7037
7,037
A bug of Dataset.to_json() function
open
2
2024-07-10T09:11:22
2024-09-22T13:16:07
null
LinglingGreat
[ "bug" ]
### Describe the bug When using the Dataset.to_json() function, an unexpected error occurs if the parameter is set to lines=False. The stored data should be in the form of a list, but it actually turns into multiple lists, which causes an error when reading the data again. The reason is that to_json() writes to the file in several segments based on the batch size. This is not a problem when lines=True, but it is incorrect when lines=False, because writing in several passes produces multiple lists (when len(dataset) > batch_size). ### Steps to reproduce the bug try this code: ```python from datasets import load_dataset import json train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"] output_path = "./harmless-base_hftojs.json" print(len(train_dataset)) train_dataset.to_json(output_path, lines=False, force_ascii=False, indent=2) with open(output_path, encoding="utf-8") as f: data = json.loads(f.read()) ``` It raises an error: json.decoder.JSONDecodeError: Extra data: line 4003 column 1 (char 1373709) Extra square brackets have appeared here: <img width="265" alt="image" src="https://github.com/huggingface/datasets/assets/26499566/81492332-386d-42e8-88d1-b6d4ae3682cc"> ### Expected behavior The code runs normally. ### Environment info datasets=2.20.0
false
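For datasets that still fit in memory, a hedged workaround sketch for the report above is to serialize in a single `json.dump` call, which sidesteps the batched writer that emits one list per batch:

```python
# Sketch: dump the whole split at once so only a single top-level JSON list is written.
import json
from datasets import load_dataset

train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"]
with open("./harmless-base_hftojs.json", "w", encoding="utf-8") as f:
    json.dump(train_dataset.to_list(), f, ensure_ascii=False, indent=2)
```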
2,400,035,672
https://api.github.com/repos/huggingface/datasets/issues/7036
https://github.com/huggingface/datasets/pull/7036
7,036
Fix doc generation when NamedSplit is used as parameter default value
closed
2
2024-07-10T07:58:46
2024-07-26T07:58:00
2024-07-26T07:51:52
albertvillanova
[]
Fix doc generation when `NamedSplit` is used as parameter default value. Fix #7035.
true
2,400,021,225
https://api.github.com/repos/huggingface/datasets/issues/7035
https://github.com/huggingface/datasets/issues/7035
7,035
Docs are not generated when a parameter defaults to a NamedSplit value
closed
0
2024-07-10T07:51:24
2024-07-26T07:51:53
2024-07-26T07:51:53
albertvillanova
[ "maintenance" ]
While generating the docs, we get an error when some parameter defaults to a `NamedSplit` value, like: ```python def call_function(split=Split.TRAIN): ... ``` The error is: ValueError: Equality not supported between split train and <class 'inspect._empty'> See: https://github.com/huggingface/datasets/actions/runs/9869660902/job/27254359863?pr=7015 ``` Building the MDX files: 97%|█████████▋| 58/60 [00:00<00:00, 91.94it/s] Traceback (most recent call last): File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 197, in build_mdx_files content, new_anchors, source_files, errors = resolve_autodoc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 123, in resolve_autodoc doc = autodoc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 499, in autodoc method_doc, check = document_object( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 395, in document_object signature = format_signature(obj) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 126, in format_signature if param.default != inspect._empty: File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 136, in __ne__ return not self.__eq__(other) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 379, in __eq__ raise ValueError(f"Equality not supported between split {self} and {other}") ValueError: Equality not supported between split train and <class 'inspect._empty'> The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/runner/work/datasets/datasets/.venv/bin/doc-builder", line 8, in <module> sys.exit(main()) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main args.func(args) File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/build.py", line 102, in build_command build_doc( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 367, in build_doc anchors_mapping, source_files_mapping = build_mdx_files( File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 230, in build_mdx_files raise type(e)(f"There was an error when converting {file} to the MDX format.\n" + e.args[0]) from e ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format. Equality not supported between split train and <class 'inspect._empty'> ```
false
2,397,525,974
https://api.github.com/repos/huggingface/datasets/issues/7034
https://github.com/huggingface/datasets/pull/7034
7,034
chore: fix typos in docs
closed
1
2024-07-09T08:35:05
2024-08-13T08:22:25
2024-08-13T08:16:22
hattizai
[]
null
true
2,397,419,768
https://api.github.com/repos/huggingface/datasets/issues/7033
https://github.com/huggingface/datasets/issues/7033
7,033
`from_generator` does not allow to specify the split name
closed
2
2024-07-09T07:47:58
2024-07-26T12:56:16
2024-07-26T09:31:56
pminervini
[]
### Describe the bug I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:` It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/generator.py ### Steps to reproduce the bug ``` In [1]: from datasets import Dataset In [2]: def gen(): ...: yield {"pokemon": "bulbasaur", "type": "grass"} ...: In [3]: ds = Dataset.from_generator(gen) Generating train split: 1 examples [00:00, 133.89 examples/s] ``` ### Expected behavior It should be possible to specify any split name ### Environment info - `datasets` version: 2.19.2 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.5 - `huggingface_hub` version: 0.23.3 - PyArrow version: 15.0.0 - Pandas version: 2.0.3 - `fsspec` version: 2023.10.0
false
2,395,531,699
https://api.github.com/repos/huggingface/datasets/issues/7032
https://github.com/huggingface/datasets/pull/7032
7,032
Register `.zstd` extension for zstd-compressed files
closed
8
2024-07-08T12:39:50
2024-07-12T15:07:03
2024-07-12T15:07:03
polinaeterna
[]
For example, https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0 dataset files have `.zstd` extension which is currently ignored (only `.zst` is registered).
true
2,395,401,692
https://api.github.com/repos/huggingface/datasets/issues/7031
https://github.com/huggingface/datasets/issues/7031
7,031
CI quality is broken: use ruff check instead
closed
0
2024-07-08T11:42:24
2024-07-08T11:47:29
2024-07-08T11:47:29
albertvillanova
[]
CI quality is broken: https://github.com/huggingface/datasets/actions/runs/9838873879/job/27159697027 ``` error: `ruff <path>` has been removed. Use `ruff check <path>` instead. ```
false
2,393,411,631
https://api.github.com/repos/huggingface/datasets/issues/7030
https://github.com/huggingface/datasets/issues/7030
7,030
Add option to disable progress bar when reading a dataset ("Loading dataset from disk")
closed
2
2024-07-06T05:43:37
2024-07-13T14:35:59
2024-07-13T14:35:59
yuvalkirstain
[ "enhancement" ]
### Feature request Add an option in load_from_disk to disable the progress bar even if the number of files is larger than 16. ### Motivation I am reading a lot of datasets, which creates lots of logs. <img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-44b6-937c-932f01b4eb2a"> ### Your contribution Seems like an easy fix to make. I can create a PR if necessary.
false
2,391,366,696
https://api.github.com/repos/huggingface/datasets/issues/7029
https://github.com/huggingface/datasets/issues/7029
7,029
load_dataset on AWS lambda throws OSError(30, 'Read-only file system') error
open
1
2024-07-04T19:15:16
2024-07-17T12:44:03
null
sugam-nexusflow
[]
### Describe the bug I'm using AWS Lambda to run a Python application. I run the `load_dataset` function with cache_dir="/tmp" and it still throws the OSError(30, 'Read-only file system') error. I even updated all the HF env variables to point to the /tmp dir, but the issue still persists. I can confirm that I can write to the /tmp directory. ### Steps to reproduce the bug ```python d = load_dataset( path=hugging_face_link, split=split, token=token, cache_dir="/tmp/hugging_face_cache", ) ``` ### Expected behavior Everything written to the file system as part of the load_dataset function should be in the /tmp directory. ### Environment info datasets version: 2.16.1 Platform: Linux-5.10.216-225.855.amzn2.x86_64-x86_64-with-glibc2.26 Python version: 3.11.9 huggingface_hub version: 0.19.4 PyArrow version: 16.1.0 Pandas version: 2.2.2 fsspec version: 2023.10.0
false
2,391,077,531
https://api.github.com/repos/huggingface/datasets/issues/7028
https://github.com/huggingface/datasets/pull/7028
7,028
Fix ci
closed
2
2024-07-04T15:11:08
2024-07-04T15:26:35
2024-07-04T15:19:16
lhoestq
[]
...after last pr errors
true
2,391,013,330
https://api.github.com/repos/huggingface/datasets/issues/7027
https://github.com/huggingface/datasets/pull/7027
7,027
Missing line from previous pr
closed
2
2024-07-04T14:34:29
2024-07-04T14:40:46
2024-07-04T14:34:36
lhoestq
[]
null
true
2,390,983,889
https://api.github.com/repos/huggingface/datasets/issues/7026
https://github.com/huggingface/datasets/pull/7026
7,026
Fix check_library_imports
closed
2
2024-07-04T14:18:38
2024-07-04T14:28:36
2024-07-04T14:20:02
lhoestq
[]
move it to after the `trust_remote_code` check Note that it only affects local datasets that already exist on disk, not datasets loaded from HF directly
true
2,390,488,546
https://api.github.com/repos/huggingface/datasets/issues/7025
https://github.com/huggingface/datasets/pull/7025
7,025
feat: support non streamable arrow file binary format
closed
7
2024-07-04T10:11:12
2024-07-31T06:15:50
2024-07-31T06:09:31
kmehant
[]
Support Arrow files (`.arrow`) that are in non-streamable binary file formats.
true
2,390,141,626
https://api.github.com/repos/huggingface/datasets/issues/7024
https://github.com/huggingface/datasets/issues/7024
7,024
Streaming dataset not returning data
open
0
2024-07-04T07:21:47
2024-07-04T07:21:47
null
johnwee1
[]
### Describe the bug I'm deciding to post here because I'm still not sure what the issue is, or if I am using IterableDatasets incorrectly. I'm following the guide on here https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when I'm fine-tuning on the provided dataset. However, I'm doing some data preprocessing steps (filtering out entries), and when I try to swap out the dataset for mine, it fails to train. However, I eventually fixed this by simply setting `streaming=False` in `load_dataset`. Could this be some sort of network / firewall issue I'm facing? ### Steps to reproduce the bug I made a post with a more detailed description of how I reproduced this problem before I found my workaround: https://discuss.huggingface.co/t/problem-with-custom-iterator-of-streaming-dataset-not-returning-anything/94551 Here is the problematic dataset snippet, which works when streaming=False (and with the buffer keyword removed from shuffle) ``` commitpackft = load_dataset( "chargoddard/commitpack-ft-instruct", split="train", streaming=True ).filter(lambda example: example["language"] == "Python") def form_template(example): """Forms a template for each example following the alpaca format for CommitPack""" example["content"] = ( "### Human: " + example["instruction"] + " " + example["input"] + " ### Assistant: " + example["output"] ) return example dataset = commitpackft.map( form_template, remove_columns=["id", "language", "license", "instruction", "input", "output"], ).shuffle( seed=42, buffer_size=10000 ) # remove everything since it's all inside "content" now validation_data = dataset.take(4000) train_data = dataset.skip(4000) ``` The annoying part about this is that it only fails during training and I don't know when it will fail, except that it always fails during evaluation. ### Expected behavior The expected behavior is that I should be able to get something from the iterator when called instead of getting nothing / stuck in a loop somewhere. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31 - Python version: 3.11.7 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
false
2,388,090,424
https://api.github.com/repos/huggingface/datasets/issues/7023
https://github.com/huggingface/datasets/pull/7023
7,023
Remove dead code for pyarrow < 15.0.0
closed
2
2024-07-03T09:05:03
2024-07-03T09:24:46
2024-07-03T09:17:35
albertvillanova
[]
Remove dead code for pyarrow < 15.0.0. Code is dead since the merge of: - #6892 Fix #7022.
true
2,388,064,650
https://api.github.com/repos/huggingface/datasets/issues/7022
https://github.com/huggingface/datasets/issues/7022
7,022
There is dead code after we require pyarrow >= 15.0.0
closed
0
2024-07-03T08:52:57
2024-07-03T09:17:36
2024-07-03T09:17:36
albertvillanova
[ "maintenance" ]
There are code lines specific for pyarrow versions < 15.0.0. However, we require pyarrow >= 15.0.0 since the merge of PR: - #6892 Those code lines are now dead code and should be removed.
false
2,387,948,935
https://api.github.com/repos/huggingface/datasets/issues/7021
https://github.com/huggingface/datasets/pull/7021
7,021
Fix casting list array to fixed size list
closed
2
2024-07-03T07:58:57
2024-07-03T08:47:49
2024-07-03T08:41:55
albertvillanova
[]
Fix casting list array to fixed size list. This bug was introduced in [datasets-2.17.0](https://github.com/huggingface/datasets/releases/tag/2.17.0) by PR: https://github.com/huggingface/datasets/pull/6283/files#diff-1cb2b66aa9311d729cfd83013dad56cf5afcda35b39dfd0bfe9c3813a049eab0R1899 - #6283 Fix #7020.
true
2,387,940,990
https://api.github.com/repos/huggingface/datasets/issues/7020
https://github.com/huggingface/datasets/issues/7020
7,020
Casting list array to fixed size list raises error
closed
0
2024-07-03T07:54:49
2024-07-03T08:41:56
2024-07-03T08:41:56
albertvillanova
[ "bug" ]
When trying to cast a list array to fixed size list, an AttributeError is raised: > AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length' Steps to reproduce the bug: ```python import pyarrow as pa from datasets.table import array_cast arr = pa.array([[0, 1]]) array_cast(arr, pa.list_(pa.int64(), 2)) ``` Stack trace: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-12-6cb90a1d8216> in <module> 3 4 arr = pa.array([[0, 1]]) ----> 5 array_cast(arr, pa.list_(pa.int64(), 2)) ~/huggingface/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs) 1802 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1803 else: -> 1804 return func(array, *args, **kwargs) 1805 1806 return wrapper ~/huggingface/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str) 1920 else: 1921 array_values = array.values[ -> 1922 array.offset * pa_type.length : (array.offset + len(array)) * pa_type.length 1923 ] 1924 return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size) AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length' ```
false
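Judging from the traceback above, the slice uses a `length` attribute that `pyarrow.lib.FixedSizeListType` does not expose, while the very next line of the same function already uses `list_size`; a small runnable sketch of the presumed corrected logic:

```python
# Sketch of the presumed fix: use FixedSizeListType.list_size for the slice bounds.
import pyarrow as pa

arr = pa.array([[0, 1]])
fixed_type = pa.list_(pa.int64(), 2)
values = arr.values[arr.offset * fixed_type.list_size : (arr.offset + len(arr)) * fixed_type.list_size]
fixed = pa.FixedSizeListArray.from_arrays(values, fixed_type.list_size)
print(fixed.type)  # fixed_size_list<item: int64>[2]
```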
2,385,793,897
https://api.github.com/repos/huggingface/datasets/issues/7019
https://github.com/huggingface/datasets/pull/7019
7,019
Support pyarrow large_list
closed
10
2024-07-02T09:52:52
2024-08-12T14:49:45
2024-08-12T14:43:45
albertvillanova
[]
Allow Polars round trip by supporting pyarrow large list. Fix #6834, fix #6984. Supersede and close #4800, close #6835, close #6986.
true
2,383,700,286
https://api.github.com/repos/huggingface/datasets/issues/7018
https://github.com/huggingface/datasets/issues/7018
7,018
`load_dataset` fails to load dataset saved by `save_to_disk`
open
5
2024-07-01T12:19:19
2025-05-24T05:21:12
null
sliedes
[]
### Describe the bug This code fails to load the dataset it just saved: ```python from datasets import load_dataset from transformers import AutoTokenizer MODEL = "google-bert/bert-base-cased" tokenizer = AutoTokenizer.from_pretrained(MODEL) dataset = load_dataset("yelp_review_full") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = dataset.map(tokenize_function, batched=True) tokenized_datasets.save_to_disk("dataset") tokenized_datasets = load_dataset("dataset/") # raises ``` It raises `ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}`. I believe this bug is caused by the [logic that tries to infer dataset format](https://github.com/huggingface/datasets/blob/9af8dd3de7626183a9a9ec8973cebc672d690400/src/datasets/load.py#L556). It counts the most common file extension. However, a small dataset can fit in a single `.arrow` file and have two JSON metadata files, causing the format to be inferred as JSON: ```shell $ ls -l dataset/test -rw-r--r-- 1 sliedes sliedes 191498784 Jul 1 13:55 data-00000-of-00001.arrow -rw-r--r-- 1 sliedes sliedes 1730 Jul 1 13:55 dataset_info.json -rw-r--r-- 1 sliedes sliedes 249 Jul 1 13:55 state.json ``` ### Steps to reproduce the bug Execute the code above. ### Expected behavior The dataset is loaded successfully. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39 - Python version: 3.12.4 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
false
2,383,647,419
https://api.github.com/repos/huggingface/datasets/issues/7017
https://github.com/huggingface/datasets/pull/7017
7,017
Support fsspec 2024.6.1
closed
2
2024-07-01T11:57:15
2024-07-01T12:12:32
2024-07-01T12:06:24
albertvillanova
[]
Support fsspec 2024.6.1.
true
2,383,262,608
https://api.github.com/repos/huggingface/datasets/issues/7016
https://github.com/huggingface/datasets/issues/7016
7,016
`drop_duplicates` method
open
1
2024-07-01T09:01:06
2024-07-20T06:51:58
null
MohamedAliRashad
[ "duplicate", "enhancement" ]
### Feature request `drop_duplicates` method for huggingface datasets (similar in simplicity to the `pandas` one) ### Motivation Ease of use ### Your contribution I don't think I am good enough to help
false
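While there is no built-in yet, a minimal single-process dedup sketch that keys on one hypothetical column and keeps the first occurrence:

```python
# Sketch only: dedup via filter with a stateful closure; intended for single-process use.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "a", "c"]})
seen = set()

def keep_first(example):
    key = example["text"]  # "text" is a placeholder column name
    if key in seen:
        return False
    seen.add(key)
    return True

deduped = ds.filter(keep_first)
print(deduped["text"])  # ['a', 'b', 'c']
```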
2,383,151,220
https://api.github.com/repos/huggingface/datasets/issues/7015
https://github.com/huggingface/datasets/pull/7015
7,015
add split argument to Generator
closed
5
2024-07-01T08:09:25
2024-07-26T09:37:51
2024-07-26T09:31:56
piercus
[]
## Actual When creating a multi-split dataset using generators like ```python datasets.DatasetDict({ "val": datasets.Dataset.from_generator( generator=generator_val, features=features ), "test": datasets.Dataset.from_generator( generator=generator_test, features=features, ) }) ``` It displays (for both test and val) ``` Generating train split ``` ## Expected I would like to be able to improve this behavior by doing ```python datasets.DatasetDict({ "val": datasets.Dataset.from_generator( generator=generator_val, features=features, split="val" ), "test": datasets.Dataset.from_generator( generator=generator_test, features=features, split="test" ) }) ``` It would display ``` Generating val split ``` and ``` Generating test split ``` ## Proposal This PR adds an explicit `split` argument and replaces the implicit "train" split in the following classes/functions: * Generator * from_generator * AbstractDatasetInputStream * GeneratorDatasetInputStream Please share your feedback
true
2,382,985,847
https://api.github.com/repos/huggingface/datasets/issues/7014
https://github.com/huggingface/datasets/pull/7014
7,014
Skip faiss tests on Windows to avoid running CI for 360 minutes
closed
3
2024-07-01T06:45:35
2024-07-01T07:16:36
2024-07-01T07:10:27
albertvillanova
[]
Skip faiss tests on Windows to avoid running CI for 360 minutes. Fix #7013. Revert once the underlying issue is fixed.
true
2,382,976,738
https://api.github.com/repos/huggingface/datasets/issues/7013
https://github.com/huggingface/datasets/issues/7013
7,013
CI is broken for faiss tests on Windows: node down: Not properly terminated
closed
0
2024-07-01T06:40:03
2024-07-01T07:10:28
2024-07-01T07:10:28
albertvillanova
[ "maintenance" ]
Faiss tests on Windows make the CI run indefinitely until maximum execution time (360 minutes) is reached. See: https://github.com/huggingface/datasets/actions/runs/9712659783 ``` test (integration, windows-latest, deps-minimum) The job running on runner GitHub Actions 60 has exceeded the maximum execution time of 360 minutes. test (integration, windows-latest, deps-latest) The job running on runner GitHub Actions 238 has exceeded the maximum execution time of 360 minutes. ``` ``` ____________________________ tests/test_search.py _____________________________ [gw1] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe worker 'gw1' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index' ____________________________ tests/test_search.py _____________________________ [gw2] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe worker 'gw2' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index' ``` ``` tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw0] node down: Not properly terminated [gw0] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw0 tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw1] node down: Not properly terminated [gw1] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw1 tests/test_search.py::IndexableDatasetTest::test_add_faiss_index [gw2] node down: Not properly terminated [gw2] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index replacing crashed worker gw2 ```
false
2,380,934,047
https://api.github.com/repos/huggingface/datasets/issues/7012
https://github.com/huggingface/datasets/pull/7012
7,012
Raise an error when a nested object is expected to be a mapping that displays the object
closed
0
2024-06-28T18:10:59
2024-07-11T02:06:16
2024-07-11T02:06:16
sebbyjp
[]
null
true
2,379,785,262
https://api.github.com/repos/huggingface/datasets/issues/7011
https://github.com/huggingface/datasets/pull/7011
7,011
Re-enable raising error from huggingface-hub FutureWarning in CI
closed
2
2024-06-28T07:28:32
2024-06-28T12:25:25
2024-06-28T12:19:28
albertvillanova
[]
Re-enable raising error from huggingface-hub FutureWarning in tests, now that the fix in transformers - https://github.com/huggingface/transformers/pull/31007 - was released yesterday in transformers-4.42.0: https://github.com/huggingface/transformers/releases/tag/v4.42.0 Fix #7010.
true
2,379,777,480
https://api.github.com/repos/huggingface/datasets/issues/7010
https://github.com/huggingface/datasets/issues/7010
7,010
Re-enable raising error from huggingface-hub FutureWarning in CI
closed
0
2024-06-28T07:23:40
2024-06-28T12:19:30
2024-06-28T12:19:29
albertvillanova
[ "maintenance" ]
Re-enable raising error from huggingface-hub FutureWarning in CI, which was disabled by PR: - #6876 Note that this can only be done once transformers releases the fix: - https://github.com/huggingface/transformers/pull/31007
false
2,379,619,132
https://api.github.com/repos/huggingface/datasets/issues/7009
https://github.com/huggingface/datasets/pull/7009
7,009
Support ruff 0.5.0 in CI
closed
2
2024-06-28T05:37:36
2024-06-28T07:17:26
2024-06-28T07:11:17
albertvillanova
[]
Support ruff 0.5.0 in CI and revert: - #7007 Fix #7008.
true
2,379,591,141
https://api.github.com/repos/huggingface/datasets/issues/7008
https://github.com/huggingface/datasets/issues/7008
7,008
Support ruff 0.5.0 in CI
closed
0
2024-06-28T05:11:26
2024-06-28T07:11:18
2024-06-28T07:11:18
albertvillanova
[ "maintenance" ]
Support ruff 0.5.0 in CI. Also revert: - #7007
false
2,379,588,676
https://api.github.com/repos/huggingface/datasets/issues/7007
https://github.com/huggingface/datasets/pull/7007
7,007
Fix CI by temporarily pinning ruff < 0.5.0
closed
2
2024-06-28T05:09:17
2024-06-28T05:31:21
2024-06-28T05:25:17
albertvillanova
[]
As a hotfix for CI, temporarily pin ruff upper version < 0.5.0. Fix #7006. Revert once root cause is fixed.
true
2,379,581,543
https://api.github.com/repos/huggingface/datasets/issues/7006
https://github.com/huggingface/datasets/issues/7006
7,006
CI is broken after ruff-0.5.0: E721
closed
0
2024-06-28T05:03:28
2024-06-28T05:25:18
2024-06-28T05:25:18
albertvillanova
[ "maintenance" ]
After ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to E721 rule. See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983 > src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
false
2,378,424,349
https://api.github.com/repos/huggingface/datasets/issues/7005
https://github.com/huggingface/datasets/issues/7005
7,005
EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files
closed
3
2024-06-27T15:08:26
2024-06-28T09:56:19
2024-06-28T09:56:19
Aki1991
[]
### Describe the bug while trying to load custom dataset from jsonl file, I get the error: "metadata.jsonl doesn't contain any data files" ### Steps to reproduce the bug This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. I have this file in the folder with all images mentioned in that json(l) file. Through below mentioned command I am trying to load_dataset so that I can upload it as mentioned here on the [official website](https://huggingface.co/docs/datasets/en/image_dataset#upload-dataset-to-the-hub). ```` from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="path/to/jsonl/metadata.jsonl") ```` error: ```` EmptyDatasetError Traceback (most recent call last) Cell In[18], line 3 1 from datasets import load_dataset ----> 3 dataset = load_dataset("imagefolder", 4 data_dir="path/to/jsonl/file/metadata.jsonl") 5 dataset[0]["objects"] File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2594, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2589 verification_mode = VerificationMode( 2590 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS 2591 ) 2593 # Create a dataset builder -> 2594 builder_instance = load_dataset_builder( 2595 path=path, 2596 name=name, 2597 data_dir=data_dir, 2598 data_files=data_files, 2599 cache_dir=cache_dir, 2600 features=features, 2601 download_config=download_config, 2602 download_mode=download_mode, 2603 revision=revision, 2604 token=token, 2605 storage_options=storage_options, 2606 trust_remote_code=trust_remote_code, 2607 _require_default_config_name=name is None, 2608 **config_kwargs, 2609 ) 2611 # Return iterable dataset in case of streaming 2612 if streaming: File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2266, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 2264 download_config = download_config.copy() if download_config else DownloadConfig() 2265 download_config.storage_options.update(storage_options) -> 2266 dataset_module = dataset_module_factory( 2267 path, 2268 revision=revision, 2269 download_config=download_config, 2270 download_mode=download_mode, 2271 data_dir=data_dir, 2272 data_files=data_files, 2273 cache_dir=cache_dir, 2274 trust_remote_code=trust_remote_code, 2275 _require_default_config_name=_require_default_config_name, 2276 _require_custom_configs=bool(config_kwargs), 2277 ) 2278 # Get dataset builder class from the processing script 2279 builder_kwargs = dataset_module.builder_kwargs File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1805, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs) 1782 # We have several ways to get a dataset builder: 1783 # 1784 # - if path is the name of a packaged dataset module (...) 
1796 1797 # Try packaged 1798 if path in _PACKAGED_DATASETS_MODULES: 1799 return PackagedDatasetModuleFactory( 1800 path, 1801 data_dir=data_dir, 1802 data_files=data_files, 1803 download_config=download_config, 1804 download_mode=download_mode, -> 1805 ).get_module() 1806 # Try locally 1807 elif path.endswith(filename): File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1140, in PackagedDatasetModuleFactory.get_module(self) 1135 def get_module(self) -> DatasetModule: 1136 base_path = Path(self.data_dir or "").expanduser().resolve().as_posix() 1137 patterns = ( 1138 sanitize_patterns(self.data_files) 1139 if self.data_files is not None -> 1140 else get_data_patterns(base_path, download_config=self.download_config) 1141 ) 1142 data_files = DataFilesDict.from_patterns( 1143 patterns, 1144 download_config=self.download_config, 1145 base_path=base_path, 1146 ) 1147 supports_metadata = self.name in _MODULE_SUPPORTS_METADATA File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/data_files.py:503, in get_data_patterns(base_path, download_config) 501 return _get_data_files_patterns(resolver) 502 except FileNotFoundError: --> 503 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None EmptyDatasetError: The directory at path/to/jsonl/file/metadata.jsonl doesn't contain any data files` ``` ### Expected behavior It should be able load the whole file in a format of "dataset" inside the dataset variable. But it gives error "The directory at "path/to/jsonl/metadata.jsonl" doesn't contain any data files." ### Environment info I am using conda environment.
false
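The error above usually means `data_dir` points at the metadata file rather than at the image folder; the imagefolder loader also expects the metadata file to be named exactly `metadata.jsonl` (or `metadata.csv`) and to sit next to the images. A sketch of the presumably intended call, with a placeholder path:

```python
# Sketch: point data_dir at the directory containing the images plus metadata.jsonl.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/image_folder")  # placeholder path
print(dataset["train"][0])
```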
2,376,064,264
https://api.github.com/repos/huggingface/datasets/issues/7004
https://github.com/huggingface/datasets/pull/7004
7,004
Fix WebDatasets KeyError for user-defined Features when a field is missing in an example
closed
3
2024-06-26T18:58:05
2024-06-29T00:15:49
2024-06-28T09:30:12
ProGamerGov
[]
Fixes: https://github.com/huggingface/datasets/issues/6900 Not sure if this needs any additional stuff before merging
true
2,373,084,132
https://api.github.com/repos/huggingface/datasets/issues/7003
https://github.com/huggingface/datasets/pull/7003
7,003
minor fix for bfloat16
closed
2
2024-06-25T16:10:04
2024-06-25T16:16:11
2024-06-25T16:10:10
lhoestq
[]
null
true
2,373,010,351
https://api.github.com/repos/huggingface/datasets/issues/7002
https://github.com/huggingface/datasets/pull/7002
7,002
Fix dump of bfloat16 torch tensor
closed
2
2024-06-25T15:38:09
2024-06-25T16:10:16
2024-06-25T15:51:52
lhoestq
[]
close https://github.com/huggingface/datasets/issues/7000
true
2,372,930,879
https://api.github.com/repos/huggingface/datasets/issues/7001
https://github.com/huggingface/datasets/issues/7001
7,001
Datasetbuilder Local Download FileNotFoundError
open
1
2024-06-25T15:02:34
2024-06-25T15:21:19
null
purefall
[]
### Describe the bug So I was trying to download a dataset and save it as parquet, and I followed the Hugging Face [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage). However, during the execution I got a FileNotFoundError. I debugged the code and it seems there is a bug there: it first creates a .incomplete folder, and before moving its contents the following code deletes the directory [Code](https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/builder.py#L984) hence I end up with: ``` FileNotFoundError: [Errno 2] No such file or directory: '~/data/Parquet/.incomplete '``` ### Steps to reproduce the bug ``` from datasets import load_dataset_builder from pathlib import Path parquet_dir = "~/data/Parquet/" Path(parquet_dir).mkdir(parents=True, exist_ok=True) builder = load_dataset_builder( "rotten_tomatoes", ) builder.download_and_prepare(parquet_dir, file_format="parquet") ``` ### Expected behavior Downloads the files and saves them as parquet ### Environment info Ubuntu, Python 3.10 ``` datasets 2.19.1 ```
false
2,372,887,585
https://api.github.com/repos/huggingface/datasets/issues/7000
https://github.com/huggingface/datasets/issues/7000
7,000
IterableDataset: Unsupported ScalarType BFloat16
closed
3
2024-06-25T14:43:26
2024-06-25T16:04:00
2024-06-25T15:51:53
stoical07
[]
### Describe the bug `IterableDataset.from_generator` crashes when using BFloat16: ``` File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor args = (obj.detach().cpu().numpy(),) ^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: Got unsupported ScalarType BFloat16 ``` ### Steps to reproduce the bug ```python import torch from datasets import IterableDataset def demo(x): yield {"x": x} x = torch.tensor([1.], dtype=torch.bfloat16) dataset = IterableDataset.from_generator( demo, gen_kwargs=dict(x=x), ) example = next(iter(dataset)) print(example) ``` ### Expected behavior Code sample should print: ```python {'x': tensor([1.], dtype=torch.bfloat16)} ``` ### Environment info ``` datasets==2.20.0 torch==2.2.2 ```
false
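Until the fix in the linked PR ships in a release, a workaround sketch for the report above is to cast the tensor to a numpy-representable dtype before it reaches the generator, since the traceback shows serialization goes through `.numpy()`:

```python
# Sketch: upcast bfloat16 to float32 before handing the tensor to from_generator.
import torch
from datasets import IterableDataset

def demo(x):
    yield {"x": x}

x = torch.tensor([1.], dtype=torch.bfloat16).to(torch.float32)
dataset = IterableDataset.from_generator(demo, gen_kwargs=dict(x=x))
print(next(iter(dataset)))  # iterating no longer crashes during gen_kwargs serialization
```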
2,372,124,589
https://api.github.com/repos/huggingface/datasets/issues/6999
https://github.com/huggingface/datasets/pull/6999
6,999
Remove tasks
closed
2
2024-06-25T09:06:16
2024-08-21T09:07:07
2024-08-21T09:01:18
albertvillanova
[]
Remove tasks, as part of the 3.0 release.
true
2,371,973,926
https://api.github.com/repos/huggingface/datasets/issues/6998
https://github.com/huggingface/datasets/pull/6998
6,998
Fix tests using hf-internal-testing/librispeech_asr_dummy
closed
2
2024-06-25T07:59:44
2024-06-25T08:22:38
2024-06-25T08:13:42
albertvillanova
[]
Fix tests using hf-internal-testing/librispeech_asr_dummy once that dataset has been converted to Parquet. Fix #6997.
true
2,371,966,127
https://api.github.com/repos/huggingface/datasets/issues/6997
https://github.com/huggingface/datasets/issues/6997
6,997
CI is broken for tests using hf-internal-testing/librispeech_asr_dummy
closed
0
2024-06-25T07:55:44
2024-06-25T08:13:43
2024-06-25T08:13:43
albertvillanova
[ "maintenance" ]
CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996 ``` FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other'] Right contains one more item: 'other' Full diff: [ 'clean', - 'other', ] FAILED tests/test_inspect.py::test_get_dataset_default_config_name[hf-internal-testing/librispeech_asr_dummy-None] - AssertionError: assert 'clean' is None ``` Note that repository was recently converted to Parquet: https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/commit/5be91486e11a2d616f4ec5db8d3fd248585ac07a
false
2,371,841,671
https://api.github.com/repos/huggingface/datasets/issues/6996
https://github.com/huggingface/datasets/pull/6996
6,996
Remove deprecated code
closed
2
2024-06-25T06:54:40
2024-08-21T09:42:52
2024-08-21T09:35:06
albertvillanova
[]
Remove deprecated code, as part of the 3.0 release. First merge: - [x] #6983 - [x] #6987 - [x] #6999
true
2,370,713,475
https://api.github.com/repos/huggingface/datasets/issues/6995
https://github.com/huggingface/datasets/issues/6995
6,995
ImportError when importing datasets.load_dataset
closed
9
2024-06-24T17:07:22
2024-11-14T01:42:09
2024-06-25T06:11:37
Leo-Lsc
[]
### Describe the bug I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'. ### Steps to reproduce the bug 1. pip install git+https://github.com/huggingface/datasets 2. from datasets import load_dataset ### Expected behavior ImportError Traceback (most recent call last) Cell In[7], [line 1](vscode-notebook-cell:?execution_count=7&line=1) ----> [1](vscode-notebook-cell:?execution_count=7&line=1) from datasets import load_dataset [3](vscode-notebook-cell:?execution_count=7&line=3) train_set = load_dataset("mispeech/speechocean762", split="train") [4](vscode-notebook-cell:?execution_count=7&line=4) test_set = load_dataset("mispeech/speechocean762", split="test") File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py:[1](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:1)7 1 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors. [2](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:2) # [3](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:3) # Licensed under the Apache License, Version 2.0 (the "License"); (...) [12](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:12) # See the License for the specific language governing permissions and [13](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:13) # limitations under the License. [15](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:15) __version__ = "2.20.1.dev0" ---> [17](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:17) from .arrow_dataset import Dataset [18](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:18) from .arrow_reader import ReadInstruction [19](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/__init__.py:19) from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py:63 [61](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:61) import pyarrow.compute as pc [62](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:62) from fsspec.core import url_to_fs ---> [63](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:63) from huggingface_hub import ( [64](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:64) CommitInfo, [65](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:65) CommitOperationAdd, ... [70](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:70) ) [71](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:71) from huggingface_hub.hf_api import RepoFile [72](file:///D:/Anaconda3/envs/CS224S/Lib/site-packages/datasets/arrow_dataset.py:72) from multiprocess import Pool ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (d:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py) Output is truncated. View as a [scrollable element](command:cellOutput.enableScrolling?580889ab-0f61-4f37-9214-eaa2b3807f85) or open in a [text editor](command:workbench.action.openLargeOutput?580889ab-0f61-4f37-9214-eaa2b3807f85). 
### Environment info Leo@DESKTOP-9NHUAMI MSYS /d/Anaconda3/envs/CS224S/Lib/site-packages/huggingface_hub $ datasets-cli env Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "D:\Anaconda3\envs\CS224S\Scripts\datasets-cli.exe\__main__.py", line 4, in <module> File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py", line 17, in <module> from .arrow_dataset import Dataset File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py", line 63, in <module> from huggingface_hub import ( ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py) (CS224S)
false
2,370,491,689
https://api.github.com/repos/huggingface/datasets/issues/6994
https://github.com/huggingface/datasets/pull/6994
6,994
Fix incorrect rank value in data splitting
closed
3
2024-06-24T15:07:47
2024-06-26T04:37:35
2024-06-25T16:19:17
yzhangcs
[]
Fix #6990.
true
2,370,444,104
https://api.github.com/repos/huggingface/datasets/issues/6993
https://github.com/huggingface/datasets/pull/6993
6,993
less script docs
closed
6
2024-06-24T14:45:28
2024-07-08T13:10:53
2024-06-27T09:31:21
lhoestq
[]
+ mark as legacy in some parts of the docs since we'll not build new features for script datasets
true