id | number | title | state | body | is_pull_request | created_at | updated_at | closed_at | user_login | author_association | pr_url | pr_merged_at | comments_count | reactions_total | reactions_plus1 | reactions_heart | draft | locked | labels | html_url | is_pr_url | comments
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,988,571,317 | 6,400 | Safely load datasets by disabling execution of dataset loading script | closed | ### Feature request
Is there a way to disable execution of the dataset loading script when using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution.
Any suggested workarounds are welcome as well.
### Motivation
This is a security vulnerability that could lead to arbitrary code e... | true | 2023-11-10T23:48:29Z | 2024-06-13T15:56:13Z | 2024-06-13T15:56:13Z | irenedea | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6400 | false | [
"great idea IMO\r\n\r\nthis could be a `trust_remote_code=True` flag like in transformers. We could also default to loading the Parquet conversion rather than executing code (for dataset repos that have both)",
"@julien-c that would be great!",
"We added the `trust_remote_code` argument to `load_dataset()` in `... |
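The `trust_remote_code` gate discussed in these comments can be pictured as a small dispatch; a hedged, stdlib-only sketch — the function name and flags below are illustrative, not the real `datasets` internals:

```python
# Hedged sketch of the gating logic: refuse to execute a repo-provided loading
# script unless the caller opts in, preferring an auto-converted Parquet export
# when one exists. All names here are illustrative stand-ins.
def resolve_builder(has_loading_script: bool, has_parquet_export: bool,
                    trust_remote_code: bool) -> str:
    if has_loading_script and not trust_remote_code:
        if has_parquet_export:
            return "parquet"  # load the Parquet conversion instead of running code
        raise ValueError("Pass trust_remote_code=True to run the loading script.")
    return "script" if has_loading_script else "packaged"

print(resolve_builder(True, True, False))  # parquet
```

The real API ended up as a `trust_remote_code` argument to `load_dataset()`, with script execution disabled unless the caller opts in.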
1,988,368,503 | 6,399 | TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array | open | ### Describe the bug
Hi, I am preprocessing a large custom dataset with numpy arrays. I am running into this TypeError while writing inside a `dataset.map()` function. I've tried decreasing the writer batch size, but the error persists. It does not occur for smaller datasets.
Thank you!
### Steps to repro... | true | 2023-11-10T20:48:46Z | 2024-06-22T00:13:48Z | null | y-hwang | NONE | null | null | 1 | 5 | 5 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6399 | false | [
"Seconding encountering this issue."
] |
1,987,786,446 | 6,398 | Remove redundant condition in builders | closed | Minor refactoring to remove redundant condition. | true | 2023-11-10T14:56:43Z | 2023-11-14T10:49:15Z | 2023-11-14T10:43:00Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6398 | 2023-11-14T10:43:00Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6398 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,987,622,152 | 6,397 | Raise a different exception for inexisting dataset vs files without known extension | closed | See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557
We have the same error for:
- https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist
- https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files withou... | true | 2023-11-10T13:22:14Z | 2023-11-22T15:12:34Z | 2023-11-22T15:12:34Z | severo | COLLABORATOR | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6397 | false | [] |
1,987,308,077 | 6,396 | Issue with pyarrow 14.0.1 | closed | See https://github.com/huggingface/datasets-server/pull/2089 for reference
```
from datasets import (Array2D, Dataset, Features)
feature_type = Array2D(shape=(2, 2), dtype="float32")
content = [[0.0, 0.0], [0.0, 0.0]]
features = Features({"col": feature_type})
dataset = Dataset.from_dict({"col": [content]}, fea... | true | 2023-11-10T10:02:12Z | 2023-11-14T10:23:30Z | 2023-11-14T10:23:30Z | severo | COLLABORATOR | null | null | 5 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6396 | false | [
"Looks like we should stop using `PyExtensionType` and use `ExtensionType` instead\r\n\r\nsee https://github.com/apache/arrow/commit/f14170976372436ec1d03a724d8d3f3925484ecf",
"https://github.com/huggingface/datasets-server/pull/2089#pullrequestreview-1724449532\r\n\r\n> Yes, I understand now: they have disabled ... |
1,986,484,124 | 6,395 | Add ability to set lock type | closed | ### Feature request
Allow setting file lock type, maybe from an environment variable
Currently, it only depends on whether fcntl is available:
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16
### Motivation
In my environment... | true | 2023-11-09T22:12:30Z | 2023-11-23T18:50:00Z | 2023-11-23T18:50:00Z | leoleoasd | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6395 | false | [
"We've replaced our filelock implementation with the `filelock` package, so their repo is the right place to request this feature.\r\n\r\nIn the meantime, the following should work: \r\n```python\r\nimport filelock\r\nfilelock.FileLock = filelock.SoftFileLock\r\n\r\nimport datasets\r\n...\r\n```"
] |
1,985,947,116 | 6,394 | TorchFormatter images (H, W, C) instead of (C, H, W) format | closed | ### Describe the bug
Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy.
However, pytorch normally uses (C, H, W) format.
Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways.
If not using the format it is possible to ... | true | 2023-11-09T16:02:15Z | 2024-04-11T12:40:16Z | 2024-04-11T12:40:16Z | Modexus | CONTRIBUTOR | null | null | 9 | 2 | 2 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6394 | false | [
"Here's a PR for that. https://github.com/huggingface/datasets/pull/6402\r\n\r\nIt's not backward compatible, unfortunately. ",
"Just ran into this working on data lib that's attempting to achieve common interfaces across hf datasets, webdataset, native torch style datasets. The defacto standards for image tensor... |
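Until the formatter changed, the permutation the reporter describes was a one-liner on the caller's side; a sketch with a dummy array standing in for a decoded image (numpy used in place of torch to keep it self-contained):

```python
import numpy as np

# A dummy 4x5 RGB "image" in the (H, W, C) layout the torch format yielded;
# channels-first (C, H, W) is what most torchvision models expect.
hwc = np.zeros((4, 5, 3), dtype=np.float32)
chw = hwc.transpose(2, 0, 1)  # torch equivalent: tensor.permute(2, 0, 1)
print(chw.shape)  # (3, 4, 5)
```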
1,984,913,259 | 6,393 | Filter occasionally hangs | closed | ### Describe the bug
A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm)
There is a trace produced
```
Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", l... | true | 2023-11-09T06:18:30Z | 2025-02-22T00:49:19Z | 2025-02-22T00:49:19Z | dakinggg | CONTRIBUTOR | null | null | 12 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6393 | false | [
"It looks like I may not be the first to encounter this: https://github.com/huggingface/datasets/issues/3172",
"Adding some more information, it seems to occur more frequently with large (millions of samples) datasets.",
"More information. My code is structured as (1) load (2) map (3) filter (4) filter. It was ... |
1,984,369,545 | 6,392 | `push_to_hub` is not robust to hub closing connection | closed | ### Describe the bug
Similar to #6172, `push_to_hub` will crash if the Hub resets the connection, raising the following error:
```
Pushing dataset shards to the dataset hub: 32%|███▏ | 54/171 [06:38<14:23, 7.38s/it]
Traceback (most recent call last):
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/... | true | 2023-11-08T20:44:53Z | 2023-12-20T07:28:24Z | 2023-12-01T17:51:34Z | msis | NONE | null | null | 12 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6392 | false | [
"Hi! We made some improvements to `push_to_hub` to make it more robust a couple of weeks ago but haven't published a release in the meantime, so it would help if you could install `datasets` from `main` (`pip install https://github.com/huggingface/datasets`) and let us know if this improved version of `push_to_hub`... |
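Before the more robust `push_to_hub` landed, callers often wrapped the upload in a retry loop; a generic, hedged sketch — the flaky call below is simulated, standing in for `dataset.push_to_hub(...)`:

```python
import random
import time

def retry(func, attempts=5, base_delay=1.0):
    """Call func, retrying on ConnectionError with jittered exponential backoff."""
    for attempt in range(attempts):
        try:
            return func()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))

calls = {"n": 0}
def flaky_push():  # simulated stand-in for dataset.push_to_hub(...)
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Hub closed connection")
    return "pushed"

print(retry(flaky_push, base_delay=0.01))  # pushed
```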
1,984,091,776 | 6,391 | Webdataset dataset builder | closed | Allow `load_dataset` to support the Webdataset format.
It allows users to download/stream data from local files or from the Hugging Face Hub.
Moreover it will enable the Dataset Viewer for Webdataset datasets on HF.
## Implementation details
- I added a new Webdataset builder
- dataset with TAR files are n... | true | 2023-11-08T17:31:59Z | 2024-05-22T16:51:08Z | 2023-11-28T16:33:10Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6391 | 2023-11-28T16:33:10Z | 5 | 4 | 0 | 4 | false | false | [] | https://github.com/huggingface/datasets/pull/6391 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I added an error message if the first examples don't appear to be in webdataset format\r\n```\r\n\"The TAR archives of the dataset should be in Webdataset format, \"\r\n\"but the files in the archive don't share the same prefix or th... |
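The prefix convention that error message refers to can be seen by building a tiny shard by hand; a sketch with throwaway files (names and contents are arbitrary):

```shell
# Files belonging to one example share a key prefix ("sample0001") and differ
# only by extension; the extension selects the field (image, metadata, ...).
mkdir -p wds_demo && cd wds_demo
printf 'fake-image-bytes' > sample0001.jpg
printf '{"label": 0}' > sample0001.json
tar -cf shard-000000.tar sample0001.jpg sample0001.json
tar -tf shard-000000.tar  # lists sample0001.jpg and sample0001.json
```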
1,983,725,707 | 6,390 | handle future deprecation argument | closed | getting this error:
```
/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/datasets/table.py:1387: FutureWarning: promote has been superseded by mode='default'.
return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0)
```
Since datasets supports arrow greater than 8.0.0, we need to handle both ... | true | 2023-11-08T14:21:25Z | 2023-11-21T02:10:24Z | 2023-11-14T15:15:59Z | winglian | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6390 | 2023-11-14T15:15:59Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6390 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
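The compatibility shim this PR needs boils down to choosing the keyword argument by major version; a stdlib-only sketch using a stub version string in place of `pyarrow.__version__` (the keyword names are taken from the warning text quoted above and are illustrative):

```python
def concat_kwargs(pyarrow_version: str) -> dict:
    # Newer pyarrow deprecates promote=True in favor of the mode keyword from
    # the warning above; older releases only understand promote. The version
    # string is a stand-in for pyarrow.__version__.
    major = int(pyarrow_version.split(".")[0])
    return {"mode": "default"} if major >= 14 else {"promote": True}

print(concat_kwargs("14.0.1"))  # {'mode': 'default'}
print(concat_kwargs("8.0.0"))   # {'promote': True}
```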
1,983,545,744 | 6,389 | Index 339 out of range for dataset of size 339 <-- save_to_file() | open | ### Describe the bug
The error occurs when saving out some `Audio()` data.
The data is audio recordings with associated 'sentences'.
(They use the audio 'bytes' approach because they're clips within audio files).
Code is below the traceback (I can't upload the voice audio/text (it's not even me)).
```
Traceback (most recent call ... | true | 2023-11-08T12:52:09Z | 2023-11-24T09:14:13Z | null | jaggzh | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6389 | false | [
"Hi! Can you make the above reproducer self-contained by adding code that generates the data?",
"I managed a workaround eventually but I don't know what it was (I made a lot of changes to seq2seq). I'll try to include generating code in the future. (If I close, I don't know if you see it. Feel free to close; I'l... |
1,981,136,093 | 6,388 | How to create a 3D medical image dataset? | open | ### Feature request
I am new to Hugging Face. After looking through the `datasets` docs, I can't find how to create a dataset containing 3D medical images (files ending with '.mhd', '.dcm', '.nii')
### Motivation
Help us upload 3D medical datasets to Hugging Face!
### Your contribution
I'll submit a PR if I find a way to... | true | 2023-11-07T11:27:36Z | 2023-11-07T11:28:53Z | null | QingYunA | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6388 | false | [] |
1,980,224,020 | 6,387 | How to load existing downloaded dataset ? | closed | Hi @mariosasko @lhoestq @katielink
Thanks for your contribution and hard work.
### Feature request
First, I download a dataset as normal by:
```
from datasets import load_dataset
dataset = load_dataset('username/data_name', cache_dir='data')
```
The dataset format in `data` directory will be:
```
... | true | 2023-11-06T22:51:44Z | 2023-11-16T18:07:01Z | 2023-11-16T18:07:01Z | liming-ai | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6387 | false | [
"Feel free to use `dataset.save_to_disk(...)`, then scp the directory containing the saved dataset and reload it on your other machine using `dataset = load_from_disk(...)`"
] |
1,979,878,014 | 6,386 | Formatting overhead | closed | ### Describe the bug
Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new inst... | true | 2023-11-06T19:06:38Z | 2023-11-06T23:56:12Z | 2023-11-06T23:56:12Z | d-miketa | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6386 | false | [
"Ah I think the `line-profiler` log is off-by-one and it is in fact the `extract_batch` method that's taking forever. Will investigate further.",
"I tracked it down to a quirk of my setup. Apologies."
] |
1,979,308,338 | 6,385 | Get an error when i try to concatenate the squad dataset with my own dataset | closed | ### Describe the bug
Hello,
I'm new here and I need to concatenate the SQuAD dataset with my own dataset that I created. I get the following error when I try to do it: Traceback (most recent call last):
Cell In[9], line 1
concatenated_dataset = concatenate_datasets([train_dataset, dataset1])
File ~\ana... | true | 2023-11-06T14:29:22Z | 2023-11-06T16:50:45Z | 2023-11-06T16:50:45Z | CCDXDX | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6385 | false | [
"The `answers.text` field in the JSON dataset needs to be a list of strings, not a string.\r\n\r\nSo, here is the fixed code:\r\n```python\r\nfrom huggingface_hub import notebook_login\r\nfrom datasets import load_dataset\r\n\r\n\r\n\r\nnotebook_login(\"mymailadresse\", \"mypassword\")\r\nsquad = load_dataset(\"squ... |
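The shape difference the comment describes is easy to check with the stdlib; a sketch of the broken vs. fixed record (field names follow the SQuAD schema, values are made up):

```python
import json

# Broken: answers.text is a bare string, which cannot be cast to a sequence of
# strings when concatenating with the SQuAD features.
broken = {"question": "what is abc?",
          "answers": {"text": "abc is ...", "answer_start": 0}}

# Fixed: list-valued fields, matching the SQuAD feature spec.
fixed = {"question": "what is abc?",
         "answers": {"text": ["abc is ..."], "answer_start": [0]}}

record = json.loads(json.dumps(fixed))  # round-trip as it would live in a JSON file
print(type(record["answers"]["text"]))  # <class 'list'>
```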
1,979,117,069 | 6,384 | Load the local dataset folder from other place | closed | This is from https://github.com/huggingface/diffusers/issues/5573 | true | 2023-11-06T13:07:04Z | 2023-11-19T05:42:06Z | 2023-11-19T05:42:05Z | OrangeSodahub | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6384 | false | [
"Solved"
] |
1,978,189,389 | 6,383 | imagenet-1k downloads over and over | closed | ### Describe the bug
What could be causing this?
```
$ python3
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> load_dataset("imagenet-1k")
Downloading builder ... | true | 2023-11-06T02:58:58Z | 2024-06-12T13:15:00Z | 2023-11-06T06:02:39Z | seann999 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6383 | false | [
"Have you solved this problem?"
] |
1,977,400,799 | 6,382 | Add CheXpert dataset for vision | open | ### Feature request
### Name
**CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison**
### Paper
https://arxiv.org/abs/1901.07031
### Data
https://stanfordaimi.azurewebsites.net/datasets/8cbd9ed4-2eb9-4565-affc-111cf4f7ebe2
### Motivation
CheXpert is one of the fund... | true | 2023-11-04T15:36:11Z | 2024-01-10T11:53:52Z | null | SauravMaheshkar | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [
"enhancement",
"dataset request"
] | https://github.com/huggingface/datasets/issues/6382 | false | [
"Hey @SauravMaheshkar ! Just responded to your email.\r\n\r\n_For transparency, copying part of my response here:_\r\nI agree, it would be really great to have this and other BenchMD datasets easily accessible on the hub.\r\n\r\nI think the main limiting factor is that the ChexPert dataset is currently hosted on th... |
1,975,028,470 | 6,381 | Add my dataset | closed | ## medical data
**Description:**
This dataset, named "medical data," is a collection of text data from various sources, carefully curated and cleaned for use in natural language processing (NLP) tasks. It consists of a diverse range of text, including articles, books, and online content, covering topics from scienc... | true | 2023-11-02T20:59:52Z | 2023-11-08T14:37:46Z | 2023-11-06T15:50:14Z | keyur536 | NONE | https://github.com/huggingface/datasets/pull/6381 | null | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6381 | true | [
"Hi! We do not host datasets in this repo. Instead, you should use `dataset.push_to_hub` to upload the dataset to the HF Hub.",
"@mariosasko could you provide me proper guide to push data on HF hub ",
"You can find this info here: https://huggingface.co/docs/datasets/upload_dataset. Also, check https://huggingf... |
1,974,741,221 | 6,380 | Fix for continuation behaviour on broken dataset archives due to starving download connections via HTTP-GET | open | This PR proposes a (slightly hacky) fix for an Issue that can occur when downloading large dataset parts over unstable connections.
The underlying issue is also being discussed in https://github.com/huggingface/datasets/issues/5594.
Issue Symptoms & Behaviour:
- Download of a large archive file during dataset down... | true | 2023-11-02T17:28:23Z | 2023-11-02T17:31:19Z | null | RuntimeRacer | NONE | https://github.com/huggingface/datasets/pull/6380 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6380 | true | [] |
1,974,638,850 | 6,379 | Avoid redundant warning when encoding NumPy array as `Image` | closed | Avoid a redundant warning in `encode_np_array` by removing the identity check as NumPy `dtype`s can be equal without having identical `id`s.
Additionally, fix "unreachable" checks in `encode_np_array`. | true | 2023-11-02T16:37:58Z | 2023-11-06T17:53:27Z | 2023-11-02T17:08:07Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6379 | 2023-11-02T17:08:07Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6379 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,973,942,770 | 6,378 | Support pyarrow 14.0.0 | closed | Support `pyarrow` 14.0.0.
Fix #6377 and fix #6374 (root cause).
This fix is analog to a previous one:
- #6175 | true | 2023-11-02T10:25:10Z | 2023-11-02T15:24:28Z | 2023-11-02T15:15:44Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6378 | 2023-11-02T15:15:44Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6378 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,973,937,612 | 6,377 | Support pyarrow 14.0.0 | closed | Support pyarrow 14.0.0 by fixing the root cause of:
- #6374
and revert:
- #6375 | true | 2023-11-02T10:22:08Z | 2023-11-02T15:15:45Z | 2023-11-02T15:15:45Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6377 | false | [] |
1,973,927,468 | 6,376 | Caching problem when deleting a dataset | closed | ### Describe the bug
Pushing a dataset with n + m features to a repo which was deleted but previously contained n features will fail.
### Steps to reproduce the bug
1. Create a dataset with n features per row
2. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)`
3. Go on the hub, delete the repo at `YOUR_PATH`
4. Update... | true | 2023-11-02T10:15:58Z | 2023-12-04T16:53:34Z | 2023-12-04T16:53:33Z | clefourrier | MEMBER | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6376 | false | [
"Thanks for reporting! Can you also share the error message printed in step 5?",
"I did not store it at the time but I'll try to re-do a mwe next week to get it again",
"I haven't managed to reproduce this issue using a [notebook](https://colab.research.google.com/drive/1m6eduYun7pFTkigrCJAFgw0BghlbvXIL?usp=sha... |
1,973,877,879 | 6,375 | Temporarily pin pyarrow < 14.0.0 | closed | Temporarily pin `pyarrow` < 14.0.0 until permanent solution is found.
Hot fix #6374. | true | 2023-11-02T09:48:58Z | 2023-11-02T10:22:33Z | 2023-11-02T10:11:19Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6375 | 2023-11-02T10:11:19Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6375 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,973,857,428 | 6,374 | CI is broken: TypeError: Couldn't cast array | closed | See: https://github.com/huggingface/datasets/actions/runs/6730567226/job/18293518039
```
FAILED tests/test_table.py::test_cast_sliced_fixed_size_array_to_features - TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[3]
to
Sequence(feature=Value(dtype='int64', id=None), length=3, id=None)
``` | true | 2023-11-02T09:37:06Z | 2023-11-02T10:11:20Z | 2023-11-02T10:11:20Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6374 | false | [] |
1,973,349,695 | 6,373 | Fix typo in `Dataset.map` docstring | closed | true | 2023-11-02T01:36:49Z | 2023-11-02T15:18:22Z | 2023-11-02T10:11:38Z | bryant1410 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6373 | 2023-11-02T10:11:38Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6373 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,972,837,794 | 6,372 | do not try to download from HF GCS for generator | closed | attempt to fix https://github.com/huggingface/datasets/issues/6371 | true | 2023-11-01T17:57:11Z | 2023-11-02T16:02:52Z | 2023-11-02T15:52:09Z | yundai424 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6372 | 2023-11-02T15:52:09Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6372 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,972,807,579 | 6,371 | `Dataset.from_generator` should not try to download from HF GCS | closed | ### Describe the bug
When using [`Dataset.from_generator`](https://github.com/huggingface/datasets/blob/c9c1166e1cf81d38534020f9c167b326585339e5/src/datasets/arrow_dataset.py#L1072) with `streaming=False`, the internal logic will call [`download_and_prepare`](https://github.com/huggingface/datasets/blob/main/src/datas... | true | 2023-11-01T17:36:17Z | 2023-11-02T15:52:10Z | 2023-11-02T15:52:10Z | yundai424 | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6371 | false | [
"Indeed, setting `try_from_gcs` to `False` makes sense for `from_generator`.\r\n\r\nWe plan to deprecate and remove `try_from_hf_gcs` soon, as we can use Hub for file hosting now, but this is a good temporary fix.\r\n"
] |
1,972,073,909 | 6,370 | TensorDataset format does not work with Trainer from transformers | closed | ### Describe the bug
The model was built to do fine-tuning on a BERT model for relation extraction.
trainer.train() returns an error message ```TypeError: vars() argument must have __dict__ attribute``` when it has `train_dataset` generated from `torch.utils.data.TensorDataset`
However, in the document, the req... | true | 2023-11-01T10:09:54Z | 2023-11-29T16:31:08Z | 2023-11-29T16:31:08Z | jinzzasol | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6370 | false | [
"I figured it out. I found that `Trainer` does not work with TensorDataset even though the document says it uses it. Instead, I ended up creating a dictionary and converting it to a dataset using `dataset.Dataset.from_dict()`.\r\n\r\nI will leave this post open for a while. If someone knows a better approach, pleas... |
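The workaround in the comment amounts to transposing row tuples into a column dict before calling `Dataset.from_dict`; a stdlib sketch of that transpose (the token ids and field names are made up):

```python
# Row-oriented (input_ids, label) pairs, as a TensorDataset would hold them.
rows = [([101, 2023, 102], 0), ([101, 4937, 102], 1)]

# Column-oriented dict, the shape Dataset.from_dict (and hence Trainer) expects.
columns = {
    "input_ids": [ids for ids, _ in rows],
    "labels": [label for _, label in rows],
}
print(columns["labels"])  # [0, 1]
```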
1,971,794,108 | 6,369 | Multi process map did not load cache file correctly | closed | ### Describe the bug
When I was training a model on multiple GPUs via DDP, the dataset was tokenized multiple times after the main process.

 function returns bytes instead of PIL images even when image column is not part of "columns" | closed | ### Describe the bug
When using the with_format() function on a dataset containing images, even if the image column is not part of the columns provided in the function, its type will be changed to bytes.
Here is a minimal reproduction of the bug:
https://colab.research.google.com/drive/1hyaOspgyhB41oiR1-tXE3k_gJCdJU... | true | 2023-10-31T11:10:48Z | 2023-11-02T14:21:17Z | 2023-11-02T14:21:17Z | leot13 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6366 | false | [
"Thanks for reporting! I've opened a PR with a fix."
] |
1,970,140,392 | 6,365 | Parquet size grows exponentially for categorical data | closed | ### Describe the bug
It seems that when saving a data frame with a categorical column inside it, the size can grow exponentially.
This seems to happen because when we save the categorical data to parquet, we are saving the data + all the categories existing in the original data. This happens even when the categories ar... | true | 2023-10-31T10:29:02Z | 2023-10-31T10:49:17Z | 2023-10-31T10:49:17Z | aseganti | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6365 | false | [
"Wrong repo."
] |
1,969,136,106 | 6,364 | ArrowNotImplementedError: Unsupported cast from string to list using function cast_list | closed | Hi,
I am trying to load a local CSV dataset (similar to explodinggradients_fiqa) using `load_dataset`. When I try to pass features, I am facing the mentioned issue.
CSV Data sample(golden_dataset.csv):
Question | Context | answer | groundtruth
"what is abc?"... | true | 2023-10-30T20:14:01Z | 2023-10-31T19:21:23Z | 2023-10-31T19:21:23Z | divyakrishna-devisetty | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6364 | false | [
"You can use the following code to load this CSV with the list values preserved:\r\n```python\r\nfrom datasets import load_dataset\r\nimport ast\r\n\r\nconverters = {\r\n \"contexts\" : ast.literal_eval,\r\n \"ground_truths\" : ast.literal_eval,\r\n}\r\n\r\nds = load_dataset(\"csv\", data_files=\"golden_datas... |
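The converter trick from the comment works because the CSV stores Python-literal strings; a self-contained sketch with an in-memory CSV (column names follow the issue, values are placeholders):

```python
import ast
import csv
import io

# Two of the columns hold stringified Python lists; ast.literal_eval turns
# them back into real lists, which is what the converters= trick does.
raw = io.StringIO(
    'question,contexts,ground_truths\n'
    '"what is abc?","[\'ctx1\', \'ctx2\']","[\'abc is ...\']"\n'
)
rows = []
for row in csv.DictReader(raw):
    row["contexts"] = ast.literal_eval(row["contexts"])        # str -> list
    row["ground_truths"] = ast.literal_eval(row["ground_truths"])
    rows.append(row)
print(rows[0]["contexts"])  # ['ctx1', 'ctx2']
```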
1,968,891,277 | 6,363 | dataset.transform() hangs indefinitely while finetuning the stable diffusion XL | closed | ### Describe the bug
Multi-GPU fine-tuning of Stable Diffusion XL by following https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/README_sdxl.md hangs indefinitely.
### Steps to reproduce the bug
accelerate launch train_text_to_image_sdxl.py --pretrained_model_name_or_path=$MODEL_NAME --... | true | 2023-10-30T17:34:05Z | 2023-11-22T00:29:21Z | 2023-11-22T00:29:21Z | bhosalems | NONE | null | null | 7 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6363 | false | [
"I think the code hangs on the `accelerator.main_process_first()` context manager exit. To verify this, you can append a print statement to the end of the `accelerator.main_process_first()` block. \r\n\r\n\r\nIf the problem is in `with_transform`, it would help if you could share the error stack trace printed when... |
1,965,794,569 | 6,362 | Simplify filesystem logic | closed | Simplifies the existing filesystem logic (e.g., to avoid unnecessary if-else as mentioned in https://github.com/huggingface/datasets/pull/6098#issue-1827655071) | true | 2023-10-27T15:54:18Z | 2023-11-15T14:08:29Z | 2023-11-15T14:02:02Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6362 | 2023-11-15T14:02:02Z | 13 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6362 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,965,672,950 | 6,360 | Add support for `Sequence(Audio/Image)` feature in `push_to_hub` | closed | ### Feature request
Allow for `Sequence` of `Image` (or `Audio`) to be embedded inside the shards.
### Motivation
Currently, thanks to #3685, when `embed_external_files` is set to True (which is the default) in `push_to_hub`, features of type `Image` and `Audio` are embedded inside the arrow/parquet shards, instead ... | true | 2023-10-27T14:39:57Z | 2024-02-06T19:24:20Z | 2024-02-06T19:24:20Z | Laurent2916 | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6360 | false | [
"This issue stems from https://github.com/huggingface/datasets/blob/6d2f2a5e0fea3827eccfd1717d8021c15fc4292a/src/datasets/table.py#L2203-L2205\r\n\r\nI'll address it as part of https://github.com/huggingface/datasets/pull/6283.\r\n\r\nIn the meantime, this should work\r\n\r\n```python\r\nimport pyarrow as pa\r\nfro... |
1,965,378,583 | 6,359 | Stuck in "Resolving data files..." | open | ### Describe the bug
I have an image dataset with 300k images, each of size 768 × 768.
When I run `dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')` a second time, it takes 50 minutes to finish the "Resolving data files" part. What's going on in this part?
From my understa... | true | 2023-10-27T12:01:51Z | 2025-03-09T02:18:19Z | null | Luciennnnnnn | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6359 | false | [
"Most likely, the data file inference logic is the problem here.\r\n\r\nYou can run the following code to verify this:\r\n```python\r\nimport time\r\nfrom datasets.data_files import get_data_patterns\r\nstart_time = time.time()\r\nget_data_patterns(\"/path/to/img_dir\")\r\nend_time = time.time()\r\nprint(f\"Elapsed... |
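The timing probe from the comment generalizes to any suspect call; a stdlib sketch with a cheap computation standing in for `get_data_patterns("/path/to/img_dir")`:

```python
import time

start = time.perf_counter()
result = sum(range(1_000_000))  # stand-in for the suspect call being timed
elapsed = time.perf_counter() - start
print(f"Elapsed: {elapsed:.3f}s")
```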
1,965,014,595 | 6,358 | Mounting datasets cache fails due to absolute paths. | closed | ### Describe the bug
Creating a datasets cache and mounting this into, for example, a docker container, renders the data unreadable due to absolute paths written into the cache.
### Steps to reproduce the bug
1. Create a datasets cache by downloading some data
2. Mount the dataset folder into a docker contain... | true | 2023-10-27T08:20:27Z | 2024-04-10T08:50:06Z | 2023-11-28T14:47:12Z | charliebudd | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6358 | false | [
"You may be able to make it work by tweaking some environment variables, such as [`HF_HOME`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/environment_variables#hfhome) or [`HF_DATASETS_CACHE`](https://huggingface.co/docs/datasets/cache#cache-directory).",
"> You may be able to make it wor... |
1,964,653,995 | 6,357 | Allow passing a multiprocessing context to functions that support `num_proc` | open | ### Feature request
Allow specifying [a multiprocessing context](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) to functions that support `num_proc` or use multiprocessing pools. For example, the following could be done:
```python
dataset = dataset.map(_func, num_proc=2, mp_cont... | true | 2023-10-27T02:31:16Z | 2023-10-27T02:31:16Z | null | bryant1410 | CONTRIBUTOR | null | null | 0 | 2 | 2 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6357 | false | [] |
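Until such an argument exists, one workaround is to run the preprocessing through a pool created from an explicit context. This standard-library sketch only illustrates `multiprocessing.get_context`; the `"fork"` start method assumed here is Unix-only (with `"spawn"` the code would also need an `if __name__ == "__main__":` guard):

```python
import multiprocessing

def _square(x: int) -> int:
    # Must be a module-level function so the pool can pickle it.
    return x * x

# Choose an explicit start method instead of the platform default.
ctx = multiprocessing.get_context("fork")
with ctx.Pool(processes=2) as pool:
    squares = pool.map(_square, [1, 2, 3, 4])
```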
1,964,015,802 | 6,356 | Add `fsspec` version to the `datasets-cli env` command output | closed | ... to make debugging issues easier, as `fsspec`'s releases often introduce breaking changes. | true | 2023-10-26T17:19:25Z | 2023-10-26T18:42:56Z | 2023-10-26T18:32:21Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6356 | 2023-10-26T18:32:21Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6356 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,963,979,896 | 6,355 | More hub centric docs | closed | Let's have more hub-centric documentation in the datasets docs
Tutorials
- Add “Configure the dataset viewer” page
- Change order:
- Overview
- and more focused on the Hub rather than the library
- Then all the hub related things
- and mention how to read/write with other tools like pandas
- The... | true | 2023-10-26T16:54:46Z | 2024-01-11T06:34:16Z | 2023-10-30T17:32:57Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6355 | null | 3 | 1 | 0 | 1 | true | false | [] | https://github.com/huggingface/datasets/pull/6355 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,963,483,324 | 6,354 | `IterableDataset.from_spark` does not support multiple workers in pytorch `Dataloader` | open | ### Describe the bug
Looks like `IterableDataset.from_spark` does not support multiple workers in a PyTorch `DataLoader`, if I'm not missing anything.
Also, it returns inconsistent error messages, which probably depend on the nondeterministic order of worker execution.
Some examples I've encountered:
```
File "/l... | true | 2023-10-26T12:43:36Z | 2024-12-10T14:06:06Z | null | NazyS | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6354 | false | [
"I am having issues as well with this. \r\n\r\nHowever, the error I am getting is :\r\n`RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more informati... |
1,962,646,450 | 6,353 | load_dataset save_to_disk load_from_disk error | closed | ### Describe the bug
datasets version: 2.10.1
I ran `load_dataset` and `save_to_disk` successfully on Windows 10 (**and I `load_from_disk(/LLM/data/wiki)` successfully on Windows 10**), and I copied the dataset `/LLM/data/wiki`
into a ubuntu system, but when I `load_from_disk(/LLM/data/wiki)` on ubuntu, something weird ha... | true | 2023-10-26T03:47:06Z | 2024-04-03T05:31:01Z | 2023-10-26T10:18:04Z | brisker | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6353 | false | [
"solved.\r\nfsspec version problem",
"I'm using the latest datasets and fsspec , but still got this error!\r\n\r\ndatasets : Version: 2.13.0\r\n\r\nfsspec Version: 2023.10.0\r\n\r\n```\r\nFile \"/home/guoby/app/Anaconda3-2021.05/envs/news/lib/python3.8/site-packages/datasets/load.py\", line 1892, in load_from_... |
1,962,296,057 | 6,352 | Error loading wikitext data raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") | closed | I was trying to load the wiki dataset, but I got this error
traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train')
File "/home/aelkordy/.conda/envs/prune_llm/lib/python3.9/site-packages/datasets/load.py", line 1804, in load_dataset
ds = builder_instance.as_dataset(split=split, verific... | true | 2023-10-25T21:55:31Z | 2024-03-19T16:46:22Z | 2023-11-07T07:26:54Z | Ahmed-Roushdy | NONE | null | null | 13 | 5 | 5 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6352 | false | [
"+1 \r\n```\r\nFound cached dataset csv (file:///home/ubuntu/.cache/huggingface/datasets/theSquarePond___csv/theSquarePond--XXXXX-bbf0a8365d693d2c/0.0.0/eea64c71ca8b46dd3f537ed218fc9bf495d5707789152eb2764f5c78fa66d59d)\r\n---------------------------------------------------------------------------\r\nNotImplementedE... |
1,961,982,988 | 6,351 | Fix use_dataset.mdx | closed | The current example isn't working because it can't find `labels` inside the Dataset object. So I've added an extra step to the process. Tested and working in Colab. | true | 2023-10-25T18:21:08Z | 2023-10-26T17:19:49Z | 2023-10-26T17:10:27Z | angel-luis | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6351 | 2023-10-26T17:10:27Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6351 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,961,869,203 | 6,350 | Different objects are returned from calls that should be returning the same kind of object. | open | ### Describe the bug
1. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir, split='train[:1%]')
2. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir)
The only difference I would expect these cal... | true | 2023-10-25T17:08:39Z | 2023-10-26T21:03:06Z | null | phalexo | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6350 | false | [
"`load_dataset` returns a `DatasetDict` object unless `split` is defined, in which case it returns a `Dataset` (or a list of datasets if `split` is a list). We've discussed dropping `DatasetDict` from the API in https://github.com/huggingface/datasets/issues/5189 to always return the same type in `load_dataset` an... |
1,961,435,673 | 6,349 | Can't load ds = load_dataset("imdb") | closed | ### Describe the bug
I did `from datasets import load_dataset, load_metric` and then `ds = load_dataset("imdb")` and it gave me the error:
ExpectedMoreDownloadedFiles: {'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'}
I tried doing `ds = load_dataset("imdb",download_mode="force_redownload")` as we... | true | 2023-10-25T13:29:51Z | 2024-03-20T15:09:53Z | 2023-10-31T19:59:35Z | vivianc2 | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6349 | false | [
"I'm unable to reproduce this error. The server hosting the files may have been down temporarily, so try again.",
"getting the same error",
"I am getting the following error:\r\nEnv: Python3.10\r\ndatasets: 2.10.1\r\nLinux: Amazon Linux2\r\n\r\n`Traceback (most recent call last):\r\n File \"<stdin>\", line 1, ... |
1,961,268,504 | 6,348 | Parquet stream-conversion fails to embed images/audio files from gated repos | open | it seems to be an issue with datasets not passing the token to embed_table_storage when generating a dataset
See https://github.com/huggingface/datasets-server/issues/2010 | true | 2023-10-25T12:12:44Z | 2025-04-17T12:21:43Z | null | severo | COLLABORATOR | null | null | 1 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6348 | false | [
"I have created a project to stream audio in the datasets viewer on Hugging Face using Parquet.\n\nhttps://github.com/pr0mila/ParquetToHuggingFace"
] |
1,959,004,835 | 6,347 | Incorrect example code in 'Create a dataset' docs | closed | ### Describe the bug
On [this](https://huggingface.co/docs/datasets/create_dataset) page, the example code for loading in images and audio is incorrect.
Currently, examples are:
``` python
from datasets import ImageFolder
dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon")
```
and
``` python... | true | 2023-10-24T11:01:21Z | 2023-10-25T13:05:21Z | 2023-10-25T13:05:21Z | rwood-97 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6347 | false | [
"This was fixed in https://github.com/huggingface/datasets/pull/6247. You can find the fix in the `main` version of the docs",
"Ah great, thanks :)"
] |
1,958,777,076 | 6,346 | Fix UnboundLocalError if preprocessing returns an empty list | closed | If this tokenization function is used with IterableDatasets and no sample is as big as the context length, `input_batch` will be an empty list.
```
def tokenize(batch, tokenizer, context_length):
outputs = tokenizer(
batch["text"],
truncation=True,
max_length=context_length,
r... | true | 2023-10-24T08:38:43Z | 2023-10-25T17:39:17Z | 2023-10-25T16:36:38Z | cwallenwein | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6346 | 2023-10-25T16:36:38Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6346 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,957,707,870 | 6,345 | support squad structure datasets using a YAML parameter | open | ### Feature request
Since the SQuAD structure is widely used, I think it could be beneficial to support it using a YAML parameter.
Could you implement automatic loading of SQuAD-like data using the SQuAD JSON format, to read it from JSON files and view it in the correct SQuAD structure?
The dataset structure should... | true | 2023-10-23T17:55:37Z | 2023-10-23T17:55:37Z | null | MajdTannous1 | NONE | null | null | 0 | 1 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6345 | false | [] |
1,957,412,169 | 6,344 | set dev version | closed | true | 2023-10-23T15:13:28Z | 2023-10-23T15:24:31Z | 2023-10-23T15:13:38Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6344 | 2023-10-23T15:13:38Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6344 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6344). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | |
1,957,370,711 | 6,343 | Remove unused argument in `_get_data_files_patterns` | closed | true | 2023-10-23T14:54:18Z | 2023-11-16T09:09:42Z | 2023-11-16T09:03:39Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6343 | 2023-11-16T09:03:39Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6343 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,957,344,445 | 6,342 | Release: 2.14.6 | closed | true | 2023-10-23T14:43:26Z | 2023-10-23T15:21:54Z | 2023-10-23T15:07:25Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6342 | 2023-10-23T15:07:25Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6342 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | |
1,956,917,893 | 6,340 | Release 2.14.5 | closed | (wrong release number - I was continuing the 2.14 branch but 2.14.5 was released from `main`) | true | 2023-10-23T11:10:22Z | 2023-10-23T14:20:46Z | 2023-10-23T11:12:40Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6340 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6340 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6340). All of your documentation changes will be reflected on that endpoint."
] |
1,956,912,627 | 6,339 | minor release step improvement | closed | true | 2023-10-23T11:07:04Z | 2023-11-07T10:38:54Z | 2023-11-07T10:32:41Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6339 | 2023-11-07T10:32:41Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6339 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,956,886,072 | 6,338 | pin fsspec before it switches to glob.glob | closed | true | 2023-10-23T10:50:54Z | 2024-01-11T06:32:56Z | 2023-10-23T10:51:52Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6338 | null | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6338 | true | [
"closing in favor of https://github.com/huggingface/datasets/pull/6337",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6338). All of your documentation changes will be reflected on that endpoint."
] | |
1,956,875,259 | 6,337 | Pin supported upper version of fsspec | closed | Pin upper version of `fsspec` to avoid disruptions introduced by breaking changes (and the need of urgent patch releases with hotfixes) on each release on their side. See:
- #6331
- #6210
- #5731
- #5617
- #5447
I propose that we explicitly test, introduce fixes and support each new `fsspec` version release.
... | true | 2023-10-23T10:44:16Z | 2023-10-23T12:13:20Z | 2023-10-23T12:04:36Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6337 | 2023-10-23T12:04:36Z | 6 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6337 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,956,827,232 | 6,336 | unpin-fsspec | closed | Close #6333. | true | 2023-10-23T10:16:46Z | 2024-02-07T12:41:35Z | 2023-10-23T10:17:48Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6336 | 2023-10-23T10:17:48Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6336 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6336). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
1,956,740,818 | 6,335 | Support fsspec 2023.10.0 | closed | Fix #6333. | true | 2023-10-23T09:29:17Z | 2024-01-11T06:33:35Z | 2023-11-14T14:17:40Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6335 | null | 7 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6335 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,956,719,774 | 6,334 | datasets.filesystems: fix is_remote_filesystems | closed | Close #6330, close #6333.
`fsspec.implementations.LocalFileSystem.protocol` was changed from the `str` "file" to the `tuple[str, ...]` ("file", "local") in `fsspec>=2023.10.0`.
This commit supports both styles.
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,956,714,423 | 6,333 | Support fsspec 2023.10.0 | closed | Once root issue is fixed, remove temporary pin of fsspec < 2023.10.0 introduced by:
- #6331
Related to issue:
- #6330
As @ZachNagengast suggested, the issue might be related to:
- https://github.com/fsspec/filesystem_spec/pull/1381 | true | 2023-10-23T09:14:53Z | 2024-02-07T12:39:58Z | 2024-02-07T12:39:58Z | albertvillanova | MEMBER | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6333 | false | [
"Hi @albertvillanova @lhoestq \r\n\r\nI believe the pull request that pins the fsspec version (https://github.com/huggingface/datasets/pull/6331) was merged by mistake. Another fix for the issue was merged on the same day an hour apart. See https://github.com/huggingface/datasets/pull/6334\r\n\r\nI'm now having an ... |
1,956,697,328 | 6,332 | Replace deprecated license_file in setup.cfg | closed | Replace deprecated license_file in `setup.cfg`.
See: https://github.com/huggingface/datasets/actions/runs/6610930650/job/17953825724?pr=6331
```
/tmp/pip-build-env-a51hls20/overlay/lib/python3.8/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
!!
... | true | 2023-10-23T09:05:26Z | 2023-11-07T08:23:10Z | 2023-11-07T08:09:06Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6332 | 2023-11-07T08:09:06Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6332 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,956,671,256 | 6,331 | Temporarily pin fsspec < 2023.10.0 | closed | Temporarily pin fsspec < 2023.10.0 until a permanent solution is found.
Hot fix #6330.
See: https://github.com/huggingface/datasets/actions/runs/6610904287/job/17953774987
```
...
ERROR tests/test_iterable_dataset.py::test_iterable_dataset_from_file - NotImplementedError: Loading a dataset cached in a LocalFileS... | true | 2023-10-23T08:51:50Z | 2023-10-23T09:26:42Z | 2023-10-23T09:17:55Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6331 | 2023-10-23T09:17:55Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6331 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,956,053,294 | 6,330 | Latest fsspec==2023.10.0 issue with streaming datasets | closed | ### Describe the bug
Loading a streaming dataset with this version of fsspec fails with the following error:
`NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.`
I suspect the issue is with this PR
https://github.com/fsspec/filesystem_spec/pull/1381
### Steps ... | true | 2023-10-22T20:57:10Z | 2025-06-09T22:00:16Z | 2023-10-23T09:17:56Z | ZachNagengast | CONTRIBUTOR | null | null | 9 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6330 | false | [
"I also encountered a similar error below.\r\nAppreciate the team could shed some light on this issue.\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n[/home/ubuntu/work/EveryDream2trainer/pre... |
1,955,858,020 | 6,329 | Text-to-speech networks first convert the given text to an intermediate representation | closed | Text-to-speech networks first convert the given text to an intermediate representation | true | 2023-10-22T11:07:46Z | 2023-10-23T09:22:58Z | 2023-10-23T09:22:58Z | shabnam706 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6329 | false | []
1,955,857,904 | 6,328 | Text-to-speech networks first convert the given text to an intermediate representation | closed | | true | 2023-10-22T11:07:21Z | 2023-10-23T09:22:38Z | 2023-10-23T09:22:38Z | shabnam706 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6328 | false | [
"شبکه های متن به گفتار ابتدا متن داده شده را به بازنمایی میانی"
] | |
1,955,470,755 | 6,327 | FileNotFoundError when trying to load the downloaded dataset with `load_dataset(..., streaming=True)` | closed | ### Describe the bug
Hi, I'm trying to load the dataset `togethercomputer/RedPajama-Data-1T-Sample` with `load_dataset` in streaming mode, i.e., `streaming=True`, but `FileNotFoundError` occurs.
### Steps to reproduce the bug
I've downloaded the dataset and saved it to the cache dir in advance. My hope is loadi...
"You can clone the `togethercomputer/RedPajama-Data-1T-Sample` repo and load the dataset with `load_dataset(\"path/to/cloned_repo\")` to use it offline.",
"@mariosasko Thank you for your kind reply! I'll try it as a workaround.\r\nDoes that mean that currently it's not supported to simply load with a short name?"... |
1,955,420,536 | 6,326 | Create battery_analysis.py | closed | true | 2023-10-21T10:07:48Z | 2023-10-23T14:56:20Z | 2023-10-23T14:56:20Z | vinitkm | NONE | https://github.com/huggingface/datasets/pull/6326 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6326 | true | [] | |
1,955,420,178 | 6,325 | Create battery_analysis.py | closed | true | 2023-10-21T10:06:37Z | 2023-10-23T14:55:58Z | 2023-10-23T14:55:58Z | vinitkm | NONE | https://github.com/huggingface/datasets/pull/6325 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6325 | true | [] | |
1,955,126,687 | 6,324 | Conversion to Arrow fails due to wrong type heuristic | closed | ### Describe the bug
I have a list of dictionaries with valid/JSON-serializable values.
One key is the denominator for a paragraph. In 99.9% of cases it's a number, but there are some occurrences of '1a', '2b' and so on.
If trying to convert this list to a dataset with `Dataset.from_list()`, I always get
`ArrowI... | true | 2023-10-20T23:20:58Z | 2023-10-23T20:52:57Z | 2023-10-23T20:52:57Z | jphme | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6324 | false | [
"Unlike Pandas, Arrow is strict with types, so converting the problematic strings to ints (or ints to strings) to ensure all the values have the same type is the only fix. \r\n\r\nJSON support has been requested in Arrow [here](https://github.com/apache/arrow/issues/32538), but I don't expect this to be implemented... |
1,954,245,980 | 6,323 | Loading dataset from large GCS bucket very slow since 2.14 | open | ### Describe the bug
Since updating to >2.14 we have very slow access to our parquet files on GCS when loading a dataset (>30 min vs 3s). Our GCS bucket has many objects and resolving globs is very slow. I was able to track the problem down to this change:
https://github.com/huggingface/datasets/blame/bade7af74437347a76083... | true | 2023-10-20T12:59:55Z | 2024-09-03T18:42:33Z | null | jbcdnr | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6323 | false | [
"I've also encountered this issue recently and want to ask if this has been seen.\r\n\r\n@albertvillanova for visibility - I'm not sure who the right person is to tag, but I noticed you were active recently so perhaps you can direct this to the right person.\r\n\r\nThanks!"
] |
1,952,947,461 | 6,322 | Fix regex `get_data_files` formatting for base paths | closed | With PR https://github.com/huggingface/datasets/pull/6309, the entire base path is formatted into the regex, which results in the undesired `doesn't match the pattern` error because of this line in `glob_pattern_to_regex`: `.replace("//", "/")`:
- Input: `hf://datasets/...`
- Output: `hf:/datasets/...`... | true | 2023-10-19T19:45:10Z | 2023-10-23T14:40:45Z | 2023-10-23T14:31:21Z | ZachNagengast | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6322 | 2023-10-23T14:31:21Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6322 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> The reason why I used the the glob_pattern_to_regex in the entire pattern is because otherwise I got an error for Windows local paths: a base_path like 'C:\\\\Users\\\\runneradmin... made the function string_to_dict raise re.error:... |
1,952,643,483 | 6,321 | Fix typos | closed | true | 2023-10-19T16:24:35Z | 2023-10-19T17:18:00Z | 2023-10-19T17:07:35Z | python273 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6321 | 2023-10-19T17:07:35Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6321 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,952,618,316 | 6,320 | Dataset slice splits can't load training and validation at the same time | closed | ### Describe the bug
According to the [documentation](https://huggingface.co/docs/datasets/v2.14.5/loading#slice-splits) it should be possible to run the following command:
`train_test_ds = datasets.load_dataset("bookcorpus", split="train+test")`
to load the train and test sets from the dataset.
However ex... | true | 2023-10-19T16:09:22Z | 2023-11-30T16:21:15Z | 2023-11-30T16:21:15Z | timlac | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6320 | false | [
"The expression \"train+test\" concatenates the splits.\r\n\r\nThe individual splits as separate datasets can be obtained as follows:\r\n```python\r\ntrain_ds, test_ds = load_dataset(\"<dataset_name>\", split=[\"train\", \"test\"])\r\ntrain_10pct_ds, test_10pct_ds = load_dataset(\"<dataset_name>\", split=[\"train[:... |
1,952,101,717 | 6,319 | Datasets.map is severely broken | open | ### Describe the bug
Regardless of how many cores I use (I have 16 or 32 threads), map slows down to a crawl at around 80% done, lingers extremely slowly until maybe 97%, and NEVER finishes the job. It just hangs.
After watching this for 27 hours I control-C out of it. Until the end one process appears to be doing s... | true | 2023-10-19T12:19:33Z | 2024-08-08T17:05:08Z | null | phalexo | NONE | null | null | 15 | 6 | 6 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6319 | false | [
"Hi! Instead of processing a single example at a time, you should use the batched `map` for the best performance (with `num_proc=1`) - the fast tokenizers can process a batch's samples in parallel in that scenario.\r\n\r\nE.g., the following code in Colab takes an hour to complete:\r\n```python\r\n# !pip install da... |
1,952,100,706 | 6,318 | Deterministic set hash | closed | Sort the items in a set according to their `datasets.fingerprint.Hasher.hash` hash to get a deterministic hash of sets.
This is useful to get deterministic hashes of tokenizers that use a trie based on python sets.
reported in https://github.com/huggingface/datasets/issues/3847 | true | 2023-10-19T12:19:13Z | 2023-10-19T16:27:20Z | 2023-10-19T16:16:31Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6318 | 2023-10-19T16:16:31Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6318 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,951,965,668 | 6,317 | sentiment140 dataset unavailable | closed | ### Describe the bug
loading the dataset using load_dataset("sentiment140") returns the following error
ConnectionError: Couldn't reach http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip (error 403)
### Steps to reproduce the bug
Run the following code (version should not matter).
```
from data... | true | 2023-10-19T11:25:21Z | 2023-10-19T13:04:56Z | 2023-10-19T13:04:56Z | AndreasKarasenko | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6317 | false | [
"Thanks for reporting. We are investigating the issue.",
"We have opened an issue in the corresponding Hub dataset: https://huggingface.co/datasets/sentiment140/discussions/3\r\n\r\nLet's continue the discussion there."
] |
1,951,819,869 | 6,316 | Fix loading Hub datasets with CSV metadata file | closed | Currently, the reading of the metadata file infers the file extension (.jsonl or .csv) from the passed filename. However, downloaded files from the Hub don't have file extension. For example:
- the original file: `hf://datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5916a4-16977085077831/metadata.jsonl`
- correspon... | true | 2023-10-19T10:21:34Z | 2023-10-20T06:23:21Z | 2023-10-20T06:14:09Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6316 | 2023-10-20T06:14:09Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6316 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,951,800,819 | 6,315 | Hub datasets with CSV metadata raise ArrowInvalid: JSON parse error: Invalid value. in row 0 | closed | When trying to load a Hub dataset that contains a CSV metadata file, it raises an `ArrowInvalid` error:
```
E pyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0
pyarrow/error.pxi:100: ArrowInvalid
```
See: https://huggingface.co/datasets/lukarape/public_small_papers/discussions/1 | true | 2023-10-19T10:11:29Z | 2023-10-20T06:14:10Z | 2023-10-20T06:14:10Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6315 | false | [] |
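Per the companion fix in #6316 above, the root cause is that metadata files downloaded from the Hub cache lose their `.jsonl`/`.csv` extension, so CSV content was handed to the JSON parser. Below is a minimal, illustrative sketch of extension-then-content dispatch; the helper name and the content-sniffing heuristic are assumptions for illustration, not the actual `datasets` logic:

```python
import csv
import io
import json

def read_metadata(text: str, filename: str) -> list:
    """Dispatch on the file extension when present; otherwise sniff the
    content, since Hub-downloaded cache files may carry no extension."""
    if filename.endswith(".csv"):
        return list(csv.DictReader(io.StringIO(text)))
    if filename.endswith(".jsonl") or text.lstrip().startswith("{"):
        # One JSON object per non-empty line (JSON Lines).
        return [json.loads(line) for line in text.splitlines() if line.strip()]
    # Extensionless file that does not look like JSON: treat as CSV.
    return list(csv.DictReader(io.StringIO(text)))

# An extensionless cache file holding CSV no longer reaches the JSON parser:
rows = read_metadata("file_name,label\na.png,cat\n", "hf_cache_0123")
```

Here `"hf_cache_0123"` stands in for an extensionless cached filename; the real fix in #6316 resolves the format from the original remote filename instead.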
1,951,684,763 | 6,314 | Support creating new branch in push_to_hub | closed | This adds support for creating a new branch when pushing a dataset to the hub. Tested both methods locally and branches are created. | true | 2023-10-19T09:12:39Z | 2023-10-19T09:20:06Z | 2023-10-19T09:19:48Z | jmif | NONE | https://github.com/huggingface/datasets/pull/6314 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6314 | true | [] |
1,951,527,712 | 6,313 | Fix commit message formatting in multi-commit uploads | closed | Currently, the commit message keeps on adding:
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset (part 00000-of-00002) (part 00001-of-00002)`
Introduced in https://github.com/huggingface/datasets/pull/6269
This PR fixes this issue to have
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset... | true | 2023-10-19T07:53:56Z | 2023-10-20T14:06:13Z | 2023-10-20T13:57:39Z | qgallouedec | MEMBER | https://github.com/huggingface/datasets/pull/6313 | 2023-10-20T13:57:38Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6313 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
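The fix here is essentially to format every part's commit message from the unchanged base string instead of appending to a running one. A plain-Python sketch of both behaviors (hypothetical helper names, not the actual upload code in `datasets`):

```python
def buggy_messages(base: str, num_parts: int) -> list:
    """Appends to a running string, so '(part ...)' suffixes pile up."""
    msg, out = base, []
    for i in range(num_parts):
        msg += f" (part {i:05d}-of-{num_parts:05d})"
        out.append(msg)
    return out

def fixed_messages(base: str, num_parts: int) -> list:
    """Always derives each message from the original base string."""
    return [f"{base} (part {i:05d}-of-{num_parts:05d})" for i in range(num_parts)]

# fixed_messages("Upload dataset", 2) carries exactly one suffix per commit:
# ["Upload dataset (part 00000-of-00002)", "Upload dataset (part 00001-of-00002)"]
```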
1,950,128,416 | 6,312 | docs: resolving namespace conflict, refactored variable | closed | In docs of about_arrow.md, in the below example code

The variable name 'time' was being used in a way that could potentially lead to a namespace conflict with Python's built-in 'time' module. It is not a good conven... | true | 2023-10-18T16:10:59Z | 2023-10-19T16:31:59Z | 2023-10-19T16:23:07Z | smty2018 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6312 | 2023-10-19T16:23:07Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6312 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,949,304,993 | 6,311 | cast_column to Sequence with length=4 occur exception raise in datasets/table.py:2146 | closed | ### Describe the bug
i load a dataset from local csv file which has 187383612 examples, then use `map` to generate new columns for test.
here is my code :
```
import os
from datasets import load_dataset
from datasets.features import Sequence, Value
def add_new_path(example):
example["ais_bbox"] =... | true | 2023-10-18T09:38:05Z | 2024-02-06T19:24:20Z | 2024-02-06T19:24:20Z | neiblegy | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6311 | false | [
"Thanks for reporting! We've spotted the bugs with the `array.values` handling and are fixing them in https://github.com/huggingface/datasets/pull/6283 (should be part of the next release).",
"> Thanks for reporting! We've spotted the bugs with the `array.values` handling and are fixing them in #6283 (should be p... |
1,947,457,988 | 6,310 | Add return_file_name in load_dataset | closed | Proposition to fix #5806.
Added an optional parameter `return_file_name` in the dataset builder config. When set to `True`, the function will include the file name corresponding to the sample in the returned output.
There is a difference between arrow-based and folder-based datasets to return the file name:
- fo... | true | 2023-10-17T13:36:57Z | 2024-08-09T11:51:55Z | 2024-07-31T13:56:50Z | juliendenize | NONE | https://github.com/huggingface/datasets/pull/6310 | null | 7 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6310 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6310). All of your documentation changes will be reflected on that endpoint.",
"> Thanks for the change !\r\n> \r\n> Since `return` in python often refers to what is actually returned by the function (here `load_dataset`), I th... |
1,946,916,969 | 6,309 | Fix get_data_patterns for directories with the word data twice | closed | Before the fix, `get_data_patterns` inferred wrongly the split name for paths with the word "data" twice:
- For the URL path: `hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357/data/train-00001-of-00009.parquet` (note the org name `piuba-bigdata/` ending with `data/`)
- The in... | true | 2023-10-17T09:00:39Z | 2023-10-18T14:01:52Z | 2023-10-18T13:50:35Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6309 | 2023-10-18T13:50:35Z | 7 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6309 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,946,810,625 | 6,308 | module 'resource' has no attribute 'error' | closed | ### Describe the bug
just run import:
`from datasets import load_dataset`
and then:
```
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\arrow... | true | 2023-10-17T08:08:54Z | 2023-10-25T17:09:22Z | 2023-10-25T17:09:22Z | NeoWang9999 | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6308 | false | [
"This (Windows) issue was fixed in `fsspec` in https://github.com/fsspec/filesystem_spec/pull/1275. So, to avoid the error, update the `fsspec` installation with `pip install -U fsspec`.",
"> This (Windows) issue was fixed in `fsspec` in [fsspec/filesystem_spec#1275](https://github.com/fsspec/filesystem_spec/pul... |
1,946,414,808 | 6,307 | Fix typo in code example in docs | closed | true | 2023-10-17T02:28:50Z | 2023-10-17T12:59:26Z | 2023-10-17T06:36:19Z | bryant1410 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6307 | 2023-10-17T06:36:18Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6307 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,946,363,452 | 6,306 | pyinstaller : OSError: could not get source code | closed | ### Describe the bug
I ran a package with pyinstaller and got the following error:
### Steps to reproduce the bug
```
...
File "datasets\__init__.py", line 52, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_an... | true | 2023-10-17T01:41:51Z | 2023-11-02T07:24:51Z | 2023-10-18T14:03:42Z | dusk877647949 | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6306 | false | [
"more information:\r\n``` \r\nFile \"text2vec\\__init__.py\", line 8, in <module>\r\nFile \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\nFile \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\nFile \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\nFile \"... |
1,946,010,912 | 6,305 | Cannot load dataset with `2.14.5`: `FileNotFound` error | closed | ### Describe the bug
I'm trying to load [piuba-bigdata/articles_and_comments] and I'm stumbling with this error on `2.14.5`. However, this works on `2.10.0`.
### Steps to reproduce the bug
[Colab link](https://colab.research.google.com/drive/1SAftFMQnFE708ikRnJJHIXZV7R5IBOCE#scrollTo=r2R2ipCCDmsg)
```python
D... | true | 2023-10-16T20:11:27Z | 2023-10-18T13:50:36Z | 2023-10-18T13:50:36Z | finiteautomata | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6305 | false | [
"Thanks for reporting, @finiteautomata.\r\n\r\nWe are investigating it. ",
"There is a bug in `datasets`. You can see our proposed fix:\r\n- #6309 "
] |
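As the linked fix (#6309) explains, this failure traces back to split-pattern inference matching the substring `data` inside the org name `piuba-bigdata`. A regex sketch of the distinction — illustrative only, not the actual `get_data_patterns` implementation:

```python
import re

path = ("hf://datasets/piuba-bigdata/articles_and_comments"
        "@f328d536425ae8fcac5d098c8408f437bffdd357/data/train-00001-of-00009.parquet")

# Naive substring search: the first "data/" sits inside "piuba-bigdata/".
naive = path.index("data/")

# Requiring "data" to be a whole path component skips the org name.
match = re.search(r"(?:^|/)data/", path)

assert path[naive - 3:naive] == "big"             # matched inside the org name
assert path[match.start():].startswith("/data/")  # matched the real data dir
```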
1,945,913,521 | 6,304 | Update README.md | closed | Fixed typos in ReadMe and added punctuation marks
Tensorflow --> TensorFlow | true | 2023-10-16T19:10:39Z | 2023-10-17T15:13:37Z | 2023-10-17T15:04:52Z | smty2018 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6304 | 2023-10-17T15:04:52Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6304 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,943,466,532 | 6,303 | Parquet uploads off-by-one naming scheme | open | ### Describe the bug
I noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion, what is the actual proper way to have these stored?
<img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71... | true | 2023-10-14T18:31:03Z | 2023-10-16T16:33:21Z | null | ZachNagengast | CONTRIBUTOR | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6303 | false | [
"You can find the reasoning behind this naming scheme [here](https://github.com/huggingface/transformers/pull/16343#discussion_r931182168).\r\n\r\nThis point has been raised several times, so I'd be okay with starting with `00001-` (also to be consistent with the `transformers` sharding), but I'm not sure @lhoestq ... |
1,942,096,078 | 6,302 | ArrowWriter/ParquetWriter `write` method does not increase `_num_bytes` and hence datasets not sharding at `max_shard_size` | closed | ### Describe the bug
An example from [1], does not work when limiting shards with `max_shard_size`.
Try the following example with low `max_shard_size`, such as:
```python
builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="10MB")
```
The reason f... | true | 2023-10-13T14:43:36Z | 2023-10-17T06:52:12Z | 2023-10-17T06:52:11Z | Rassibassi | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6302 | false | [
"`writer._num_bytes` is updated every `writer_batch_size`-th call to the `write` method (default `writer_batch_size` is 1000 (examples)). You should be able to see the update by passing a smaller `writer_batch_size` to the `load_dataset_builder`.\r\n\r\nWe could improve this by supporting the string `writer_batch_s... |
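The comment above describes the mechanics: the byte counter only refreshes when a full batch of examples is flushed, so a `max_shard_size` check made between flushes reads a stale value. A toy model of that update cadence in pure Python — not the real `ArrowWriter`, just the behavior it describes:

```python
class ToyBatchedWriter:
    """Buffers examples and refreshes `_num_bytes` only on a full-batch
    flush, mimicking the update cadence described above."""

    def __init__(self, writer_batch_size=1000):
        self.writer_batch_size = writer_batch_size
        self._buffer = []
        self._num_bytes = 0

    def write(self, example):
        self._buffer.append(example)
        if len(self._buffer) >= self.writer_batch_size:
            self._num_bytes += sum(len(e) for e in self._buffer)
            self._buffer.clear()

writer = ToyBatchedWriter(writer_batch_size=1000)
for _ in range(999):
    writer.write(b"x" * 100)
stale = writer._num_bytes   # still 0: a max_shard_size check here sees nothing
writer.write(b"x" * 100)    # the 1000th example triggers the flush
fresh = writer._num_bytes   # now reflects all buffered bytes
```

Passing a smaller `writer_batch_size`, as the comment suggests, shortens the window in which the counter is stale.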
1,940,183,999 | 6,301 | Unpin `tensorflow` maximum version | closed | Removes the temporary pin introduced in #6264 | true | 2023-10-12T14:58:07Z | 2023-10-12T15:58:20Z | 2023-10-12T15:49:54Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6301 | 2023-10-12T15:49:54Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6301 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,940,153,432 | 6,300 | Unpin `jax` maximum version | closed | fix #6299
fix #6202 | true | 2023-10-12T14:42:40Z | 2023-10-12T16:37:55Z | 2023-10-12T16:28:57Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6300 | 2023-10-12T16:28:57Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6300 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,939,649,238 | 6,299 | Support for newer versions of JAX | closed | ### Feature request
Hi,
I like your idea of adapting the datasets library to be usable with JAX. Thank you for that.
However, in your [setup.py](https://github.com/huggingface/datasets/blob/main/setup.py), you enforce old versions of JAX <= 0.3... It is very cumbersome !
What is the rationale for such a lim... | true | 2023-10-12T10:03:46Z | 2023-10-12T16:28:59Z | 2023-10-12T16:28:59Z | ddrous | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6299 | false | [] |