id (int64) | number (int64) | title (string) | state (string) | body (string) | is_pull_request (bool) | created_at (string, date) | updated_at (string, date) | closed_at (string ⌀) | user_login (string) | author_association (string) | pr_url (string ⌀) | pr_merged_at (string ⌀) | comments_count (int64) | reactions_total (int64) | reactions_plus1 (int64) | reactions_heart (int64) | draft (bool) | locked (bool) | labels (list) | html_url (string) | is_pr_url (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,089,713,945 | 6,604 | Transform fingerprint collisions due to setting fixed random seed | closed | ### Describe the bug
The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random... | true | 2024-01-19T06:32:25Z | 2024-01-26T15:05:35Z | 2024-01-26T15:05:35Z | normster | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6604 | false | [
"I've opened a PR with a fix.",
"I don't think the PR fixes the root cause, since it still relies on the `random` library which will often have its seed fixed. I think the builtin `uuid.uuid4()` is a better choice: https://docs.python.org/3/library/uuid.html"
] |
2,089,230,766 | 6,603 | datasets map `cache_file_name` does not work | open | ### Describe the bug
In the documentation, the `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but it doesn't work.
### Steps to reproduce the bug
1. pick a dataset
2. write a map function
3. do `ds.map(..., cache_file_name='some_filename')`
4. it crashes
### Expected behavior
It will tell you t... | true | 2024-01-18T23:08:30Z | 2024-01-28T04:01:15Z | null | ChenchaoZhao | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6603 | false | [
"Unfortunately, I'm unable to reproduce this error. Can you share the reproducer?",
"```\r\nds = datasets.Dataset.from_dict(dict(a=[i for i in range(100)]))\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-fn\") # this worked\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_na... |
2,089,217,483 | 6,602 | Index error when data is large | open | ### Describe the bug
At the `save_to_disk` step, the default `max_shard_size` is `500MB`. However, one row of the dataset might be larger than `500MB`, in which case the saving will throw an index error. Without looking at the source code, the bug is due to a wrong calculation of the number of shards, which I think is
`total_size / m... | true | 2024-01-18T23:00:47Z | 2025-04-16T04:13:01Z | null | ChenchaoZhao | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6602 | false | [
"I'm facing this problem while doing my translation of [mteb/stackexchange-clustering](https://huggingface.co/datasets/mteb/stackexchange-clustering). each row has lots of samples (up to 100k samples), because in this dataset, each row represent multiple clusters.\nmy hack is to setting `max_shard_size` to 20Gb or ... |
2,088,624,054 | 6,601 | add safety checks when using only part of dataset | open | Added some checks to prevent errors that arise when using evaluate.py on only a portion of the squad 2.0 dataset. | true | 2024-01-18T16:16:59Z | 2024-02-08T14:33:10Z | null | benseddikismail | NONE | https://github.com/huggingface/datasets/pull/6601 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6601 | true | [
"Hi ! The metrics in `datasets` are deprecated in favor of https://github.com/huggingface/evaluate\r\n\r\nYou can open a PR here instead: https://huggingface.co/spaces/evaluate-metric/squad_v2/tree/main"
] |
2,088,446,385 | 6,600 | Loading CSV exported dataset has unexpected format | open | ### Describe the bug
I wanted to be able to save an HF dataset for translations and load it again in another script, but I'm a bit confused by the documentation and the result I got, so I'm opening this issue to ask whether this behavior is expected.
### Steps to reproduce the bug
The documentation I've mainly cons... | true | 2024-01-18T14:48:27Z | 2024-01-23T14:42:32Z | null | OrianeN | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6600 | false | [
"Hi! Parquet is the only format that supports complex/nested features such as `Translation`. So, this should work:\r\n```python\r\ntest_dataset = load_dataset(\"opus100\", name=\"en-fr\", split=\"test\")\r\n\r\n# Save with .to_parquet()\r\ntest_parquet_path = \"try_testset_save.parquet\"\r\ntest_dataset.to_parquet(... |
2,086,684,664 | 6,599 | Easy way to segment into 30s snippets given an m4a file and a vtt file | closed | ### Feature request
Uploading datasets is straightforward thanks to the ability to push Audio to the Hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already).
### Motivation
It's easy to create a vtt file from an audio file. If there could be auto-segment... | true | 2024-01-17T17:51:40Z | 2024-01-23T10:42:17Z | 2024-01-22T15:35:49Z | RonanKMcGovern | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6599 | false | [
"Hi! Non-generic data processing is out of this library's scope, so it's downstream libraries/users' responsibility to implement such logic.",
"That's fair. Thanks"
] |
2,084,236,605 | 6,598 | Unexpected keyword argument 'hf' when downloading CSV dataset from S3 | closed | ### Describe the bug
I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`:
```
TypeError: Session.__init__() got an unexpected keyword argument 'hf'
```
I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-w... | true | 2024-01-16T15:16:01Z | 2025-01-31T15:35:33Z | 2024-07-23T14:30:10Z | dguenms | NONE | null | null | 8 | 11 | 11 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6598 | false | [
"I am facing similar issue while reading a csv file from s3. Wondering if somebody has found a workaround. ",
"same thing happened to other formats like parquet",
"I am facing similar issue while reading a parquet file from s3.\r\ni try with every version between 2.14 to 2.16.1 but it dosen't work ",
"Re-def... |
2,083,708,521 | 6,597 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | closed | While using `Dataset.push_to_hub` on a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_descriptio... | true | 2024-01-16T11:27:07Z | 2024-02-05T12:29:37Z | 2024-02-05T12:29:37Z | albertvillanova | MEMBER | null | null | 6 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6597 | false | [
"It is caused by these code lines: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1688-L1694",
"Also note the information in the docstring: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/datase... |
2,083,108,156 | 6,596 | Drop redundant None guard. | closed | `xxx if xxx is not None else None` is a no-op. | true | 2024-01-16T06:31:54Z | 2024-01-16T17:16:16Z | 2024-01-16T17:05:52Z | xkszltl | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6596 | 2024-01-16T17:05:52Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6596 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6596). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,082,896,148 | 6,595 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | closed | ### Describe the bug
I'm aware of issue #5695.
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So I:
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | true | 2024-01-16T02:03:09Z | 2024-01-27T18:26:33Z | 2024-01-26T02:28:32Z | kopyl | NONE | null | null | 14 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6595 | false | [
"Hi ! I think the issue comes from the \"float16\" features that are not supported yet in Parquet\r\n\r\nFeel free to open an issue in `pyarrow` about this. In the meantime, I'd encourage you to use \"float32\" for your \"pooled_prompt_embeds\" and \"prompt_embeds\" features.\r\n\r\nYou can cast them to \"float32\"... |
2,082,748,275 | 6,594 | IterableDataset sharding logic needs improvement | open | ### Describe the bug
The sharding of IterableDatasets with respect to distributed and dataloader worker processes appears problematic, with significant performance traps and inconsistencies between distributed training processes and worker processes.
Splitting across num_workers (per train process loader processes) and... | true | 2024-01-15T22:22:36Z | 2024-10-15T06:27:13Z | null | rwightman | NONE | null | null | 1 | 3 | 3 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6594 | false | [
"I do not know is it the same probelm as mine. I think the num_workers should a value of process number for one dataloader mapped to one card, or the total number of processes for all multiple cards. \r\nbut when I set the num_workers larger then the count of training split files, it will report num_workers ... |
2,082,410,257 | 6,592 | Logs are delayed when doing .map when `docker logs` | closed | ### Describe the bug
When I run my SD training in a Docker image and then listen to logs like `docker logs train -f`, the progress bar is delayed.
It's updating every few percent.
When you have a large dataset that has to be mapped (like 1+ million samples), it's crucial to see the updates in real-time, not every co... | true | 2024-01-15T17:05:21Z | 2024-02-12T17:35:21Z | 2024-02-12T17:35:21Z | kopyl | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6592 | false | [
"Hi! `tqdm` doesn't work well in non-interactive environments, so there isn't much we can do about this. It's best to [disable it](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/utilities#datasets.disable_progress_bars) in such environments and instead use logging to track progress."
] |
2,082,378,957 | 6,591 | The datasets models housed in Dropbox can't support a lot of users downloading them | closed | ### Describe the bug
I'm using this dataset:
```
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
And it seems that sometimes, presumably when a lot of users are accessing the same resources, the Dropbox host fails:
`raise ConnectionError(... | true | 2024-01-15T16:43:38Z | 2024-01-22T23:18:09Z | 2024-01-22T23:18:09Z | RDaneelOlivav | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6591 | false | [
"Hi! Indeed, Dropbox is not a reliable host. I've just merged https://huggingface.co/datasets/PolyAI/minds14/discussions/24 to fix this by hosting the data files inside the repo."
] |
2,082,000,084 | 6,590 | Feature request: Multi-GPU dataset mapping for SDXL training | open | ### Feature request
We need to speed up SDXL dataset pre-processing. Please make it possible to use multiple GPUs for the [official SDXL trainer](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) :)
### Motivation
Pre-computing 3 million images takes around ... | true | 2024-01-15T13:06:06Z | 2024-01-15T13:07:07Z | null | kopyl | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6590 | false | [] |
2,081,358,619 | 6,589 | After `2.16.0` version, there are `PermissionError` when users use shared cache_dir | closed | ### Describe the bug
- We use shared `cache_dir` using `HF_HOME="{shared_directory}"`
- After `datasets` version 2.16.0, datasets uses the `filelock` package for file locking (#6445)
- But the `filelock` package makes `.lock` files with `644` permissions
- Dataset is not available to other users except the user who created the ... | true | 2024-01-15T06:46:27Z | 2024-02-02T07:55:38Z | 2024-01-30T15:28:38Z | minhopark-neubla | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6589 | false | [
"We'll do a new release of `datasets` in the coming days with a fix !",
"@lhoestq Thank you very much!"
] |
2,081,284,253 | 6,588 | fix os.listdir return name is empty string | closed | ### Describe the bug
`xlistdir` (the overloaded `os.listdir`) returns an empty string as the name.
### Steps to reproduce the bug
```python
from datasets.download.streaming_download_manager import xjoin
from datasets.download.streaming_download_manager import xlistdir
config = DownloadConfig(storage_options=options)
manger = Str... | true | 2024-01-15T05:34:36Z | 2024-01-24T10:08:29Z | 2024-01-24T10:08:29Z | d710055071 | CONTRIBUTOR | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6588 | false | [] |
2,080,348,016 | 6,587 | Allow concatenation of datasets with mixed structs | closed | Fixes #6466
The idea is to do a recursive check for structs. PyArrow handles it well enough.
For a demo you can do:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]})
ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'e... | true | 2024-01-13T15:33:20Z | 2024-02-15T15:20:06Z | 2024-02-08T14:38:32Z | Dref360 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6587 | 2024-02-08T14:38:32Z | 3 | 2 | 0 | 2 | false | false | [] | https://github.com/huggingface/datasets/pull/6587 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"friendly bump",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<... |
2,079,192,651 | 6,586 | keep more info in DatasetInfo.from_merge #6585 | closed | * try not to merge DatasetInfos if they're equal
* fixes losing DatasetInfo during parallel Dataset.map | true | 2024-01-12T16:08:16Z | 2024-01-26T15:59:35Z | 2024-01-26T15:53:28Z | JochenSiegWork | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6586 | 2024-01-26T15:53:28Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6586 | true | [
"@JochenSiegWork fyi, that seems to also affect the `trainer.push_to_hub()` method, which I guess also needs to parse that DatasetInfo from the `kwargs` used by `push_to_hub`.\r\nThere is short discussion about it [here](https://github.com/huggingface/blog/issues/1623).\r\nWould be great if you can check if your PR... |
2,078,874,005 | 6,585 | losing DatasetInfo in Dataset.map when num_proc > 1 | open | ### Describe the bug
Hello and thanks for developing this package!
When I process a Dataset with the map function using multiple processes, some set attributes of the DatasetInfo get lost and are None in the resulting Dataset.
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetInfo... | true | 2024-01-12T13:39:19Z | 2024-01-12T14:08:24Z | null | JochenSiegWork | CONTRIBUTOR | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6585 | false | [
"Hi ! This issue comes from the fact that `map()` with `num_proc>1` shards the dataset in multiple chunks to be processed (one per process) and merges them. The DatasetInfos of each chunk are then merged together, but for some fields like `dataset_name` it's not been implemented and default to None.\r\n\r\nThe Data... |
2,078,454,878 | 6,584 | np.fromfile not supported | open | How to do np.fromfile to use it like np.load
```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
import numpy as np
if hasattr(filepath_or_buffer, "read"):
return np.fromfile(filepath_or_buffer, *args, **kwargs)
else:
... | true | 2024-01-12T09:46:17Z | 2024-01-15T05:20:50Z | null | d710055071 | CONTRIBUTOR | null | null | 6 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6584 | false | [
"@lhoestq\r\nCan you provide me with some ideas?",
"Hi ! What's the error ?",
"@lhoestq \r\n```\r\nTraceback (most recent call last):\r\n File \"/home/dongzf/miniconda3/envs/dataset_ai/lib/python3.11/runpy.py\", line 198, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n ^^... |
2,077,049,491 | 6,583 | remove eli5 test | closed | since the dataset is defunct | true | 2024-01-11T16:05:20Z | 2024-01-11T16:15:34Z | 2024-01-11T16:09:24Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6583 | 2024-01-11T16:09:24Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6583 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6583). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,076,072,101 | 6,582 | Fix for Incorrect ex_iterable used with multi num_worker | closed | Corrects an issue where `self._ex_iterable` was erroneously used instead of `ex_iterable`, when both Distributed Data Parallel (DDP) and multi num_worker are used concurrently. This improper usage led to the generation of incorrect `shards_indices`, subsequently causing issues with the control flow responsible for work... | true | 2024-01-11T08:49:43Z | 2024-03-01T19:09:14Z | 2024-03-01T19:02:33Z | kq-chen | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6582 | 2024-03-01T19:02:33Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6582 | true | [
"A toy example to reveal the bug.\r\n\r\n```python\r\n\"\"\"\r\nDATASETS_VERBOSITY=debug torchrun --nproc-per-node 2 main.py \r\n\"\"\"\r\nimport torch.utils.data\r\nimport torch.distributed\r\nimport datasets.distributed\r\nimport datasets\r\n\r\n# num shards = 4\r\nshards = [(0, 100), (100, 200), (200, 300), (300... |
2,075,919,265 | 6,581 | fix os.listdir return name is empty string | closed | fix #6588
`xlistdir` returns an empty string as the name
for example:
`
from datasets.download.streaming_download_manager import xjoin
from datasets.download.streaming_download_manager import xlistdir
config = DownloadConfig(storage_options=options)
manger = StreamingDownloadManager("ILSVRC2012",download_config=config... | true | 2024-01-11T07:10:55Z | 2024-01-24T10:14:43Z | 2024-01-24T10:08:28Z | d710055071 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6581 | 2024-01-24T10:08:28Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6581 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6581). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"\r\nObj [\"name\"] ends with \"/\"",
"@lhoestq \r\n\r\nhello,\r\nCan you help me chec... |
2,075,645,042 | 6,580 | dataset cache only stores one config of the dataset in parquet dir, and uses that for all other configs resulting in showing same data in all configs. | closed | ### Describe the bug
ds = load_dataset("ai2_arc", "ARC-Easy"), i have tried to force redownload, delete cache and changing the cache dir.
### Steps to reproduce the bug
dataset = []
dataset_name = "ai2_arc"
possible_configs = [
'ARC-Challenge',
'ARC-Easy'
]
for config in possible_configs:
data... | true | 2024-01-11T03:14:18Z | 2024-01-20T12:46:16Z | 2024-01-20T12:46:16Z | kartikgupta321 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6580 | false | [] |
2,075,407,473 | 6,579 | Unable to load `eli5` dataset with streaming | closed | ### Describe the bug
Unable to load `eli5` dataset with streaming.
### Steps to reproduce the bug
This fails with FileNotFoundError: https://files.pushshift.io/reddit/submissions
```
from datasets import load_dataset
load_dataset("eli5", streaming=True)
```
This works correctly.
```
from datasets import lo... | true | 2024-01-10T23:44:20Z | 2024-01-11T09:19:18Z | 2024-01-11T09:19:17Z | haok1402 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6579 | false | [
"Hi @haok1402, I have created an issue in the Discussion tab of the corresponding dataset: https://huggingface.co/datasets/eli5/discussions/7\r\nLet's continue the discussion there!"
] |
2,074,923,321 | 6,578 | Faster webdataset streaming | closed | requests.get(..., stream=True) is faster than using HTTP range requests when streaming large TAR files
It can be enabled using block_size=0 in fsspec
cc @rwightman | true | 2024-01-10T18:18:09Z | 2024-01-30T18:46:02Z | 2024-01-30T18:39:51Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6578 | 2024-01-30T18:39:51Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6578 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6578). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I added faster streaming support using streaming Requests instances in `huggingface_hub... |
2,074,790,848 | 6,577 | 502 Server Errors when streaming large dataset | closed | ### Describe the bug
When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hub (~3TB), I often encounter 502 Server Errors, seemingly at random, during streaming:
```
huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: http... | true | 2024-01-10T16:59:36Z | 2024-02-12T11:46:03Z | 2024-01-15T16:05:44Z | sanchit-gandhi | CONTRIBUTOR | null | null | 6 | 0 | 0 | 0 | null | false | [
"streaming"
] | https://github.com/huggingface/datasets/issues/6577 | false | [
"cc @mariosasko @lhoestq ",
"Hi! We should be able to avoid this error by retrying to read the data when it happens. I'll open a PR in `huggingface_hub` to address this.",
"Thanks for the fix @mariosasko! Just wondering whether \"500 error\" should also be excluded? I got these errors overnight:\r\n\r\n```\r\nh... |
2,073,710,124 | 6,576 | document page 404 not found after redirection | closed | ### Describe the bug
The redirected page returns 404 Not Found.
### Steps to reproduce the bug
1. In this tutorial: https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt
original md: https://github.com/huggingface/course/blob/2c733c2246b8b7e0e6f19a9e5d15bb12df43b2a3/chapters/en/chapter5/4.mdx#L49
`... | true | 2024-01-10T06:48:14Z | 2024-01-17T14:01:31Z | 2024-01-17T14:01:31Z | annahung31 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6576 | false | [
"Thanks for reporting! I've opened a PR with a fix."
] |
2,072,617,406 | 6,575 | [IterableDataset] Fix `drop_last_batch`in map after shuffling or sharding | closed | It was not taken into account e.g. when passing to a DataLoader with num_workers>0
Fix https://github.com/huggingface/datasets/issues/6565 | true | 2024-01-09T15:35:31Z | 2024-01-11T16:16:54Z | 2024-01-11T16:10:30Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6575 | 2024-01-11T16:10:30Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6575 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6575). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,072,579,549 | 6,574 | Fix tests based on datasets that used to have scripts | closed | ...now that `squad` and `paws` don't have a script anymore | true | 2024-01-09T15:16:16Z | 2024-01-09T16:11:33Z | 2024-01-09T16:05:13Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6574 | 2024-01-09T16:05:13Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6574 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6574). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,072,553,951 | 6,573 | [WebDataset] Audio support and bug fixes | closed | - Add audio support
- Fix an issue where user-provided features with additional fields are not taken into account
Close https://github.com/huggingface/datasets/issues/6569 | true | 2024-01-09T15:03:04Z | 2024-01-11T16:17:28Z | 2024-01-11T16:11:04Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6573 | 2024-01-11T16:11:04Z | 2 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6573 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6573). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,072,384,281 | 6,572 | Adding option for multipart achive download | closed | Right now we can only download multiple separate archives or a single file archive, but not multipart archives, such as those produced by `tar --multi-volume`. This PR allows for downloading and extraction of archives split into multiple parts.
With the new `multi_part` field of the `DownloadConfig` set, the downloa... | true | 2024-01-09T13:35:44Z | 2024-02-25T08:13:01Z | 2024-02-25T08:13:01Z | jpodivin | NONE | https://github.com/huggingface/datasets/pull/6572 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6572 | true | [
"On closer examination, this appears to be unnecessary. "
] |
2,072,111,000 | 6,571 | Make DatasetDict.column_names return a list instead of dict | open | Currently, `DatasetDict.column_names` returns a dict, with each split name as keys and the corresponding list of column names as values.
However, by construction, all splits have the same column names.
I think it makes more sense to return a single list with the column names, which is the same for all the split k... | true | 2024-01-09T10:45:17Z | 2024-01-09T10:45:17Z | null | albertvillanova | MEMBER | null | null | 0 | 1 | 1 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6571 | false | [] |
2,071,805,265 | 6,570 | No online docs for 2.16 release | closed | We do not have the online docs for the latest minor release 2.16 (2.16.0 nor 2.16.1).
In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index
But a new issue came up :( | true | 2024-01-08T08:03:58Z | 2024-01-13T04:53:04Z | null | kopyl | NONE | null | null | 6 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6568 | false | [
"Seems like I just used the old code which did not have `keep_in_memory=True` argument, sorry.\r\n\r\nAlthough i encountered a different problem – at 97% my python process just hung for around 11 minutes with no logs (when running dataset.map without `keep_in_memory=True` over around 3 million of dataset samples)..... |
2,069,808,842 | 6,567 | AttributeError: 'str' object has no attribute 'to' | closed | ### Describe the bug
```
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
8 report_to="wandb")
9
---> 10 trainer =... | true | 2024-01-08T06:40:21Z | 2024-01-08T11:56:19Z | 2024-01-08T10:03:17Z | andysingal | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6567 | false | [
"I think you are reporting an issue with the `transformers` library. Note this is the repository of the `datasets` library. I recommend that you open an issue in their repository: https://github.com/huggingface/transformers/issues\r\n\r\nEDIT: I have not the rights to transfer the issue\r\n~~I am transferring your ... |
2,069,495,429 | 6,566 | I train controlnet_sdxl in bf16 datatype, got unsupported ERROR in datasets | closed | ### Describe the bug
```
Traceback (most recent call last):
File "train_controlnet_sdxl.py", line 1252, in <module>
main(args)
File "train_controlnet_sdxl.py", line 1013, in main
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
File "/home/mini... | true | 2024-01-08T02:37:03Z | 2024-06-02T14:24:39Z | 2024-05-17T09:40:14Z | HelloWorldBeginner | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6566 | false | [
"I also see the same error and get passed it by casting that line to float. \r\n\r\nso `for x in obj.detach().cpu().numpy()` becomes `for x in obj.detach().to(torch.float).cpu().numpy()`\r\n\r\nI got the idea from [this ](https://github.com/kohya-ss/sd-webui-additional-networks/pull/128/files) PR where someone was... |
2,068,939,670 | 6,565 | `drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader | closed | ### Describe the bug
Scenario:
- Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't ha... | true | 2024-01-07T02:46:50Z | 2025-03-08T09:46:05Z | 2024-01-11T16:10:31Z | naba89 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6565 | false | [
"My current workaround this issue is to return `None` in the second element and then filter out samples which have `None` in them.\r\n\r\n```python\r\ndef merge_samples(batch):\r\n if len(batch['a']) == 1:\r\n batch['c'] = [batch['a'][0]]\r\n batch['d'] = [None]\r\n else:\r\n batch['c'] ... |
2,068,893,194 | 6,564 | `Dataset.filter` missing `with_rank` parameter | closed | ### Describe the bug
This issue should be reopened: https://github.com/huggingface/datasets/issues/6435
When i try to pass `with_rank` to `Dataset.filter()`, i get this:
`Dataset.filter() got an unexpected keyword argument 'with_rank'`
### Steps to reproduce the bug
Run notebook:
https://colab.research.google.com... | true | 2024-01-06T23:48:13Z | 2024-01-29T16:36:55Z | 2024-01-29T16:36:54Z | kopyl | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6564 | false | [
"Thanks for reporting! I've opened a PR with a fix",
"@mariosasko thank you very much :)"
] |
2,068,302,402 | 6,563 | `ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py) | closed | ### Describe the bug
Yep, it's not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.
```text
+ python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_... | true | 2024-01-06T02:28:54Z | 2024-03-14T02:59:42Z | 2024-01-06T16:13:27Z | wasertech | NONE | null | null | 7 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6563 | false | [
"@Wauplin Do you happen to know what's up?",
"<del>Installing `datasets` from `main` did the trick so I guess it will be fixed in the next release.\r\n\r\nNVM https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/utils/info_utils.py#L5",
"@wasertech upgrading `huggin... |
2,067,904,504 | 6,562 | datasets.DownloadMode.FORCE_REDOWNLOAD use cache to download dataset features with load_dataset function | open | ### Describe the bug
I have updated my dataset by adding a new feature and pushed it to the Hub. When I want to download it on my machine, which contains the old version, using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`, I get an error (pasted below).
Seems that... | true | 2024-01-05T19:10:25Z | 2024-01-05T19:10:25Z | null | LsTam91 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6562 | false | [] |
2,067,404,951 | 6,561 | Document YAML configuration with "data_dir" | open | See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference | true | 2024-01-05T14:03:33Z | 2024-01-05T14:06:18Z | null | severo | COLLABORATOR | null | null | 1 | 0 | 0 | 0 | null | false | [
"documentation"
] | https://github.com/huggingface/datasets/issues/6561 | false | [
"In particular, I would like to have an example of how to replace the following configuration (from https://huggingface.co/docs/hub/datasets-manual-configuration#splits)\r\n\r\n```\r\n---\r\nconfigs:\r\n- config_name: default\r\n data_files:\r\n - split: train\r\n path: \"data/*.csv\"\r\n - split: test\r\n ... |
2,065,637,625 | 6,560 | Support Video | closed | ### Feature request
HF datasets are awesome in supporting text and images. It would be great to see such support for videos :)
### Motivation
Video generation :)
### Your contribution
Will probably be limited to raising this feature request ;) | true | 2024-01-04T13:10:58Z | 2024-08-23T09:51:27Z | 2024-08-23T09:51:27Z | yuvalkirstain | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"duplicate",
"enhancement"
] | https://github.com/huggingface/datasets/issues/6560 | false | [
"duplicate of #5225"
] |
2,065,118,332 | 6,559 | Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default'] | closed | ### Describe the bug
The Python script is:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
the script su... | true | 2024-01-04T07:04:48Z | 2024-04-03T10:40:53Z | 2024-01-05T01:26:25Z | zhulinJulia24 | NONE | null | null | 8 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6559 | false | [
"Hi ! The \"allenai--c4\" config doesn't exist (this naming schema comes from old versions of `datasets`)\r\n\r\nYou can load it this way instead:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ncache_dir = 'path/to/your/cache/directory'\r\ndataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-t... |
2,064,885,984 | 6,558 | OSError: image file is truncated (1 bytes not processed) #28323 | closed | ### Describe the bug
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[24], line 28
23 return example
25 # Filter the dataset
26 # filtered_dataset = dataset.filter(contains_number... | true | 2024-01-04T02:15:13Z | 2024-02-21T00:38:12Z | 2024-02-21T00:38:12Z | andysingal | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6558 | false | [
"You can add \r\n\r\n```python\r\nfrom PIL import ImageFile\r\nImageFile.LOAD_TRUNCATED_IMAGES = True\r\n```\r\n\r\nafter the imports to be able to read truncated images."
] |
2,064,341,965 | 6,557 | Support standalone yaml | closed | see (internal) https://huggingface.slack.com/archives/C02V51Q3800/p1703885853581679 | true | 2024-01-03T16:47:35Z | 2024-01-11T17:59:51Z | 2024-01-11T17:53:42Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6557 | 2024-01-11T17:53:42Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6557 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6557). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq \r\nhello\r\nI think it should be defined in config.py\r\nDATASET_ README_ FIL... |
2,064,018,208 | 6,556 | Fix imagefolder with one image | closed | A dataset repository with one image and one metadata file was considered a JSON dataset instead of an ImageFolder dataset. This is because we pick the dataset type with the most compatible data file extensions present in the repository and it results in a tie in this case.
e.g. for https://huggingface.co/datasets/mu... | true | 2024-01-03T13:13:02Z | 2024-02-12T21:57:34Z | 2024-01-09T13:06:30Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6556 | 2024-01-09T13:06:30Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6556 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6556). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Fixed in dataset viewer: https://huggingface.co/datasets/multimodalart/repro_1_image\r\... |
2,063,841,286 | 6,555 | Do not use Parquet exports if revision is passed | closed | Fix #6554. | true | 2024-01-03T11:33:10Z | 2024-02-02T10:41:33Z | 2024-02-02T10:35:28Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6555 | 2024-02-02T10:35:28Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6555 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6555). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"As shared on slack, `HubDatasetModuleFactoryWithParquetExport` raises a `DatasetsServer... |
2,063,839,916 | 6,554 | Parquet exports are used even if revision is passed | closed | We should not use Parquet exports if `revision` is passed.
I think this is a regression. | true | 2024-01-03T11:32:26Z | 2024-02-02T10:35:29Z | 2024-02-02T10:35:29Z | albertvillanova | MEMBER | null | null | 1 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6554 | false | [
"I don't think this bug is a thing ? Do you have some code that leads to this issue ?"
] |
2,063,474,183 | 6,553 | Cannot import name 'load_dataset' from .... module ‘datasets’ | closed | ### Describe the bug
Use `python -m pip install datasets` to install.
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
it doesn't work
### Environment info
datasets version==2.15.0
python == 3.10.12
Linux version: I don't know | true | 2024-01-03T08:18:21Z | 2024-02-21T00:38:24Z | 2024-02-21T00:38:24Z | ciaoyizhen | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6553 | false | [
"I don't know My conpany conputer cannot work. but in my computer, it work?",
"Do you have a folder in your working directory called datasets?"
] |
2,063,157,187 | 6,552 | Loading a dataset from Google Colab hangs at "Resolving data files". | closed | ### Describe the bug
Hello,
I'm trying to load a dataset from Google Colab but the process hangs at `Resolving data files`:

It is happening when the `_get_origin_metadata` definition is invoked:
```python
d... | true | 2024-01-03T02:18:17Z | 2024-01-08T10:09:04Z | 2024-01-08T10:09:04Z | KelSolaar | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6552 | false | [
"This bug comes from the `huggingface_hub` library, see: https://github.com/huggingface/huggingface_hub/issues/1952\r\n\r\nA fix is provided at https://github.com/huggingface/huggingface_hub/pull/1953. Feel free to install `huggingface_hub` from this PR, or wait for it to be merged and the new version of `huggingfa... |
2,062,768,400 | 6,551 | Fix parallel downloads for datasets without scripts | closed | Enable parallel downloads using multiprocessing when `num_proc` is passed to `load_dataset`.
It was enabled for datasets with scripts already (if they passed lists to `dl_manager.download`) but not for no-script datasets (we pass dicts {split: [list of files]} to `dl_manager.download` for those ones).
I fixed thi... | true | 2024-01-02T18:06:18Z | 2024-01-06T20:14:57Z | 2024-01-03T13:19:48Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6551 | 2024-01-03T13:19:47Z | 4 | 2 | 0 | 2 | false | false | [] | https://github.com/huggingface/datasets/pull/6551 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6551). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,062,556,493 | 6,550 | Multi gpu docs | closed | after discussions in https://github.com/huggingface/datasets/pull/6415 | true | 2024-01-02T15:11:58Z | 2024-01-31T13:45:15Z | 2024-01-31T13:38:59Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6550 | 2024-01-31T13:38:59Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6550 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6550). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks @lhoestq . This is a very important fix for code to run on multiple GPUs. Otherw... |
2,062,420,259 | 6,549 | Loading from hf hub with clearer error message | open | ### Feature request
Shouldn't this kinda work ?
```
Dataset.from_json("hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json")
```
I got an error
```
File ~/miniconda3/envs/datatrove/lib/python3.10/site-packages/datasets/data_files.py:380, in resolve_pattern(pattern, base_path, al... | true | 2024-01-02T13:26:34Z | 2024-01-02T14:06:49Z | null | thomwolf | MEMBER | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6549 | false | [
"Maybe we can add a helper message like `Maybe try again using \"hf://path/without/resolve\"` if the path contains `/resolve/` ?\r\n\r\ne.g.\r\n\r\n```\r\nFileNotFoundError: Unable to find 'hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json'\r\nIt looks like you used parts of the ... |
2,061,047,984 | 6,548 | Skip if a dataset has issues | open | ### Describe the bug
Hello everyone,
I'm using **load_dataset** from **huggingface** to download the datasets, and I'm facing an issue: the download starts but it reaches some state and then fails with the following error:
Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10... | true | 2023-12-31T12:41:26Z | 2024-01-02T10:33:17Z | null | hadianasliwa | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6548 | false | [
"It looks like a transient DNS issue. It should work fine now if you try again.\r\n\r\nThere is no parameter in load_dataset to skip failed downloads. In your case it would have skipped every single subsequent download until the DNS issue was resolved anyway."
] |
2,060,796,927 | 6,547 | set dev version | closed | true | 2023-12-30T16:47:17Z | 2023-12-30T16:53:38Z | 2023-12-30T16:47:27Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6547 | 2023-12-30T16:47:27Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6547 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,060,796,369 | 6,546 | Release: 2.16.1 | closed | true | 2023-12-30T16:44:51Z | 2023-12-30T16:52:07Z | 2023-12-30T16:45:52Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6546 | 2023-12-30T16:45:52Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6546 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6546). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,060,789,507 | 6,545 | `image` column not automatically inferred if image dataset only contains 1 image | closed | ### Describe the bug
By default, the standard Image Dataset maps `file_name` to `image` when loading an Image Dataset.
However, if the dataset contains only 1 image, this mapping does not take place.
### Steps to reproduce the bug
Input
(dataset with one image `multimodalart/repro_1_image`)
```py
from data... | true | 2023-12-30T16:17:29Z | 2024-01-09T13:06:31Z | 2024-01-09T13:06:31Z | apolinario | NONE | null | null | 0 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6545 | false | [] |
2,060,782,594 | 6,544 | Fix custom configs from script | closed | We should not use the parquet export when the user is passing config_kwargs
I also fixed a regression that would disallow creating a custom config when a dataset has multiple predefined configs
fix https://github.com/huggingface/datasets/issues/6533 | true | 2023-12-30T15:51:25Z | 2024-01-02T11:02:39Z | 2023-12-30T16:09:49Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6544 | 2023-12-30T16:09:49Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6544 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6544). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,060,776,174 | 6,543 | Fix dl_manager.extract returning FileNotFoundError | closed | The dl_manager base path is remote (e.g. a hf:// path), so local cached paths should be passed as absolute paths.
This could happen if users provide a relative path as `cache_dir`
fix https://github.com/huggingface/datasets/issues/6536 | true | 2023-12-30T15:24:50Z | 2023-12-30T16:00:06Z | 2023-12-30T15:53:59Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6543 | 2023-12-30T15:53:59Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6543 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6543). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,059,198,575 | 6,542 | Datasets : wikipedia 20220301.en error | closed | ### Describe the bug
When I used load_dataset to download this data set, the following error occurred. The main problem was that the target data did not exist.
### Steps to reproduce the bug
1. I tried downloading directly.
```python
wiki_dataset = load_dataset("wikipedia", "20220301.en")
```
An exception occurre... | true | 2023-12-29T08:34:51Z | 2024-01-02T13:21:06Z | 2024-01-02T13:20:30Z | ppx666 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6542 | false | [
"Hi ! We now recommend using the `wikimedia/wikipedia` dataset, can you try loading this one instead ?\r\n\r\n```python\r\nwiki_dataset = load_dataset(\"wikimedia/wikipedia\", \"20231101.en\")\r\n```",
"This bug has been fixed in `2.16.1` thanks to https://github.com/huggingface/datasets/pull/6544, feel free to ... |
2,058,983,826 | 6,541 | Dataset not loading successfully. | closed | ### Describe the bug
When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
I also filed this issue in the transformers library; please check it out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to reproduce the bug
## Reproduction
... | true | 2023-12-29T01:35:47Z | 2024-01-17T00:40:46Z | 2024-01-17T00:40:45Z | hisushanta | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6541 | false | [
"This is a problem with your environment. You should be able to fix it by upgrading `numpy` based on [this](https://github.com/numpy/numpy/issues/23570) issue.",
"Bro I already update numpy package.",
"Then, this shouldn't throw an error on your machine:\r\n```python\r\nimport numpy\r\nnumpy._no_nep50_warning\r... |
2,058,965,157 | 6,540 | Extreme inefficiency for `save_to_disk` when merging datasets | open | ### Describe the bug
Hi, I tried to merge a total of 22M sequences of data, where each sequence has a maximum length of 2000. I found that merging these datasets and then calling `save_to_disk` is extremely slow because of flattening the indices. Wondering if you have any suggestions or guidance on this. Thank you very much!
###... | true | 2023-12-29T00:44:35Z | 2023-12-30T15:05:48Z | null | KatarinaYuan | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6540 | false | [
"Concatenating datasets doesn't create any indices mapping - so flattening indices is not needed (unless you shuffle the dataset).\r\nCan you share the snippet of code you are using to merge your datasets and save them to disk ?"
] |
2,058,493,960 | 6,539 | 'Repo card metadata block was not found' when loading a pragmeval dataset | open | ### Describe the bug
I can't load dataset subsets of 'pragmeval'.
The funny thing is I ran the dataset author's [colab notebook](https://colab.research.google.com/drive/1sg--LF4z7XR1wxAOfp0-3d4J6kQ9nj_A?usp=sharing) and it works just fine. I tried to install exactly the same packages that are installed on colab usi... | true | 2023-12-28T14:18:25Z | 2023-12-28T14:18:37Z | null | lambdaofgod | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6539 | false | [] |
2,057,377,630 | 6,538 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | closed | ### Describe the bug
While importing from the packages below, I get the error.
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | true | 2023-12-27T13:31:16Z | 2024-01-03T10:06:47Z | 2024-01-03T10:04:58Z | Sonali-Behera-TRT | NONE | null | null | 15 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6538 | false | [
"Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error",
"I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on... |
2,057,132,173 | 6,537 | Adding support for netCDF (*.nc) files | open | ### Feature request
netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`.
### Motivation
When uploading *.nc files onto Huggingface Hub throu... | true | 2023-12-27T09:27:29Z | 2023-12-27T20:46:53Z | null | shermansiu | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6537 | false | [
"Related to #3113 ",
"Conceptually, we can use xarray to load the netCDF file, then xarray -> pandas -> pyarrow.",
"I'd still need to verify that such a conversion would be lossless, especially for multi-dimensional data."
] |
2,056,863,239 | 6,536 | datasets.load_dataset raises FileNotFoundError for datasets==2.16.0 | closed | ### Describe the bug
Seems `datasets.load_dataset` raises FileNotFoundError for some hub datasets with the latest `datasets==2.16.0`
### Steps to reproduce the bug
For example `pip install datasets==2.16.0`
then
```python
import datasets
datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_di... | true | 2023-12-27T03:15:48Z | 2023-12-30T18:58:04Z | 2023-12-30T15:54:00Z | ArvinZhuang | NONE | null | null | 2 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6536 | false | [
"Hi ! Thanks for reporting\r\n\r\nThis is a bug in 2.16.0 for some datasets when `cache_dir` is a relative path. I opened https://github.com/huggingface/datasets/pull/6543 to fix this",
"We just released 2.16.1 with a fix:\r\n\r\n```\r\npip install -U datasets\r\n```"
] |
2,056,264,339 | 6,535 | IndexError: Invalid key: 47682 is out of bounds for size 0 while using PEFT | open | ### Describe the bug
I am trying to fine-tune the t5 model on the paraphrasing task. While running the same code without
`model = get_peft_model(model, config)`
the model trains without any issues. However, using the model returned from `get_peft_model` raises the following error due to datasets:
IndexError: Inv... | true | 2023-12-26T10:14:33Z | 2024-02-05T08:42:31Z | null | MahavirDabas18 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6535 | false | [
"@sabman @pvl @kashif @vigsterkr ",
"This is surely the same issue as https://discuss.huggingface.co/t/indexerror-invalid-key-16-is-out-of-bounds-for-size-0/14298/25 that comes from the `transformers` `Trainer`. You should add `remove_unused_columns=False` to `TrainingArguments`\r\n\r\nAlso check your logs: the `... |
2,056,002,548 | 6,534 | How to configure multiple folders in the same zip package | open | How should I write "config" in readme when all the data, such as train test, is in a zip file
train floder and test floder in data.zip | true | 2023-12-26T03:56:20Z | 2023-12-26T06:31:16Z | null | d710055071 | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6534 | false | [
"@albertvillanova"
] |
2,055,929,101 | 6,533 | ted_talks_iwslt | Error: Config name is missing | closed | ### Describe the bug
Running `load_dataset` with the newest `datasets` library, as below, on ted_talks_iwslt with year-pair data will throw the error "Config name is missing"
see also:
https://huggingface.co/datasets/ted_talks_iwslt/discussions/3
likely caused by #6493, where the `and not config_kwargs` part... | true | 2023-12-26T00:38:18Z | 2023-12-30T18:58:21Z | 2023-12-30T16:09:50Z | rayliuca | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6533 | false | [
"Hi ! Thanks for reporting. I opened https://github.com/huggingface/datasets/pull/6544 to fix this",
"We just released 2.16.1 with a fix:\r\n\r\n```\r\npip install -U datasets\r\n```"
] |
2,055,631,201 | 6,532 | [Feature request] Indexing datasets by a custom-defined id field to enable random access to dataset items via the id | open | ### Feature request
Some datasets may contain an id-like field, for example the `id` field in [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and the `_id` field in [BeIR/dbpedia-entity](https://huggingface.co/datasets/BeIR/dbpedia-entity). HF datasets support efficient random access via r... | true | 2023-12-25T11:37:10Z | 2025-05-05T13:25:24Z | null | Yu-Shi | NONE | null | null | 10 | 7 | 7 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6532 | false | [
"You can simply use a python dict as index:\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> ds = load_dataset(\"BeIR/dbpedia-entity\", \"corpus\", split=\"corpus\")\r\n>>> index = {key: idx for idx, key in enumerate(ds[\"_id\"])}\r\n>>> ds[index[\"<dbpedia:Pikachu>\"]]\r\n{'_id': '<dbpedia:Pikachu>... |
2,055,201,605 | 6,531 | Add polars compatibility | closed | Hey there,
I've just finished adding support to convert and format to `polars.DataFrame`. This was in response to the open issue about integrating Polars [#3334](https://github.com/huggingface/datasets/issues/3334). Datasets can be switched to Polars format via `Dataset.set_format("polars")`. I've also included `to_... | true | 2023-12-24T20:03:23Z | 2024-03-08T19:29:25Z | 2024-03-08T15:22:58Z | psmyth94 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6531 | 2024-03-08T15:22:58Z | 7 | 8 | 2 | 4 | false | false | [] | https://github.com/huggingface/datasets/pull/6531 | true | [
"Hi ! thanks for adding polars support :)\r\n\r\nYou added from_polars in arrow_dataset.py but not to_polars, is this on purpose ?\r\n\r\nAlso no need to touch table.py imo, which is for arrow-only logic (tables are just wrappers of pyarrow.Table with the exact same methods + optimization to existing methods + sepa... |
2,054,817,609 | 6,530 | Impossible to save a mapped dataset to disk | open | ### Describe the bug
I want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).
After... | true | 2023-12-23T15:18:27Z | 2023-12-24T09:40:30Z | null | kopyl | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6530 | false | [
"I solved it with `train_dataset.with_format(None)`\r\nBut then faced some more issues (which i later solved too).\r\n\r\nHuggingface does not seem to care, so I do. Here is an updated training script which saves a pre-processed (mapped) dataset to your local directory if you specify `--save_precomputed_data_dir=DI... |
2,054,209,449 | 6,529 | Impossible to only download a test split | open | I've spent a significant amount of time trying to locate the split object inside my _split_generators() custom function.
Then after diving [in the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558) I realized that `download_and_prepare` is executed b... | true | 2023-12-22T16:56:32Z | 2024-02-02T00:05:04Z | null | ysig | NONE | null | null | 2 | 1 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6529 | false | [
"The only way right now is to load with streaming=True",
"This feature has been proposed for a long time. I'm looking forward to the implementation. On clusters `streaming=True` is not an option since we do not have Internet on compute nodes. See: https://github.com/huggingface/datasets/discussions/1896#discussio... |
2,053,996,494 | 6,528 | set dev version | closed | true | 2023-12-22T14:23:18Z | 2023-12-22T14:31:42Z | 2023-12-22T14:25:34Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6528 | 2023-12-22T14:25:34Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6528 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6528). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,053,966,748 | 6,527 | Release: 2.16.0 | closed | true | 2023-12-22T13:59:56Z | 2023-12-22T14:24:12Z | 2023-12-22T14:17:55Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6527 | 2023-12-22T14:17:55Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6527 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6527). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,053,726,451 | 6,526 | Preserve order of configs and splits when using Parquet exports | closed | Preserve order of configs and splits, as defined in dataset infos.
Fix #6521. | true | 2023-12-22T10:35:56Z | 2023-12-22T11:42:22Z | 2023-12-22T11:36:14Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6526 | 2023-12-22T11:36:14Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6526 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6526). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,053,119,357 | 6,525 | BBox type | closed | see [internal discussion](https://huggingface.slack.com/archives/C02EK7C3SHW/p1703097195609209)
Draft to get some feedback on a possible `BBox` feature type that can be used to get object-detection bounding-box data in one format or another.
```python
>>> from datasets import load_dataset, BBox
>>> ds = load_... | true | 2023-12-21T22:13:27Z | 2024-01-11T06:34:51Z | 2023-12-21T22:39:27Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6525 | null | 2 | 0 | 0 | 0 | true | false | [] | https://github.com/huggingface/datasets/pull/6525 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6525). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"closing in favor of other ideas that would not involve any typing"
] |
2,053,076,311 | 6,524 | Streaming the Pile: Missing Files | closed | ### Describe the bug
The Pile does not stream; a "File not found" error is returned. It looks like the Pile's files have been moved.
### Steps to reproduce the bug
To reproduce run the following code:
```
from datasets import load_dataset
dataset = load_dataset('EleutherAI/pile', 'en', split='train', streamin... | true | 2023-12-21T21:25:09Z | 2023-12-22T09:17:05Z | 2023-12-22T09:17:05Z | FelixLabelle | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6524 | false | [
"Hello @FelixLabelle,\r\n\r\nAs you can see in the Community tab of the corresponding dataset, it is a known issue: https://huggingface.co/datasets/EleutherAI/pile/discussions/15\r\n\r\nThe data has been taken down due to reported copyright infringement.\r\n\r\nFeel free to continue the discussion there."
] |
2,052,643,484 | 6,523 | fix tests | closed | true | 2023-12-21T15:36:21Z | 2023-12-21T15:56:54Z | 2023-12-21T15:50:38Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6523 | 2023-12-21T15:50:38Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6523 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6523). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,052,332,528 | 6,522 | Loading HF Hub Dataset (private org repo) fails to load all features | open | ### Describe the bug
When pushing a `Dataset` with multiple `Features` (`input`, `output`, `tags`) to the Hugging Face Hub (private org repo), and later downloading the `Dataset`, only `input` and `output` load; I believe the expected behavior is for all `Features` to be loaded by default?
### Steps to reproduce the ... | true | 2023-12-21T12:26:35Z | 2023-12-21T13:24:31Z | null | versipellis | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6522 | false | [] |
2,052,229,538 | 6,521 | The order of the splits is not preserved | closed | We had a regression: the order of the splits is not preserved. They are sorted alphabetically instead of keeping the original "train", "validation", "test" order.
Check: In branch "main"
```python
In [9]: dataset = load_dataset("adversarial_qa", "adversarialQA")
In [10]: dataset
Out[10]:
DatasetDict({
... | true | 2023-12-21T11:17:27Z | 2023-12-22T11:36:15Z | 2023-12-22T11:36:15Z | albertvillanova | MEMBER | null | null | 1 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6521 | false | [
"After investigation, I think the issue was introduced by the use of the Parquet export:\r\n- #6448\r\n\r\nI am proposing a fix.\r\n\r\nCC: @lhoestq "
] |
2,052,059,078 | 6,520 | Support commit_description parameter in push_to_hub | closed | Support `commit_description` parameter in `push_to_hub`.
CC: @Wauplin | true | 2023-12-21T09:36:11Z | 2023-12-21T14:49:47Z | 2023-12-21T14:43:35Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6520 | 2023-12-21T14:43:35Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6520 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6520). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,050,759,824 | 6,519 | Support push_to_hub canonical datasets | closed | Support `push_to_hub` canonical datasets.
This is necessary in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet
Note that before this PR, the `repo_id` "dataset_name" was transformed to "user/dataset_name". This behavior was introduced by:
... | true | 2023-12-20T15:16:45Z | 2023-12-21T14:48:20Z | 2023-12-21T14:40:57Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6519 | 2023-12-21T14:40:57Z | 4 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6519 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6519). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"nice catch @albertvillanova ",
"@huggingface/datasets this PR is ready for review.",
... |
2,050,137,038 | 6,518 | fix get_metadata_patterns function args error | closed | Bug get_metadata_patterns arg error https://github.com/huggingface/datasets/issues/6517 | true | 2023-12-20T09:06:22Z | 2023-12-21T15:14:17Z | 2023-12-21T15:07:57Z | d710055071 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6518 | 2023-12-21T15:07:57Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6518 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6518). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"hello!\r\n@albertvillanova \r\nThank you very much for your recognition。\r\nWhen can t... |
2,050,121,588 | 6,517 | Bug get_metadata_patterns arg error | closed | https://github.com/huggingface/datasets/blob/3f149204a2a5948287adcade5e90707aa5207a92/src/datasets/load.py#L1240C1-L1240C69
metadata_patterns = get_metadata_patterns(base_path, download_config=self.download_config) | true | 2023-12-20T08:56:44Z | 2023-12-22T00:24:23Z | 2023-12-22T00:24:23Z | d710055071 | CONTRIBUTOR | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6517 | false | [] |
2,050,033,322 | 6,516 | Support huggingface-hub pre-releases | closed | Support `huggingface-hub` pre-releases.
This way we will have our CI green when testing `huggingface-hub` release candidates. See: https://github.com/huggingface/datasets/tree/ci-test-huggingface-hub-v0.20.0.rc1
Close #6513. | true | 2023-12-20T07:52:29Z | 2023-12-20T08:51:34Z | 2023-12-20T08:44:44Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6516 | 2023-12-20T08:44:44Z | 2 | 2 | 2 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6516 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6516). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,049,724,251 | 6,515 | Why call http_head() when fsspec_head() succeeds | closed | https://github.com/huggingface/datasets/blob/a91582de288d98e94bcb5ab634ca1cfeeff544c5/src/datasets/utils/file_utils.py#L510C1-L523C14 | true | 2023-12-20T02:25:51Z | 2023-12-26T05:35:46Z | 2023-12-26T05:35:46Z | d710055071 | CONTRIBUTOR | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6515 | false | [] |
2,049,600,663 | 6,514 | Cache backward compatibility with 2.15.0 | closed | ...for datasets without scripts
It takes into account the changes in cache from
- https://github.com/huggingface/datasets/pull/6493: switch to `config/version/commit_sha` schema
- https://github.com/huggingface/datasets/pull/6454: fix `DataFilesDict` keys ordering when hashing
requires https://github.com/huggin... | true | 2023-12-19T23:52:25Z | 2023-12-21T21:14:11Z | 2023-12-21T21:07:55Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6514 | 2023-12-21T21:07:55Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6514 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6514). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"it's hard to tell if this works as expected without a test but i guess it's not trivial... |
2,048,869,151 | 6,513 | Support huggingface-hub 0.20.0 | closed | CI to test the support of `huggingface-hub` 0.20.0: https://github.com/huggingface/datasets/compare/main...ci-test-huggingface-hub-v0.20.0.rc1
We need to merge:
- #6510
- #6512
- #6516 | true | 2023-12-19T15:15:46Z | 2023-12-20T08:44:45Z | 2023-12-20T08:44:45Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6513 | false | [] |
2,048,795,819 | 6,512 | Remove deprecated HfFolder | closed | ...and use `huggingface_hub.get_token()` instead | true | 2023-12-19T14:40:49Z | 2023-12-19T20:21:13Z | 2023-12-19T20:14:30Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6512 | 2023-12-19T20:14:30Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6512 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6512). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,048,465,958 | 6,511 | Implement get dataset default config name | closed | Implement `get_dataset_default_config_name`.
Now that we support setting a configuration as default in `push_to_hub` (see #6500), we need a programmatically way to know in advance which is the default configuration. This will be used in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/a... | true | 2023-12-19T11:26:19Z | 2023-12-21T14:48:57Z | 2023-12-21T14:42:41Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6511 | 2023-12-21T14:42:40Z | 3 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6511 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6511). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@huggingface/datasets, this PR is ready for review.",
"<details>\n<summary>Show bench... |
2,046,928,742 | 6,510 | Replace `list_files_info` with `list_repo_tree` in `push_to_hub` | closed | Starting from `huggingface_hub` 0.20.0, `list_files_info` will be deprecated in favor of `list_repo_tree` (see https://github.com/huggingface/huggingface_hub/pull/1910) | true | 2023-12-18T15:34:19Z | 2023-12-19T18:05:47Z | 2023-12-19T17:58:34Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6510 | 2023-12-19T17:58:34Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6510 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6510). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"CI errors are unrelated to the changes, so I'm merging.",
"<details>\n<summary>Show b... |
2,046,720,869 | 6,509 | Better cast error when generating dataset | closed | I want to improve the error message for datasets like https://huggingface.co/datasets/m-a-p/COIG-CQIA
Cc @albertvillanova @severo: is this new error ok? Or should I use a dedicated error class?
New:
```python
Traceback (most recent call last):
File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py... | true | 2023-12-18T13:57:24Z | 2023-12-19T09:37:12Z | 2023-12-19T09:31:03Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6509 | 2023-12-19T09:31:03Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6509 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6509). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I created `DatatasetGenerationCastError` in `exceptions.py` that inherits from `Dataset... |
2,045,733,273 | 6,508 | Read GeoParquet files using parquet reader | closed | Let GeoParquet files with the file extension `*.geoparquet` or `*.gpq` be readable by the default parquet reader.
Those two file extensions are the ones most commonly used for GeoParquet files, and is included in the `gpq` validator tool at https://github.com/planetlabs/gpq/blob/e5576b4ee7306b4d2259d56c879465a9364da... | true | 2023-12-18T04:50:37Z | 2024-01-26T18:22:35Z | 2024-01-26T16:18:41Z | weiji14 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6508 | 2024-01-26T16:18:41Z | 13 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6508 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6508). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Cool ! Do you mind writing a test using a geoparquet file in `tests/io/test_parquet.py`... |
2,045,152,928 | 6,507 | where is glue_metric.py> @Frankie123421 what was the resolution to this? | closed | > @Frankie123421 what was the resolution to this?
use glue_metric.py instead of glue.py in load_metric
_Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_ | true | 2023-12-17T09:58:25Z | 2023-12-18T11:42:49Z | 2023-12-18T11:42:49Z | Mcccccc1024 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6507 | false | [] |
2,044,975,038 | 6,506 | Incorrect test set labels for RTE and CoLA datasets via load_dataset | closed | ### Describe the bug
The test set labels for the RTE and CoLA datasets when loading via datasets load_dataset are all -1.
Edit: It appears this is also the case for every other dataset except for MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax). Is this intended behavior to safeguard the t... | true | 2023-12-16T22:06:08Z | 2023-12-21T09:57:57Z | 2023-12-21T09:57:57Z | emreonal11 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6506 | false | [
"As this is a specific issue of the \"glue\" dataset, I have transferred it to the dataset Discussion page: https://huggingface.co/datasets/glue/discussions/15\r\n\r\nLet's continue the discussion there!"
] |
2,044,721,288 | 6,505 | Got stuck when I trying to load a dataset | open | ### Describe the bug
Hello, everyone. I ran into a problem when trying to load a data file using the `load_dataset` method on a Debian 10 system. The data file is not very large: only 1.63MB with 600 records.
Here is my code:
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_r... | true | 2023-12-16T11:51:07Z | 2024-12-24T16:45:52Z | null | yirenpingsheng | NONE | null | null | 7 | 2 | 2 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6505 | false | [
"I ran into the same problem when I used a server cluster (Slurm system managed) that couldn't load any of the huggingface datasets or models, but it worked on my laptop. I suspected some system configuration-related problem, but I had no idea. \r\nMy problems are consistent with [issue #2618](https://github.com/hu... |
2,044,541,154 | 6,504 | Error Pushing to Hub | closed | ### Describe the bug
Error when trying to push a dataset in a special format to the Hub
### Steps to reproduce the bug
```
import datasets
from datasets import Dataset
dataset_dict = {
"filename": ["apple", "banana"],
"token": [[[1,2],[3,4]],[[1,2],[3,4]]],
"label": [0, 1],
}
dataset = Dataset.from_d... | true | 2023-12-16T01:05:22Z | 2023-12-16T06:20:53Z | 2023-12-16T06:20:53Z | Jiayi-Pan | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6504 | false | [] |