| id | number | title | state | body | is_pull_request | created_at | updated_at | closed_at | user_login | author_association | pr_url | pr_merged_at | comments_count | reactions_total | reactions_plus1 | reactions_heart | draft | locked | labels | html_url | is_pr_url | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,163,799,868 | 6,707 | Silence ruff deprecation messages | closed | true | 2024-03-01T16:52:29Z | 2024-03-01T17:32:14Z | 2024-03-01T17:25:46Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6707 | 2024-03-01T17:25:46Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6707 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6707). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,163,783,123 | 6,706 | Update ruff | closed | true | 2024-03-01T16:44:58Z | 2024-03-01T17:02:13Z | 2024-03-01T16:52:17Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6706 | 2024-03-01T16:52:17Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6706 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6706). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,163,768,640 | 6,705 | Fix data_files when passing data_dir | closed | This code should not return empty data files
```python
from datasets import load_dataset_builder
revision = "3d406e70bc21c3ca92a9a229b4c6fc3ed88279fd"
b = load_dataset_builder("bigcode/the-stack-v2-dedup", data_dir="data/Dockerfile", revision=revision)
print(b.config.data_files)
```
Previously it would ret... | true | 2024-03-01T16:38:53Z | 2024-03-01T18:59:06Z | 2024-03-01T18:52:49Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6705 | 2024-03-01T18:52:49Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6705 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6705). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,163,752,391 | 6,704 | Improve default patterns resolution | closed | Separate the default patterns that match directories from the ones matching files and ensure directories are checked first (reverts the change from https://github.com/huggingface/datasets/pull/6244, which merged these patterns). Also, ensure that the glob patterns do not overlap to avoid duplicates in the result.
A... | true | 2024-03-01T16:31:25Z | 2024-04-23T09:43:09Z | 2024-03-15T15:22:03Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6704 | 2024-03-15T15:22:03Z | 11 | 1 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6704 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6704). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Awesome !\r\n\r\nNote that it can still create duplicates if a path matches several dir... |
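The ordering described in this PR — directory-style default patterns resolved before file-style ones, with duplicates removed — can be sketched with the stdlib. The pattern lists below are illustrative only, not the actual defaults:

```python
from fnmatch import fnmatch

# Illustrative pattern groups (not the real defaults): directory-style
# patterns are tried before file-style ones, and only the first group
# that matches anything wins, so the groups cannot overlap in the result.
DIR_PATTERNS = ["train/*", "test/*"]
FILE_PATTERNS = ["train.*", "test.*"]

def resolve(paths):
    """Return the paths matched by the first pattern group that hits.

    A set is used so the same path matched by two patterns in one group
    appears only once in the result.
    """
    for group in (DIR_PATTERNS, FILE_PATTERNS):
        matched = sorted({p for p in paths for pat in group if fnmatch(p, pat)})
        if matched:
            return matched
    return []
```

Because the groups are checked in order and the first non-empty match wins, a repository with both a `train/` directory and a `train.csv` file resolves to the directory contents only.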
2,163,250,590 | 6,703 | Unable to load dataset that was saved with `save_to_disk` | closed | ### Describe the bug
I get the following error message: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead.
### Steps to reproduce the bug
1. Save a dataset with `save_to_disk`
2. Try to load it with `load_dataset`
### Expected behavior
I am ab... | true | 2024-03-01T11:59:56Z | 2024-03-04T13:46:20Z | 2024-03-04T13:46:20Z | casper-hansen | NONE | null | null | 8 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6703 | false | [
"`save_to_disk` uses a special serialization that can only be read using `load_from_disk`.\r\n\r\nContrary to `load_dataset`, `load_from_disk` directly loads Arrow files and uses the dataset directory as cache.\r\n\r\nOn the other hand `load_dataset` does a conversion step to get Arrow files from the raw data files... |
2,161,938,484 | 6,702 | Push samples to dataset on hub without having the dataset locally | closed | ### Feature request
Say I have the following code:
```
from datasets import Dataset
import pandas as pd
new_data = {
"column_1": ["value1", "value2"],
"column_2": ["value3", "value4"],
}
df_new = pd.DataFrame(new_data)
dataset_new = Dataset.from_pandas(df_new)
# add these samples to a remote datase... | true | 2024-02-29T19:17:12Z | 2024-03-08T21:08:38Z | 2024-03-08T21:08:38Z | jbdel | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6702 | false | [
"Hi ! For now I would recommend creating a new Parquet file using `dataset_new.to_parquet()` and upload it to HF using `huggingface_hub` every time you get a new batch of data. You can name the Parquet files `0000.parquet`, `0001.parquet`, etc.\r\n\r\nThough maybe make sure to not upload one file per sample since t... |
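The workflow suggested in the comment above — appending zero-padded Parquet shards named `0000.parquet`, `0001.parquet`, etc. — implies picking the next shard name from what is already uploaded. A minimal sketch (the helper itself is hypothetical):

```python
import re

def next_shard_name(existing_names):
    """Pick the next zero-padded shard name (0000.parquet, 0001.parquet, ...)
    from the file names already present in the repo; non-shard files
    such as README.md are ignored."""
    indices = [int(m.group(1)) for name in existing_names
               if (m := re.fullmatch(r"(\d{4})\.parquet", name))]
    return f"{max(indices, default=-1) + 1:04d}.parquet"
```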
2,161,448,017 | 6,701 | Base parquet batch_size on parquet row group size | closed | This allows to stream datasets like [Major-TOM/Core-S2L2A](https://huggingface.co/datasets/Major-TOM/Core-S2L2A) which have row groups with few rows (one row is ~10MB). Previously the cold start would take a lot of time and OOM because it would download many row groups before yielding the first example.
I tried on O... | true | 2024-02-29T14:53:01Z | 2024-02-29T15:15:18Z | 2024-02-29T15:08:55Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6701 | 2024-02-29T15:08:55Z | 2 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6701 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6701). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
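The idea behind this change can be reduced to a one-line heuristic: cap the read batch at the row-group size so the first example can be yielded after a single group is downloaded. A hedged sketch, where the default value is an assumption for illustration:

```python
DEFAULT_BATCH_SIZE = 10_000  # assumed previous fixed default, for illustration

def batch_size_for(rows_per_row_group):
    """Never read more than one row group per batch: for files whose row
    groups hold few, large rows (~10MB each in Major-TOM/Core-S2L2A), this
    lets streaming yield the first example after one group instead of
    buffering many groups and risking an OOM."""
    return max(1, min(DEFAULT_BATCH_SIZE, rows_per_row_group))
```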
2,158,871,038 | 6,700 | remove_columns is not in-place but the doc shows it is in-place | closed | ### Describe the bug
The doc of `datasets` v2.17.0/v2.17.1 shows that `remove_columns` is in-place. [link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)
In the text classification example of transformers v4.38.1, the columns are not removed.
h... | true | 2024-02-28T12:36:22Z | 2024-04-02T17:15:28Z | 2024-04-02T17:15:28Z | shelfofclub | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6700 | false | [
"Good catch! I've opened a PR with a fix in the `transformers` repo.",
"@mariosasko Thanks!\r\n\r\nWill the doc of `datasets` be updated?\r\n\r\nI find some possible mistakes in doc about whether `remove_columns` is in-place.\r\n1. [You can also remove a column using map() with remove_columns but the present meth... |
2,158,152,341 | 6,699 | `Dataset` unexpectedly changes dict data and may cause an error | open | ### Describe the bug
The parsed JSON dict unexpectedly contains keys with `None` values.
### Steps to reproduce the bug
```jsonl test.jsonl
{"id": 0, "indexs": {"-1": [0, 10]}}
{"id": 1, "indexs": {"-1": [0, 10]}}
```
```python
dataset = Dataset.from_json('.test.jsonl')
print(dataset[0])
```
Result:
```... | true | 2024-02-28T05:30:10Z | 2024-02-28T19:14:36Z | null | scruel | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6699 | false | [
"If `test.jsonl` contains more lines like:\r\n```\r\n{\"id\": 0, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 1, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 2, \"indexs\": {\"-2\": [0, 10]}}\r\n...\r\n{\"id\": n, \"indexs\": {\"-9999\": [0, 10]}}\r\n```\r\n\r\n`Dataset.from_json` will just raise an error:\r\n```\r\nAn... |
2,157,752,392 | 6,698 | Faster `xlistdir` | closed | Pass `detail=False` to the `fsspec` `listdir` to avoid unnecessarily fetching expensive metadata about the paths. | true | 2024-02-27T22:55:08Z | 2024-02-27T23:44:49Z | 2024-02-27T23:38:14Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6698 | 2024-02-27T23:38:14Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6698 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6698). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"CI failure is unrelated to the changes.",
"<details>\n<summary>Show benchmarks</summa... |
2,157,322,224 | 6,697 | Unable to Load Dataset in Kaggle | closed | ### Describe the bug
Having installed the latest versions of transformers==4.38.1 and datasets==2.17.1, I am unable to load the dataset in a Kaggle notebook.
I get this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recen... | true | 2024-02-27T18:19:34Z | 2024-02-29T17:32:42Z | 2024-02-29T17:32:41Z | vrunm | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6697 | false | [
"FWIW, I run `load_dataset(\"llm-blender/mix-instruct\")` and it ran successfully.\r\nCan you clear your cache and try again?\r\n\r\n\r\n### Environment Info\r\n\r\n- `datasets` version: 2.17.0\r\n- Platform: Linux-6.2.6-76060206-generic-x86_64-with-glibc2.35\r\n- Python version: 3.9.13\r\n- `huggingface_hub` versi... |
2,154,161,357 | 6,696 | Make JSON builder support an array of strings | closed | Support JSON file with an array of strings.
Fix #6695. | true | 2024-02-26T13:18:31Z | 2024-02-28T06:45:23Z | 2024-02-28T06:39:12Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6696 | 2024-02-28T06:39:12Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6696 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6696). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,154,075,509 | 6,695 | Support JSON file with an array of strings | closed | Support loading a dataset from a JSON file with an array of strings.
See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 | true | 2024-02-26T12:35:11Z | 2024-03-08T14:16:25Z | 2024-02-28T06:39:13Z | albertvillanova | MEMBER | null | null | 1 | 1 | 0 | 1 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6695 | false | [
"https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 has been fixed, but how can we check if there are other datasets with the same error, in datasets-server's database? I don't know how to get the list of erroneous cache entries, since we only copied `Error code: JobManagerCrashedError`, bu... |
2,153,086,984 | 6,694 | __add__ for Dataset, IterableDataset | open | It's too cumbersome to write `from datasets import concatenate_datasets` every time we perform a dataset merging operation, so we have added a simple `__add__` magic method to each class using `concatenate_datasets`.
```python
from datasets import load_dataset
bookcorpus = load_dataset("bookc... | true | 2024-02-26T01:46:55Z | 2024-02-29T16:52:58Z | null | oh-gnues-iohc | NONE | https://github.com/huggingface/datasets/pull/6694 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6694 | true | [
"Hi! You can find a reason why we are against this feature in https://github.com/huggingface/datasets/issues/3449. \r\n\r\n> It's too cumbersome to write this command every time we perform a dataset merging operation\r\n\r\nExplicit is better than implicit, so this isn't a good enough reason. \r\n\r\nThanks for the... |
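For context, the proposal boils down to syntactic sugar like the following toy version, which delegates `+` to a concatenation helper. This is an illustration only, not the actual `datasets` implementation:

```python
class ToyDataset:
    """Minimal stand-in for datasets.Dataset showing the proposed sugar:
    `a + b` delegating to a concatenate function."""

    def __init__(self, rows):
        self.rows = list(rows)

    def __add__(self, other):
        # The proposed magic method: just forward to the explicit helper.
        return toy_concatenate([self, other])

def toy_concatenate(dsets):
    """Stand-in for concatenate_datasets: merge rows in order."""
    merged = []
    for d in dsets:
        merged.extend(d.rows)
    return ToyDataset(merged)
```

As the maintainer's reply notes, the explicit `concatenate_datasets` call was preferred over this implicit form.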
2,152,887,712 | 6,693 | Update the print message for chunked_dataset in process.mdx | closed | Update documentation to align with `Dataset.__repr__` change after #423 | true | 2024-02-25T18:37:07Z | 2024-02-25T19:57:12Z | 2024-02-25T19:51:02Z | gzbfgjf2 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6693 | 2024-02-25T19:51:02Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6693 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6693). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,152,270,987 | 6,692 | Enhancement: Enable loading TSV files in load_dataset() | closed | Fix #6691 | true | 2024-02-24T11:38:59Z | 2024-02-26T15:33:50Z | 2024-02-26T07:14:03Z | harsh1504660 | NONE | https://github.com/huggingface/datasets/pull/6692 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6692 | true | [
"Hi @harsh1504660,\r\n\r\nThanks for your work, but this functionality already exists. See my comment in the corresponding issue: https://github.com/huggingface/datasets/issues/6691#issuecomment-1963449923\r\n\r\nNext time you would like to contribute, I would suggest you take on an issue that is previously validat... |
2,152,134,041 | 6,691 | load_dataset() does not support tsv | closed | ### Feature request
the load_dataset() function for local files supports types like csv and json, but not tsv (tab-separated values).
### Motivation
Can't easily load TSV files; they have to be converted to another type like CSV and then loaded.
### Your contribution
Can try by raising a PR with a little help, c... | true | 2024-02-24T05:56:04Z | 2024-02-26T07:15:07Z | 2024-02-26T07:09:35Z | dipsivenkatesh | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6691 | false | [
"#self-assign",
"Hi @dipsivenkatesh,\r\n\r\nPlease note that this functionality is already implemented. Our CSV builder uses `pandas.read_csv` under the hood, and you can pass the parameter `delimiter=\"\\t\"` to read TSV files.\r\n\r\nSee the list of CSV config parameters in our docs: https://huggingface.co/docs... |
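The maintainer's point — a TSV file is just CSV with a tab delimiter, hence `delimiter="\t"` on the CSV builder — holds for the stdlib `csv` module as well:

```python
import csv
import io

# A tiny in-memory TSV file, for illustration.
TSV = "id\tname\n1\tAda\n2\tGrace\n"

# Same principle as load_dataset("csv", ..., delimiter="\t"):
# TSV is parsed by the CSV machinery once the delimiter is a tab.
rows = list(csv.DictReader(io.StringIO(TSV), delimiter="\t"))
```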
2,150,800,065 | 6,690 | Add function to convert a script-dataset to Parquet | closed | Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet" | true | 2024-02-23T10:28:20Z | 2024-04-12T15:27:05Z | 2024-04-12T15:27:05Z | albertvillanova | MEMBER | null | null | 0 | 2 | 2 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6690 | false | [] |
2,149,581,147 | 6,689 | .load_dataset() method defaults to zstandard | closed | ### Describe the bug
Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets.
This is poor behavior, because not only is zstandard not a dependency in the huggingface package (and therefore, your dataset loading will be interrupted while it asks you to install the package), but it ... | true | 2024-02-22T17:39:27Z | 2024-03-07T14:54:16Z | 2024-03-07T14:54:15Z | ElleLeonne | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6689 | false | [
"The dataset is made of JSON files compressed using zstandard, as you can see here: https://huggingface.co/datasets/cerebras/SlimPajama-627B/tree/main/test/chunk1\r\n\r\nThat's why it asks for zstandard to be installed.\r\n\r\nThough I'm intrigued that you manage to load the dataset without zstandard installed. May... |
2,148,609,859 | 6,688 | Tensor type (e.g. from `return_tensors`) ignored in map | open | ### Describe the bug
I don't know if it is a bug or expected behavior, but the tensor type seems to be ignored after applying map. For example, mapping a transformers tokenizer over text always returns lists and ignores the `return_tensors` argument.
If this is an expected behaviour (e.g., fo... | true | 2024-02-22T09:27:57Z | 2024-02-22T15:56:21Z | null | srossi93 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6688 | false | [
"Hi, this is expected behavior since all the tensors are converted to Arrow data (the storage type behind a Dataset).\r\n\r\nTo get pytorch tensors back, you can set the dataset format to \"torch\":\r\n\r\n```python\r\nds = ds.with_format(\"torch\")\r\n```",
"Thanks. Just one additional question. During the pipel... |
2,148,554,178 | 6,687 | fsspec: support fsspec>=2023.12.0 glob changes | closed | - adds support for the `fs.glob` changes introduced in `fsspec==2023.12.0` and unpins the current upper bound
Should close #6644
Should close #6645
The `test_data_files` glob/pattern tests pass for me in:
- `fsspec==2023.10.0` (the pinned max version in datasets `main`)
- `fsspec==2023.12.0` (#6644)
- `fsspec... | true | 2024-02-22T08:59:32Z | 2024-03-04T12:59:42Z | 2024-02-29T15:12:17Z | pmrowla | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6687 | 2024-02-29T15:12:17Z | 7 | 5 | 0 | 5 | false | false | [] | https://github.com/huggingface/datasets/pull/6687 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6687). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Looking into the CI failure, this PR is incompatible with `huggingface-hub>=0.20.0`. It... |
2,147,795,103 | 6,686 | Question: Is there any way to upload a large image dataset? | open | I am uploading an image dataset like this:
```
dataset = load_dataset(
"json",
data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_si... | true | 2024-02-21T22:07:21Z | 2024-05-02T03:44:59Z | null | zhjohnchan | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6686 | false | [
"```\r\nimport pandas as pd\r\nfrom datasets import Dataset, Image\r\n\r\n# Read the CSV file\r\ndata = pd.read_csv(\"XXXX.csv\")\r\n\r\n# Create a Hugging Face Dataset\r\ndataset = Dataset.from_pandas(data)\r\ndataset = dataset.cast_column(\"file_name\", Image())\r\n\r\n# Upload to Hugging Face Hub (make sure auth... |
2,145,570,006 | 6,685 | Updated Quickstart Notebook link | closed | Fixed Quickstart Notebook Link in the [Overview notebook](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb) | true | 2024-02-21T01:04:18Z | 2024-03-12T21:31:04Z | 2024-02-25T18:48:08Z | Codeblockz | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6685 | 2024-02-25T18:48:08Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6685 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6685). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,144,092,388 | 6,684 | Improve error message for gated datasets on load | closed | Internal Slack discussion: https://huggingface.slack.com/archives/C02V51Q3800/p1708424971135029 | true | 2024-02-20T10:51:27Z | 2024-02-20T15:40:52Z | 2024-02-20T15:33:56Z | lewtun | MEMBER | https://github.com/huggingface/datasets/pull/6684 | 2024-02-20T15:33:56Z | 7 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6684 | true | [
"Thank you ! Should we also add the link to the dataset page ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6684). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Thank you ! Should... |
2,142,751,955 | 6,683 | Fix imagefolder dataset url | closed | true | 2024-02-19T16:26:51Z | 2024-02-19T17:24:25Z | 2024-02-19T17:18:10Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6683 | 2024-02-19T17:18:10Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6683 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6683). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,142,000,800 | 6,682 | Update GitHub Actions to Node 20 | closed | Update GitHub Actions to Node 20.
Fix #6679. | true | 2024-02-19T10:10:50Z | 2024-02-28T07:02:40Z | 2024-02-28T06:56:34Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6682 | 2024-02-28T06:56:34Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6682 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6682). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,141,985,239 | 6,681 | Update release instructions | closed | Update release instructions. | true | 2024-02-19T10:03:08Z | 2024-02-28T07:23:49Z | 2024-02-28T07:17:22Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6681 | 2024-02-28T07:17:22Z | 2 | 0 | 0 | 0 | false | false | [
"maintenance"
] | https://github.com/huggingface/datasets/pull/6681 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6681). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,141,979,527 | 6,680 | Set dev version | closed | true | 2024-02-19T10:00:31Z | 2024-02-19T10:06:43Z | 2024-02-19T10:00:40Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6680 | 2024-02-19T10:00:40Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6680 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6680). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,141,953,981 | 6,679 | Node.js 16 GitHub Actions are deprecated | closed | `Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
We should update them to Node 20.
See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678
> Node.js 16 actions are deprecat... | true | 2024-02-19T09:47:37Z | 2024-02-28T06:56:35Z | 2024-02-28T06:56:35Z | albertvillanova | MEMBER | null | null | 0 | 1 | 1 | 0 | null | false | [
"maintenance"
] | https://github.com/huggingface/datasets/issues/6679 | false | [] |
2,141,902,154 | 6,678 | Release: 2.17.1 | closed | true | 2024-02-19T09:24:29Z | 2024-02-19T10:03:00Z | 2024-02-19T09:56:52Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6678 | 2024-02-19T09:56:52Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6678 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6678). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,141,244,167 | 6,677 | Pass through information about location of cache directory. | closed | If cache directory is set, information is not passed through.
Pass download config in as an arg too. | true | 2024-02-18T23:48:57Z | 2024-02-28T18:57:39Z | 2024-02-28T18:51:15Z | stridge-cruxml | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6677 | 2024-02-28T18:51:15Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6677 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6677). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,140,648,619 | 6,676 | Can't Read List of JSON Files Properly | open | ### Describe the bug
Trying to read a bunch of JSON files into the Dataset class, but the default approach doesn't work. I don't get why it works when I read them one by one but not when I pass them as a list :man_shrugging:
The code fails with
```
ArrowInvalid: JSON parse error: Invalid value. in row 0
UnicodeDecodeError... | true | 2024-02-17T22:58:15Z | 2024-03-02T20:47:22Z | null | lordsoffallen | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6676 | false | [
"Found the issue, if there are other files in the directory, it gets caught into this `*` so essentially it should be `*.json`. Could we possibly to check for list of files to make sure the pattern matches json files and raise error if not?",
"I don't think we should filter for `*.json` as this might silently rem... |
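The resolution reached in the thread — qualify the pattern as `*.json` instead of a bare `*` — can be sketched as a filter that also surfaces the stray files, in line with the suggestion to warn rather than silently drop them. The helper name is hypothetical:

```python
from fnmatch import fnmatch

def select_json_files(paths):
    """Keep only *.json paths and report what a bare * would also have
    matched, so strays can be warned about instead of silently breaking
    the JSON parser."""
    wanted = [p for p in paths if fnmatch(p, "*.json")]
    strays = [p for p in paths if not fnmatch(p, "*.json")]
    return wanted, strays
```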
2,139,640,381 | 6,675 | Allow image model (color conversion) to be specified as part of datasets Image() decode | closed | ### Feature request
Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image where convert is usually called in dataset, for native torchvision https://pytorch.or... | true | 2024-02-16T23:43:20Z | 2024-03-18T15:41:34Z | 2024-03-18T15:41:34Z | rwightman | NONE | null | null | 1 | 2 | 2 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6675 | false | [
"It would be a great addition indeed :)\r\n\r\nThis can be implemented the same way we have `sampling_rate` for Audio(): we just add a new parameter to the Image() type and take this parameter into account in `Image.decode_example`\r\n\r\nEDIT: adding an example of how it can be used:\r\n\r\n```python\r\nds = ds.ca... |
2,139,595,576 | 6,674 | Deprecated Overview.ipynb link to new Quickstart notebook is invalid | closed | ### Describe the bug
In the deprecated notebook found [here](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb), the link to the new notebook is broken.
### Steps to reproduce the bug
Click the [Quickstart notebook](https://github.com/huggingface/notebooks/blob/main/datasets_doc/quicksta... | true | 2024-02-16T22:51:35Z | 2024-02-25T18:48:09Z | 2024-02-25T18:48:09Z | Codeblockz | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6674 | false | [
"Good catch! Feel free to open a PR to fix the link."
] |
2,139,522,827 | 6,673 | IterableDataset `set_epoch` is ignored when DataLoader `persistent_workers=True` | closed | ### Describe the bug
When persistent workers are enabled, the epoch that's set via the IterableDataset instance held by the training process is ignored by the workers as they are disconnected across processes.
PyTorch samplers for non-iterable datasets have a mechanism to sync this, datasets.IterableDataset does ... | true | 2024-02-16T21:38:12Z | 2024-07-01T17:45:31Z | 2024-07-01T17:45:31Z | rwightman | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug",
"streaming"
] | https://github.com/huggingface/datasets/issues/6673 | false | [] |
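For background, `set_epoch` exists so the shuffle order changes deterministically from one epoch to the next. A stdlib sketch of that behavior (the `seed + epoch` mixing is an assumption) shows what goes wrong when persistent workers never receive the updated epoch:

```python
import random

def epoch_shuffled(items, seed, epoch):
    """Deterministic per-epoch shuffle: the epoch feeds the RNG seed.
    A persistent worker that never sees the updated epoch keeps
    replaying the epoch-0 order forever."""
    rng = random.Random(seed + epoch)
    out = list(items)
    rng.shuffle(out)
    return out
```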
2,138,732,288 | 6,672 | Remove deprecated verbose parameter from CSV builder | closed | Remove deprecated `verbose` parameter from CSV builder.
Note that the `verbose` parameter is deprecated since pandas 2.2.0. See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450
Fix #6671. | true | 2024-02-16T14:26:21Z | 2024-02-19T09:26:34Z | 2024-02-19T09:20:22Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6672 | 2024-02-19T09:20:22Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6672 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I am merging this PR (so that it is included in the next patch release) to remove the d... |
2,138,727,870 | 6,671 | CSV builder raises deprecation warning on verbose parameter | closed | CSV builder raises a deprecation warning on `verbose` parameter:
```
FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version.
```
See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450 | true | 2024-02-16T14:23:46Z | 2024-02-19T09:20:23Z | 2024-02-19T09:20:23Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6671 | false | [] |
2,138,372,958 | 6,670 | ValueError | closed | ### Describe the bug
ValueError Traceback (most recent call last)
[<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>()
9 import numpy as np
10 import matplotlib.pyplot as plt
---> 11 from datasets import DatasetDict, Dataset
12 from transf... | true | 2024-02-16T11:05:17Z | 2024-02-17T04:26:34Z | 2024-02-16T14:43:53Z | prashanth19bolukonda | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6670 | false | [
"Hi @prashanth19bolukonda,\r\n\r\nYou have to restart the notebook runtime session after the installation of `datasets`.\r\n\r\nDuplicate of:\r\n- #5923",
"Thank you soo much\r\n\r\nOn Fri, Feb 16, 2024 at 8:14 PM Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> Closed #6670 <https://github.com/huggin... |
2,138,322,662 | 6,669 | Attribute error when running trainer.train() | closed | ### Describe the bug
AttributeError Traceback (most recent call last)
Cell In[39], line 2
1 # Start the training process
----> 2 trainer.train()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore... | true | 2024-02-16T10:40:49Z | 2024-03-01T10:58:00Z | 2024-02-29T17:25:17Z | prashanth19bolukonda | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6669 | false | [
"Hi! Kaggle notebooks use an outdated version of `datasets`, so you should update the `datasets` installation (with `!pip install -U datasets`) to avoid the error.",
"Thank you for your response\r\n\r\nOn Thu, Feb 29, 2024 at 10:55 PM Mario Šaško ***@***.***>\r\nwrote:\r\n\r\n> Closed #6669 <https://github.com/hu... |
2,137,859,935 | 6,668 | Chapter 6 - Issue Loading `cnn_dailymail` dataset | open | ### Describe the bug
So I am getting this bug when I try to run cell 4 of the Chapter 6 notebook code:
`dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")`
Error Message:
```
---------------------------------------------------------------------------
ValueError Tracebac... | true | 2024-02-16T04:40:56Z | 2024-02-16T04:40:56Z | null | hariravichandran | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6668 | false | [] |
2,137,769,552 | 6,667 | Default config for squad is incorrect | open | ### Describe the bug
If you download Squad, it will download the plain_text version, but the config still specifies "default", so if you set offline mode the cache will try to look it up according to the config_id, which is "default", and this will say:
ValueError: Couldn't find cache for squad for config 'default'... | true | 2024-02-16T02:36:55Z | 2024-02-23T09:10:00Z | null | kiddyboots216 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6667 | false | [
"you can try: pip install datasets==2.16.1"
] |
2,136,136,425 | 6,665 | Allow SplitDict setitem to replace existing SplitInfo | closed | Fix this code provided by @clefourrier
```python
import datasets
import os
token = os.getenv("TOKEN")
results = datasets.load_dataset("gaia-benchmark/results_public", "2023", token=token, download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)
results["test"] = datasets.Dataset.from_list([row for row in resu... | true | 2024-02-15T10:17:08Z | 2024-03-01T16:02:46Z | 2024-03-01T15:56:38Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6665 | 2024-03-01T15:56:38Z | 2 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6665 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6665). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,135,483,978 | 6,664 | Revert the changes in `arrow_writer.py` from #6636 | closed | #6636 broke `write_examples_on_file` and `write_batch` from the class `ArrowWriter`. I'm undoing these changes. See #6663.
Note the current implementation doesn't keep the order of the columns and the schema, thus setting a wrong schema for each column. | true | 2024-02-15T01:47:33Z | 2024-02-16T14:02:39Z | 2024-02-16T02:31:11Z | bryant1410 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6664 | 2024-02-16T02:31:11Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6664 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6664). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Hi! We can't revert this as the \"reverted\" implementation has quadratic time comple... |
2,135,480,811 | 6,663 | `write_examples_on_file` and `write_batch` are broken in `ArrowWriter` | closed | ### Describe the bug
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636. The order between the columns and the schema is not preserved anymore. So these functions don't work anymore unless the order happens to align well.
### Steps to reproduce the bug
Try to do `write_batch` with any... | true | 2024-02-15T01:43:27Z | 2024-02-16T09:25:00Z | 2024-02-16T09:25:00Z | bryant1410 | CONTRIBUTOR | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6663 | false | [
"Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.",
"> Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.\r\n\r\nI feel that'd be good, but it'd be great to release a hotfix ASAP (a re... |
2,132,425,812 | 6,662 | fix: show correct package name to install biopython | closed | When you try to download a dataset that uses [biopython](https://github.com/biopython/biopython), like `load_dataset("InstaDeepAI/multi_species_genomes")`, you get the error:
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("InstaDeepAI/multi_species_genomes")
/home/j.vangoey/.pyenv/versions/m... | true | 2024-02-13T14:15:04Z | 2024-03-01T17:49:48Z | 2024-03-01T17:43:39Z | BioGeek | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6662 | 2024-03-01T17:43:39Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6662 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6662). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,132,296,267 | 6,661 | Import error on Google Colab | closed | ### Describe the bug
Cannot be imported on Google Colab, the import throws the following error:
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
1. `! pip install -U datasets`
2. `import dataset... | true | 2024-02-13T13:12:40Z | 2024-02-25T16:37:54Z | 2024-02-14T08:04:47Z | kithogue | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6661 | false | [
"Hi! This can happen if an incompatible `pyarrow` version (`pyarrow<12.0.0`) has been imported before the `datasets` installation and the Colab session hasn't been restarted afterward. To avoid the error, go to \"Runtime -> Restart session\" after `!pip install -U datasets` and before `import datasets`, or insert t... |
2,131,977,011 | 6,660 | Automatic Conversion for uint16/uint32 to Compatible PyTorch Dtypes | closed | This PR addresses an issue encountered when utilizing uint16 or uint32 datatypes with datasets, followed by attempting to convert these datasets into PyTorch-compatible formats. Currently, doing so results in a TypeError due to incompatible datatype conversion, as illustrated by the following example:
```python
from ... | true | 2024-02-13T10:24:33Z | 2024-03-01T19:01:57Z | 2024-03-01T18:52:37Z | mohalisad | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6660 | 2024-03-01T18:52:37Z | 2 | 3 | 3 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6660 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6660). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,129,229,810 | 6,659 | Change default compression argument for JsonDatasetWriter | closed | Change default compression type from `None` to "infer", to align with pandas' defaults.
Documentation asks the user to supply `to_json_kwargs` with arguments suitable for pandas' `to_json` method. At the same time, while pandas' by default uses ["infer"](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.... | true | 2024-02-11T23:49:07Z | 2024-03-01T17:51:50Z | 2024-03-01T17:44:55Z | Rexhaif | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6659 | 2024-03-01T17:44:55Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6659 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6659). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Can someone check this out?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArr... |
2,129,158,371 | 6,658 | [Resumable IterableDataset] Add IterableDataset state_dict | closed | A simple implementation of a mechanism to resume an IterableDataset.
It works by restarting at the latest shard and skipping samples. It provides fast resuming (though not instantaneous).
Example:
```python
from datasets import Dataset, concatenate_datasets
ds = Dataset.from_dict({"a": range(5)}).to_iterable_d... | true | 2024-02-11T20:35:52Z | 2024-10-01T10:19:38Z | 2024-06-03T19:15:39Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6658 | 2024-06-03T19:15:39Z | 20 | 2 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6658 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6658). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"would be nice to have this feature in the new dataset release!",
"Before finalising t... |
2,129,147,085 | 6,657 | Release not pushed to conda channel | closed | ### Describe the bug
The GitHub Actions step to publish the release 2.17.0 to the conda channel has failed due to an expired token. Can someone please update the anaconda token and rerun the failed action? @albertvillanova ?
:
File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single
writer.write_table(table)
... | true | 2024-02-09T15:14:21Z | 2024-11-29T10:06:57Z | null | Riccorl | NONE | null | null | 2 | 2 | 2 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6656 | false | [
"I get similar when dealing with a large jsonl file (6k lines), \r\n\r\n> TypeError: Couldn't cast array of type timestamp[us] to null\r\n\r\nYet when I split it into 1k lines, files, load_dataset works fine!\r\n\r\nhttps://github.com/huggingface/course/issues/692\r\n\r\n",
"What's the proposed solution? :-)"
] |
2,127,020,042 | 6,655 | Cannot load the dataset go_emotions | open | ### Describe the bug
When I run the following code I get an exception:
`go_emotions = load_dataset("go_emotions")`
> AttributeError Traceback (most recent call last)
Cell In[6], [line 1](vscode-notebook-cell:?execution_count=6&line=1)
----> [1](vscode-notebook-cell:?execution_count=6&l... | true | 2024-02-09T12:15:39Z | 2024-02-12T09:35:55Z | null | arame | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6655 | false | [
"Thanks for reporting, @arame.\r\n\r\nI guess you have an old version of `transformers` (that submodule is present in `transformers` since version 3.0.1, since nearly 4 years ago). If you update it, the error should disappear:\r\n```shell\r\npip install -U transformers\r\n```\r\n\r\nOn the other hand, I am wonderin... |
2,126,939,358 | 6,654 | Batched dataset map throws exception that cannot cast fixed length array to Sequence | closed | ### Describe the bug
I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 20... | true | 2024-02-09T11:23:19Z | 2024-02-12T08:26:53Z | 2024-02-12T08:26:53Z | keesjandevries | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6654 | false | [
"Hi ! This issue has been fixed by https://github.com/huggingface/datasets/pull/6283\r\n\r\nCan you try again with the new release 2.17.0 ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\n",
"Amazing! It's indeed fixed now. Thanks!"
] |
2,126,831,929 | 6,653 | Set dev version | closed | true | 2024-02-09T10:12:02Z | 2024-02-09T10:18:20Z | 2024-02-09T10:12:12Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6653 | 2024-02-09T10:12:12Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6653 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6653). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,126,760,798 | 6,652 | Release: 2.17.0 | closed | true | 2024-02-09T09:25:01Z | 2024-02-09T10:11:48Z | 2024-02-09T10:05:35Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6652 | 2024-02-09T10:05:35Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6652 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6652). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,126,649,626 | 6,651 | Slice splits support for datasets.load_from_disk | open | ### Feature request
Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset`.
### Motivation
Slice splits are convenient in a number of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogeniz... | true | 2024-02-09T08:00:21Z | 2024-06-14T14:42:46Z | null | mhorlacher | NONE | null | null | 0 | 7 | 7 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6651 | false | [] |
2,125,680,991 | 6,650 | AttributeError: 'InMemoryTable' object has no attribute '_batches' | open | ### Describe the bug
```
Traceback (most recent call last):
File "finetune.py", line 103, in <module>
main(args)
File "finetune.py", line 45, in main
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.... | true | 2024-02-08T17:11:26Z | 2024-02-21T00:34:41Z | null | matsuobasho | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6650 | false | [
"Hi! Does running the following code also return the same error on your machine? \r\n\r\n```python\r\nimport copy\r\nimport pyarrow as pa\r\nfrom datasets.table import InMemoryTable\r\n\r\ncopy.deepcopy(InMemoryTable(pa.table({\"a\": [1, 2, 3], \"b\": [\"foo\", \"bar\", \"foobar\"]})))\r\n```",
"No, it doesn't, ... |
2,124,940,213 | 6,649 | Minor multi gpu doc improvement | closed | just added torch.no_grad and eval() | true | 2024-02-08T11:17:24Z | 2024-02-08T11:23:35Z | 2024-02-08T11:17:35Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6649 | 2024-02-08T11:17:35Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6649 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6649). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,124,813,589 | 6,648 | Document usage of hfh cli instead of git | closed | (basically the same content as the hfh upload docs, but adapted for datasets) | true | 2024-02-08T10:24:56Z | 2024-02-08T13:57:41Z | 2024-02-08T13:51:39Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6648 | 2024-02-08T13:51:39Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6648 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6648). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,123,397,569 | 6,647 | Update loading.mdx to include "jsonl" file loading. | open | * A small update to the documentation, noting the ability to load jsonl files. | true | 2024-02-07T16:18:08Z | 2024-02-08T15:34:17Z | null | mosheber | NONE | https://github.com/huggingface/datasets/pull/6647 | null | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6647 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6647). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Thanks for adding the explicit loading command.\r\n> \r\n> However, I would move it j... |
2,123,134,128 | 6,646 | Better multi-gpu example | closed | Use Qwen1.5-0.5B-Chat as an easy example for multi-GPU
the previous example was using a model for translation and the way it was setup was not really the right way to use the model. | true | 2024-02-07T14:15:01Z | 2024-02-09T17:43:32Z | 2024-02-07T14:59:11Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6646 | 2024-02-07T14:59:11Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6646 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6646). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,122,956,818 | 6,645 | Support fsspec 2024.2 | closed | Support fsspec 2024.2.
First, we should address:
- #6644 | true | 2024-02-07T12:45:29Z | 2024-02-29T15:12:19Z | 2024-02-29T15:12:19Z | albertvillanova | MEMBER | null | null | 1 | 8 | 8 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6645 | false | [
"I'd be very grateful. This upper bound banished me straight into dependency hell today. :("
] |
2,122,955,282 | 6,644 | Support fsspec 2023.12 | closed | Support fsspec 2023.12 by handling previous and new glob behavior. | true | 2024-02-07T12:44:39Z | 2024-02-29T15:12:18Z | 2024-02-29T15:12:18Z | albertvillanova | MEMBER | null | null | 1 | 6 | 6 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6644 | false | [
"The pinned fsspec version range dependency conflict has been affecting several of our users in https://github.com/iterative/dvc. I've opened an initial PR that I think should resolve the glob behavior changes with using datasets + the latest fsspec release.\r\n\r\nPlease let us know if there's any other fsspec rel... |
2,121,239,039 | 6,643 | Faiss GPU index cannot be serialised when passed to trainer | open | ### Describe the bug
I am working on a retrieval project and encountering I have encountered two issues in the hugging face faiss integration:
1. I am trying to pass in a dataset with a faiss index to the Huggingface trainer. The code works for a cpu faiss index, but doesn't for a gpu one, getting error:
```
... | true | 2024-02-06T16:41:00Z | 2024-02-15T10:29:32Z | null | rubenweitzman | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6643 | false | [
"Hi ! make sure your query embeddings are numpy arrays, not torch tensors ;)",
"Hi Quentin, not sure how that solves the problem number 1. I am trying to pass on a dataset with a faiss gpu for training to the standard trainer but getting this serialisation error. What is a workaround this? I do not want to remove... |
2,119,085,766 | 6,642 | Differently dataset object saved than it is loaded. | closed | ### Describe the bug
The dataset object that is loaded differs in size from the one that was saved.
### Steps to reproduce the bug
Hi, I save dataset in a following way:
```
dataset = load_dataset("json",
data_files={
"train": os.path.join(input_folder, f"{task_met... | true | 2024-02-05T17:28:57Z | 2024-02-06T09:50:19Z | 2024-02-06T09:50:19Z | MFajcik | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6642 | false | [
"I see now, that I have to use `load_from_disk`, in order to load dataset properly, not `load_dataset`. Why is this behavior split? Why do we need both, `load_dataset` and `load_from_disk`?\r\n\r\nUnless answered, I believe this might be helpful for other hf datasets newbies.\r\n\r\nAnyway, made a `load_dataset` co... |
2,116,963,132 | 6,641 | unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte | closed | ### Describe the bug
unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
### Steps to reproduce the bug
```
import sys
sys.getdefaultencoding()
'utf-8'
from datasets import load_dataset
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test datase... | true | 2024-02-04T08:49:31Z | 2024-02-06T09:26:07Z | 2024-02-06T09:11:45Z | Hughhuh | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6641 | false | [
"Hi @Hughhuh. \r\n\r\nI have formatted the issue because it was not easily readable. Additionally, the environment info is incomplete: it seems you did not run the proposed CLI command `datasets-cli env` and essential information is missing: version of `datasets`, version of `pyarrow`,...\r\n\r\nWith the informatio... |
2,115,864,531 | 6,640 | Sign Language Support | open | ### Feature request
Currently, there are only several Sign Language labels; I would like to propose adding all the Signed Languages as new labels, as described in this ISO standard: https://www.evertype.com/standards/iso639/sign-language.html
### Motivation
Datasets currently only have labels for several signe... | true | 2024-02-02T21:54:51Z | 2024-02-02T21:54:51Z | null | Merterm | NONE | null | null | 0 | 2 | 2 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6640 | false | [] |
2,114,620,200 | 6,639 | Run download_and_prepare if missing splits | open | A first step towards https://github.com/huggingface/datasets/issues/6529 | true | 2024-02-02T10:36:49Z | 2024-02-06T16:54:22Z | null | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6639 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6639 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,113,329,257 | 6,638 | Cannot download wmt16 dataset | closed | ### Describe the bug
As of this morning (PST) 2/1/2024, the wmt16 dataset is missing from opus; could you suggest an alternative?
```
Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Tra... | true | 2024-02-01T19:41:42Z | 2024-02-01T20:07:29Z | 2024-02-01T20:07:29Z | vidyasiv | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6638 | false | [
"Looks like it works with latest datasets repository\r\n```\r\n- `datasets` version: 2.16.2.dev0\r\n- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 2.0.1\r\n- `fsspec` version: 2023.10.0\r\... |
2,113,025,975 | 6,637 | 'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets | open | ### Describe the bug
If you:
1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset
2. Set the output format to torch tensors with .with_format('torch')
Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch... | true | 2024-02-01T17:16:54Z | 2024-02-05T10:43:47Z | null | tobycrisford | NONE | null | null | 1 | 4 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6637 | false | [
"The \"torch\" formatting is usually fast because we do zero-copy conversion from the Arrow data on your disk to Torch tensors. However IterableDataset shuffling seems to do data copies that slow down the pipeline, and it shuffles python objects instead of Arrow data.\r\n\r\nTo fix this we need to implement `Buffer... |
2,110,781,097 | 6,636 | Faster column validation and reordering | closed | I work with bioinformatics data and often these tables have thousands and even tens of thousands of features. These tables are also accompanied by metadata that I do not want to pass in the model. When I perform `set_format('pt', columns=large_column_list)` , it can take several minutes before it finishes. The culprit ... | true | 2024-01-31T19:08:28Z | 2024-02-07T19:39:00Z | 2024-02-06T23:03:38Z | psmyth94 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6636 | 2024-02-06T23:03:38Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6636 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6636). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks @mariosasko, I made the changes. However, I did some tests with `map` and I stil... |
2,110,659,519 | 6,635 | Fix missing info when loading some datasets from Parquet export | closed | Fix getting the info for script-based datasets with Parquet export with a single config not named "default".
E.g.
```python
from datasets import load_dataset_builder
b = load_dataset_builder("bookcorpus")
print(b.info.features)
# should print {'text': Value(dtype='string', id=None)}
```
I fixed this by ... | true | 2024-01-31T17:55:21Z | 2024-02-07T16:48:55Z | 2024-02-07T16:41:04Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6635 | 2024-02-07T16:41:04Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6635 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6635). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,110,242,376 | 6,634 | Support data_dir parameter in push_to_hub | closed | Support `data_dir` parameter in `push_to_hub`.
This allows users to organize the data files according to their specific needs. For example, "wikimedia/wikipedia" files could be organized by year and/or date, e.g. "2024/20240101/20240101.en". | true | 2024-01-31T14:37:36Z | 2024-02-05T10:32:49Z | 2024-02-05T10:26:40Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6634 | 2024-02-05T10:26:40Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6634 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@huggingface/datasets, feel free to review this PR so that it can be included in the ne... |
2,110,124,475 | 6,633 | dataset viewer requires no-script | closed | true | 2024-01-31T13:41:54Z | 2024-01-31T14:05:04Z | 2024-01-31T13:59:01Z | severo | COLLABORATOR | https://github.com/huggingface/datasets/pull/6633 | 2024-01-31T13:59:01Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6633 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6633). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,108,541,678 | 6,632 | Fix reload cache with data dir | closed | The cache used to only check for the latest cache directory with a given config_name, but it was wrong (e.g. `default-data_dir=data%2Ffortran-data_dir=data%2Ffortran` instead of `default-data_dir=data%2Ffortran`)
I fixed this by not passing the `config_kwargs` to the parent Builder `__init__`, and passing the config... | true | 2024-01-30T18:52:23Z | 2024-02-06T17:27:35Z | 2024-02-06T17:21:24Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6632 | 2024-02-06T17:21:24Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6632 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6632). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,107,802,473 | 6,631 | Fix filelock: use current umask for filelock >= 3.10 | closed | reported in https://github.com/huggingface/evaluate/issues/542
cc @stas00 @williamberrios
close https://github.com/huggingface/datasets/issues/6589 | true | 2024-01-30T12:56:01Z | 2024-01-30T15:34:49Z | 2024-01-30T15:28:37Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6631 | 2024-01-30T15:28:37Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6631 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6631). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,106,478,275 | 6,630 | Bump max range of dill to 0.3.8 | closed | Release on Jan 27, 2024: https://pypi.org/project/dill/0.3.8/#history | true | 2024-01-29T21:35:55Z | 2024-01-30T16:19:45Z | 2024-01-30T15:12:25Z | ringohoffman | NONE | https://github.com/huggingface/datasets/pull/6630 | 2024-01-30T15:12:25Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6630 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6630). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hmm these errors look pretty weird... can they be retried?",
"Hi, thanks for working ... |
2,105,774,482 | 6,629 | Support push_to_hub without org/user to default to logged-in user | closed | This behavior is aligned with:
- the behavior of `datasets` before merging #6519
- the behavior described in the corresponding docstring
- the behavior of `huggingface_hub.create_repo`
Revert "Support push_to_hub canonical datasets (#6519)"
- This reverts commit a887ee78835573f5d80f9e414e8443b4caff3541.
Fix... | true | 2024-01-29T15:36:52Z | 2024-02-05T12:35:43Z | 2024-02-05T12:29:36Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6629 | 2024-02-05T12:29:36Z | 3 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6629 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6629). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@huggingface/datasets, feel free to review this PR so that it can be included in the ne... |
2,105,760,502 | 6,628 | Make CLI test support multi-processing | closed | Support passing `--num_proc` to CLI test.
This was really useful recently to run the command on `pubmed`: https://huggingface.co/datasets/pubmed/discussions/11 | true | 2024-01-29T15:30:09Z | 2024-02-05T10:29:20Z | 2024-02-05T10:23:13Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6628 | 2024-02-05T10:23:13Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6628 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6628). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@huggingface/datasets, feel free to review this PR so that it can be included in the ne... |
2,105,735,816 | 6,627 | Disable `tqdm` bars in non-interactive environments | closed | Replace `disable=False` with `disable=None` in the `tqdm` bars to disable them in non-interactive environments (by default).
For more info, see a [similar PR](https://github.com/huggingface/huggingface_hub/pull/2000) in `huggingface_hub`. | true | 2024-01-29T15:18:21Z | 2024-01-29T15:47:34Z | 2024-01-29T15:41:32Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6627 | 2024-01-29T15:41:32Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6627 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6627). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
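The `disable=None` convention referenced in the PR row above can be sketched in plain Python. This is a minimal stand-in with no `tqdm` dependency — the helper name `should_disable_bar` is hypothetical — illustrating how `tqdm` interprets `disable=None` as "disable on non-TTY streams":

```python
import io
import sys

def should_disable_bar(disable, stream=None):
    """Mimic tqdm's `disable=None` convention: None means
    'disable when the output stream is not an interactive TTY'."""
    if disable is not None:
        return bool(disable)  # an explicit True/False always wins
    stream = stream if stream is not None else sys.stderr
    # Non-interactive environments (pipes, CI logs) are not TTYs.
    return not (hasattr(stream, "isatty") and stream.isatty())

# A StringIO behaves like a redirected, non-interactive stream.
print(should_disable_bar(None, io.StringIO()))   # True -> bar disabled
print(should_disable_bar(False, io.StringIO()))  # False -> explicitly enabled
```

This is why switching `disable=False` to `disable=None` silences the bars in CI logs while keeping them in a terminal.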
2,105,482,522 | 6,626 | Raise error on bad split name | closed | e.g. dashes '-' are not allowed in split names
This should add an error message on datasets with unsupported split names like https://huggingface.co/datasets/open-source-metrics/test
cc @AndreaFrancis | true | 2024-01-29T13:17:41Z | 2024-01-29T15:18:25Z | 2024-01-29T15:12:18Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6626 | 2024-01-29T15:12:18Z | 2 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6626 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6626). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,103,950,718 | 6,624 | How to download the laion-coco dataset | closed | The laion coco dataset is not available now. How to download it
https://huggingface.co/datasets/laion/laion-coco | true | 2024-01-28T03:56:05Z | 2024-02-06T09:43:31Z | 2024-02-06T09:43:31Z | vanpersie32 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6624 | false | [
"Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it."
] |
2,103,870,123 | 6,623 | streaming datasets doesn't work properly with multi-node | open | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | true | 2024-01-27T23:46:13Z | 2024-10-16T00:55:19Z | null | rohitgr7 | NONE | null | null | 23 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6623 | false | [
"@mariosasko, @lhoestq, @albertvillanova\r\nhey guys! can anyone help? or can you guys suggest who can help with this?",
"Hi ! \r\n\r\n1. When the dataset is running of of examples, the last batches received by the GPU can be incomplete or empty/missing. We haven't implemented yet a way to ignore the last batch. ... |
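The scenario in the issue above (5 samples, 2 nodes) can be sketched with a stdlib-only strided split. This is a simplified illustration of the idea behind `datasets.distributed.split_dataset_by_node` — the real function also handles shard assignment and iterable datasets — where node `rank` keeps every `world_size`-th example:

```python
from itertools import islice

def split_by_node(examples, rank, world_size):
    """Strided split: node `rank` keeps examples at positions
    rank, rank + world_size, rank + 2 * world_size, ...
    Sketch of the behavior when shards don't divide evenly across nodes."""
    return list(islice(examples, rank, None, world_size))

samples = [1, 2, 3, 4, 5]
print(split_by_node(samples, rank=0, world_size=2))  # [1, 3, 5]
print(split_by_node(samples, rank=1, world_size=2))  # [2, 4]
```

Note the nodes end up with unequal counts (3 vs. 2); that imbalance on the last batch is exactly the incomplete-batch behavior discussed in the thread.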
2,103,780,697 | 6,622 | multi-GPU map does not work | closed | ### Describe the bug
Here is the code for single-GPU processing: https://pastebin.com/bfmEeK2y
Here is the code for multi-GPU processing: https://pastebin.com/gQ7i5AQy
Here is the video showing that the multi-GPU mapping does not work as expected (there are so many things wrong here, it's better to watch the 3-min... | true | 2024-01-27T20:06:08Z | 2024-02-08T11:18:21Z | 2024-02-08T11:18:21Z | kopyl | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6622 | false | [
"This should now be fixed by https://github.com/huggingface/datasets/pull/6550 and updated with https://github.com/huggingface/datasets/pull/6646\r\n\r\nFeel free to re-open if you're still having issues :)"
] |
2,103,675,294 | 6,621 | deleted | closed | ... | true | 2024-01-27T16:59:58Z | 2024-01-27T17:14:43Z | 2024-01-27T17:14:43Z | kopyl | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6621 | false | [] |
2,103,110,536 | 6,620 | wiki_dpr.py error (ID mismatch between lines {id} and vector {vec_id} | closed | ### Describe the bug
I'm trying to run a rag example, and the dataset is wiki_dpr.
wiki_dpr download and extracting have been completed successfully.
However, at the generating train split stage, an error from wiki_dpr.py keeps popping up.
Especially in "_generate_examples" :
1. The following error occurs in the... | true | 2024-01-27T01:00:09Z | 2024-02-06T09:40:19Z | 2024-02-06T09:40:19Z | kiehls90 | NONE | null | null | 1 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6620 | false | [
"Thanks for reporting, @kiehls90.\r\n\r\nAs this seems an issue with the specific \"wiki_dpr\" dataset, I am transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/wiki_dpr/discussions/13"
] |
2,102,407,478 | 6,619 | Migrate from `setup.cfg` to `pyproject.toml` | closed | Based on https://github.com/huggingface/huggingface_hub/pull/1971 in `hfh` | true | 2024-01-26T15:27:10Z | 2024-01-26T15:53:40Z | 2024-01-26T15:47:32Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6619 | 2024-01-26T15:47:32Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6619 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6619). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,101,868,198 | 6,618 | While importing load_dataset from datasets | closed | ### Describe the bug
cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' — this is the error I received
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
No errors
### Environment info
python 3.11.5 | true | 2024-01-26T09:21:57Z | 2024-07-23T09:31:07Z | 2024-02-06T09:25:54Z | suprith-hub | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6618 | false | [
"Hi! Can you please share the error's stack trace so we can see where it comes from?",
"We cannot reproduce the issue and we do not have enough information: environment info (need to run `datasets-cli env`), stack trace,...\r\n\r\nI am closing the issue. Feel free to reopen it (with additional information) if the... |
2,100,459,449 | 6,617 | Fix CI: pyarrow 15, pandas 2.2 and sqlachemy | closed | this should fix the CI failures on `main`
close https://github.com/huggingface/datasets/issues/5477 | true | 2024-01-25T13:57:41Z | 2024-01-26T14:56:46Z | 2024-01-26T14:50:44Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6617 | 2024-01-26T14:50:44Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6617 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6617). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,100,125,709 | 6,616 | Use schema metadata only if it matches features | closed | e.g. if we use `map` in arrow format and transform the table, the returned table might have new columns but the metadata might be wrong | true | 2024-01-25T11:01:14Z | 2024-01-26T16:25:24Z | 2024-01-26T16:19:12Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6616 | 2024-01-26T16:19:12Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6616 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6616). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,098,951,409 | 6,615 | ... | closed | ... | true | 2024-01-24T19:37:03Z | 2024-01-24T19:42:30Z | 2024-01-24T19:40:11Z | ftkeys | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6615 | false | [
"Sorry I posted in the wrong repo, please delete.. thanks!"
] |
2,098,884,520 | 6,614 | `datasets/downloads` cleanup tool | open | ### Feature request
Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files
e.g. I discovered having millions of files under `datasets/downloads` cache, I had to do:
```
sudo find /data/huggingface/... | true | 2024-01-24T18:52:10Z | 2024-01-24T18:55:09Z | null | stas00 | CONTRIBUTOR | null | null | 0 | 2 | 2 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6614 | false | [] |
2,098,078,210 | 6,612 | cnn_dailymail repeats itself | closed | ### Describe the bug
When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be.
Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check length of train split it says 861339.
Also I che... | true | 2024-01-24T11:38:25Z | 2024-02-01T08:14:50Z | 2024-02-01T08:14:50Z | KeremZaman | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6612 | false | [
"Hi ! We recently updated `cnn_dailymail` and now `datasets>=2.14` is needed to load it.\r\n\r\nYou can update `datasets` with\r\n\r\n```\r\npip install -U datasets\r\n```"
] |
2,096,004,858 | 6,611 | `load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError` | open | ### Describe the bug
When loading a large dataset (>1000GB) from S3 I run into the following error:
```
Traceback (most recent call last):
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper
return await func(*args, **kwargs)
File "/home/alp/.local/lib/python3.... | true | 2024-01-23T12:37:57Z | 2024-01-23T12:37:57Z | null | zotroneneis | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6611 | false | [] |
2,095,643,711 | 6,610 | cast_column to Sequence(subfeatures_dict) has err | closed | ### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = ais_dataset["train"]
def add_class(example):
... | true | 2024-01-23T09:32:32Z | 2024-01-25T02:15:23Z | 2024-01-25T02:15:23Z | neiblegy | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6610 | false | [
"Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:\r\n```python\r\nais_dataset = ais_dataset.cast_column(\"my_labeled_bbox\", {\"bbox\": Sequence(Value(dtype=\"int64\")), \"label\": ClassLabel(names=[\"cat\", \"dog\"])})\r\n```",
"> Hi! You are passing the wrong feature type to ... |
2,095,085,650 | 6,609 | Wrong path for cache directory in offline mode | closed | ### Describe the bug
Dear huggingfacers,
I'm trying to use a subset of the-stack dataset. When I run the command the first time
```
dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )
```
It downloads the files and caches them normally.
Nevertheless, ... | true | 2024-01-23T01:47:19Z | 2024-02-06T17:21:25Z | 2024-02-06T17:21:25Z | je-santos | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6609 | false | [
"+1",
"same error in 2.16.1",
"@kongjiellx any luck with the issue?",
"I opened https://github.com/huggingface/datasets/pull/6632 to fix this issue. Once it's merged we'll do a new release of `datasets`",
"Thanks @lhoestq !"
] |
2,094,153,292 | 6,608 | Add `with_rank` param to `Dataset.filter` | closed | Fix #6564 | true | 2024-01-22T15:19:16Z | 2024-01-29T16:43:11Z | 2024-01-29T16:36:53Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6608 | 2024-01-29T16:36:53Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6608 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6608). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
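The `with_rank` behavior added by the PR above can be illustrated with a stdlib sketch. This is a sequential stand-in, not the `datasets` API — `filter_with_rank` is a hypothetical helper — showing the core idea: the predicate receives the worker rank as an extra argument, e.g. so each process can pin work to its own GPU:

```python
def filter_with_rank(examples, predicate, num_proc):
    """Split `examples` into `num_proc` contiguous chunks and call
    `predicate(example, rank)` so the callable knows which worker
    (e.g. which GPU) it runs on. Sequential stand-in for real workers."""
    chunk = -(-len(examples) // num_proc)  # ceiling division
    kept = []
    for rank in range(num_proc):
        for ex in examples[rank * chunk:(rank + 1) * chunk]:
            if predicate(ex, rank):
                kept.append(ex)
    return kept

# Keep even numbers; record which rank processed each example.
seen = []
result = filter_with_rank(
    list(range(6)),
    lambda ex, rank: seen.append((ex, rank)) or ex % 2 == 0,
    num_proc=2,
)
print(result)  # [0, 2, 4]
print(seen)    # ranks 0 and 1 each processed 3 examples
```

With the real API, the same pattern would be a predicate taking `(example, rank)` passed to `Dataset.filter(..., with_rank=True, num_proc=...)`, mirroring what `Dataset.map` already supported.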
2,091,766,063 | 6,607 | Update features.py to avoid bfloat16 unsupported error | closed | Fixes https://github.com/huggingface/datasets/issues/6566
Let me know if there's any tests I need to clear. | true | 2024-01-20T00:39:44Z | 2024-05-17T09:46:29Z | 2024-05-17T09:40:13Z | skaulintel | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6607 | 2024-05-17T09:40:13Z | 3 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6607 | true | [
"I think not all torch tensors should be converted to float, what if it's a tensor of integers for example ?\r\nMaybe you can check for the tensor dtype before converting",
"@lhoestq Please could this be merged? 🙏",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show up... |
2,091,088,785 | 6,606 | Dedicated RNG object for fingerprinting | closed | Closes https://github.com/huggingface/datasets/issues/6604, closes https://github.com/huggingface/datasets/issues/2775 | true | 2024-01-19T18:34:47Z | 2024-01-26T15:11:38Z | 2024-01-26T15:05:34Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6606 | 2024-01-26T15:05:34Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6606 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6606). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,090,188,376 | 6,605 | ELI5 no longer available, but referenced in example code | closed | Here, an example code is given:
https://huggingface.co/docs/transformers/tasks/language_modeling
This code + article references the ELI5 dataset.
ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5
"Defunct: Dataset "eli5" is defunct and no longer accessible due to u... | true | 2024-01-19T10:21:52Z | 2024-02-01T17:58:23Z | 2024-02-01T17:58:22Z | drdsgvo | NONE | null | null | 1 | 3 | 3 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6605 | false | [
"Addressed in https://github.com/huggingface/transformers/pull/28715."
] |