| id | number | title | state | body | is_pull_request | created_at | updated_at | closed_at | user_login | author_association | pr_url | pr_merged_at | comments_count | reactions_total | reactions_plus1 | reactions_heart | draft | locked | labels | html_url | is_pr_url | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,310,107,326 | 6,914 | Preserve JSON column order and support list of strings field | closed | Preserve column order when loading from a JSON file with a list of dict (or with a field containing a list of dicts).
Additionally, support JSON file with a list of strings field.
Fix #6913. | true | 2024-05-22T09:58:54Z | 2024-05-29T13:18:47Z | 2024-05-29T13:12:23Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6914 | 2024-05-29T13:12:23Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6914 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6914). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,309,605,889 | 6,913 | Column order is nondeterministic when loading from JSON | closed | As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects.
For example, when loading a JSON files with a list of objects, each with the following ordered keys:
- [ID, Language, Topic],
the resulting dataset may have column... | true | 2024-05-22T05:30:14Z | 2024-05-29T13:12:24Z | 2024-05-29T13:12:24Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6913 | false | [] |
2,309,365,961 | 6,912 | Add MedImg for streaming | open | ### Feature request
Host the MedImg dataset (similar to Imagenet but for biomedical images).
### Motivation
There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community.
### Your con... | true | 2024-05-22T00:55:30Z | 2024-09-05T16:53:54Z | null | lhallee | NONE | null | null | 8 | 0 | 0 | 0 | null | false | [
"dataset request"
] | https://github.com/huggingface/datasets/issues/6912 | false | [
"@mariosasko, @lhoestq, @albertvillanova\r\nHello! Can anyone help? or can you guys suggest who can help with this?",
"Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n\r\nThen your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamab... |
2,308,152,711 | 6,911 | Remove dead code for non-dict data_files from packaged modules | closed | Remove dead code for non-dict data_files from packaged modules.
Since the merge of this PR:
- #2986
the builders' variable self.config.data_files is always a dict, which makes the condition on (str, list, tuple) dead code. | true | 2024-05-21T12:10:24Z | 2024-05-23T08:05:58Z | 2024-05-23T07:59:57Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6911 | 2024-05-23T07:59:57Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6911 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6911). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,307,570,084 | 6,910 | Fix wrong type hints in data_files | closed | Fix wrong type hints in data_files introduced in:
- #6493 | true | 2024-05-21T07:41:09Z | 2024-05-23T06:04:05Z | 2024-05-23T05:58:05Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6910 | 2024-05-23T05:58:05Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6910 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6910). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,307,508,120 | 6,909 | Update requests >=2.32.1 to fix vulnerability | closed | Update requests >=2.32.1 to fix vulnerability. | true | 2024-05-21T07:11:20Z | 2024-05-21T07:45:58Z | 2024-05-21T07:38:25Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6909 | 2024-05-21T07:38:25Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6909 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6909). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,304,958,116 | 6,908 | Fail to load "stas/c4-en-10k" dataset since 2.16 version | closed | ### Describe the bug
When update datasets library to version 2.16+ ( I test it on 2.16, 2.19.0 and 2.19.1), using the following code to load stas/c4-en-10k dataset
```python
from datasets import load_dataset, Dataset
dataset = load_dataset('stas/c4-en-10k')
```
and then it raise UnicodeDecodeError like
... | true | 2024-05-20T02:43:59Z | 2024-05-24T10:58:09Z | 2024-05-24T10:58:09Z | guch8017 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6908 | false | [
"I am not able to reproduce the error with datasets 2.19.1:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", streaming=True); item = next(iter(ds[\"train\"])); item\r\nOut[1]: {'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at makin... |
2,303,855,833 | 6,907 | Support the deserialization of json lines files comprised of lists | open | ### Feature request
I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a v... | true | 2024-05-18T05:07:23Z | 2024-05-18T08:53:28Z | null | umarbutler | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6907 | false | [
"Update: I ended up deciding to go back to use lines of dictionaries instead of arrays, not because of this issue as my users would be capable of downloading my corpus without `datasets`, but the speed and storage savings are not currently worth breaking my API and harming the backwards compatibility of each new re... |
2,303,679,119 | 6,906 | irc_disentangle - Issue with splitting data | closed | ### Describe the bug
I am trying to access your database through python using "datasets.load_dataset("irc_disentangle")" and I am getting this error message:
ValueError: Instruction "train" corresponds to no data!
### Steps to reproduce the bug
import datasets
ds = datasets.load_dataset('irc_disentangle')
ds
#... | true | 2024-05-17T23:19:37Z | 2024-07-16T00:21:56Z | 2024-07-08T06:18:08Z | eor51355 | NONE | null | null | 6 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6906 | false | [
"Thank you I will try this out!\r\n\r\nOn Tue, Jun 11, 2024 at 3:55 AM Vincent Lau ***@***.***>\r\nwrote:\r\n\r\n> I add a \"streaming=True\" after the name of the dataset, and it\r\n> works.....hope it can help you\r\n>\r\n> And if you install the version datasets==2.15.0, this bug will not happen.\r\n> I don't kn... |
2,303,098,587 | 6,905 | Extraction protocol for arrow files is not defined | closed | ### Describe the bug
Passing files with `.arrow` extension into data_files argument, at least when `streaming=True` is very slow.
### Steps to reproduce the bug
Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_ut... | true | 2024-05-17T16:01:41Z | 2025-02-06T19:50:22Z | 2025-02-06T19:50:20Z | radulescupetru | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6905 | false | [
"Fixed in https://github.com/huggingface/datasets/pull/7083"
] |
2,302,912,179 | 6,904 | Fix decoding multi part extension | closed | e.g. a field named `url.txt` should be a treated as text
I also included a small fix to support .npz correctly | true | 2024-05-17T14:32:57Z | 2024-05-17T14:52:56Z | 2024-05-17T14:46:54Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6904 | 2024-05-17T14:46:54Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6904 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6904). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"takign the liberty to merge this for the viewer and a new dataset being released",
"<... |
2,300,436,053 | 6,903 | Add the option of saving in parquet instead of arrow | open | ### Feature request
In dataset.save_to_disk('/path/to/save/dataset'),
add the option to save in parquet format
dataset.save_to_disk('/path/to/save/dataset', format="parquet"),
because arrow is not used for Production Big data.... (only parquet)
### Motivation
because arrow is not used for Production Big... | true | 2024-05-16T13:35:51Z | 2025-05-19T12:14:14Z | null | arita37 | NONE | null | null | 18 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6903 | false | [
"I think [`Dataset.to_parquet`](https://huggingface.co/docs/datasets/v1.10.2/package_reference/main_classes.html#datasets.Dataset.to_parquet) is what you're looking for.\r\n\r\nLet me know if I'm wrong ",
"No, it does not save the metadata json.\r\n\r\nWe have to recode all meta json load/save\r\nwith another cus... |
2,300,256,241 | 6,902 | Make CLI convert_to_parquet not raise error if no rights to create script branch | closed | Make CLI convert_to_parquet not raise error if no rights to create "script" branch.
Note that before this PR, the error was not critical because it was raised at the end of the script, once all the rest of the steps were already performed.
Fix #6901.
Bug introduced in datasets-2.19.0 by:
- #6809 | true | 2024-05-16T12:21:27Z | 2024-06-03T04:43:17Z | 2024-05-16T12:51:05Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6902 | 2024-05-16T12:51:04Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6902 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6902). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,300,167,465 | 6,901 | HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos | closed | CLI convert_to_parquet cannot create "script" branch on 3rd party repos.
It can only create it on repos where the user executing the script has write access.
Otherwise, a 403 Forbidden HTTPError is raised:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/ut... | true | 2024-05-16T11:40:22Z | 2024-05-16T12:51:06Z | 2024-05-16T12:51:06Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6901 | false | [] |
2,298,489,733 | 6,900 | [WebDataset] KeyError with user-defined `Features` when a field is missing in an example | closed | reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1
```
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples
example[field_name] = {"path": example["_... | true | 2024-05-15T17:48:34Z | 2024-06-28T09:30:13Z | 2024-06-28T09:30:13Z | lhoestq | MEMBER | null | null | 5 | 2 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6900 | false | [
"@lhoestq How difficult of fix is this?",
"It shouldn't be difficult, I think it's just a matter of adding the missing fields from `self.config.features` in `example` here: before it iterates on image_field_names and audio_field_names. A missing field should have a value set to None\r\n\r\nhttps://github.com/hugg... |
2,298,059,597 | 6,899 | List of dictionary features get standardized | open | ### Describe the bug
Hi, i’m trying to create a HF dataset from a list using Dataset.from_list.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets librar... | true | 2024-05-15T14:11:35Z | 2025-04-01T20:48:03Z | null | sohamparikh | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6899 | false | [
"I think this may be a limitation of the arrow format",
"Dupe of #5950\n"
] |
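The "standardization" reported in this issue comes from Arrow's struct type, which gives every element the union of all keys seen in the column. A stdlib sketch of the observed behavior (not the Arrow implementation itself):

```python
def standardize(dicts):
    """Mimic how Arrow's struct type unifies a list of dicts with
    differing keys: each dict ends up with the union of all keys,
    with None for keys it did not originally contain."""
    keys = []
    for d in dicts:
        for k in d:
            if k not in keys:
                keys.append(k)
    return [{k: d.get(k) for k in keys} for d in dicts]

rows = [{"a": 1}, {"b": 2}]
print(standardize(rows))  # [{'a': 1, 'b': None}, {'a': None, 'b': 2}]
```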
2,294,432,108 | 6,898 | Fix YAML error in README files appearing on GitHub | closed | Fix YAML error in README files appearing on GitHub.
See error message:

Fix #6897. | true | 2024-05-14T05:21:57Z | 2024-05-16T14:36:57Z | 2024-05-16T14:28:16Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6898 | 2024-05-16T14:28:16Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6898 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6898). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"After this PR, the README file looks like:\r\n\r\n
2. Observe a big red error at the top
3. The rest of the ... | true | 2024-05-13T17:33:59Z | 2024-05-16T14:28:17Z | 2024-05-16T14:28:17Z | bghira | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6897 | false | [
"Hello, @bghira.\r\n\r\nThanks for reporting. Please note that the text originating the error is not supposed to be valid YAML: it contains the instructions to generate the actual YAML content, that should replace the instructions comment.\r\n\r\nOn the other hand, I agree that it is not nice to have that YAML erro... |
2,293,176,061 | 6,896 | Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset | open | ### Describe the bug
While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error:
```python
---------------------------------------------------------------------------
NonMatchingSplitsSizesError Traceback (most recent call last)
[<ipyth... | true | 2024-05-13T15:41:57Z | 2025-03-25T01:21:06Z | null | finiteautomata | NONE | null | null | 1 | 2 | 2 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6896 | false | [
"Same issue here\n"
] |
2,292,993,156 | 6,895 | Document that to_json defaults to JSON Lines | closed | Document that `Dataset.to_json` defaults to JSON Lines, by adding explanation in the corresponding docstring.
Fix #6894. | true | 2024-05-13T14:22:34Z | 2024-05-16T14:37:25Z | 2024-05-16T14:31:26Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6895 | 2024-05-16T14:31:26Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6895 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6895). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,292,840,226 | 6,894 | Better document defaults of to_json | closed | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | true | 2024-05-13T13:30:54Z | 2024-05-16T14:31:27Z | 2024-05-16T14:31:27Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"documentation"
] | https://github.com/huggingface/datasets/issues/6894 | false | [] |
2,292,677,439 | 6,893 | Close gzipped files properly | closed | close https://github.com/huggingface/datasets/issues/6877 | true | 2024-05-13T12:24:39Z | 2024-05-13T13:53:17Z | 2024-05-13T13:01:54Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6893 | 2024-05-13T13:01:54Z | 3 | 1 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6893 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6893). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,291,201,347 | 6,892 | Add support for categorical/dictionary types | closed | Arrow has a very useful dictionary/categorical type (https://arrow.apache.org/docs/python/generated/pyarrow.dictionary.html). This data type has significant speed, memory and disk benefits over pa.string() when there are only a few unique text strings in a column.
Unfortunately, huggingface datasets currently does n... | true | 2024-05-12T07:15:08Z | 2024-06-07T15:01:39Z | 2024-06-07T12:20:42Z | EthanSteinberg | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6892 | 2024-06-07T12:20:42Z | 3 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6892 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6892). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,291,118,869 | 6,891 | Unable to load JSON saved using `to_json` | closed | ### Describe the bug
Datasets stored in the JSON format cannot be loaded using `json.load()`
### Steps to reproduce the bug
```
import json
from datasets import load_dataset
dataset = load_dataset("squad")
train_dataset, test_dataset = dataset["train"], dataset["validation"]
test_dataset.to_json("full_dataset... | true | 2024-05-12T01:02:51Z | 2024-05-16T14:32:55Z | 2024-05-12T07:02:02Z | DarshanDeshpande | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6891 | false | [
"Hi @DarshanDeshpande,\r\n\r\nPlease note that the default format of the method `Dataset.to_json` is [JSON-Lines](https://jsonlines.org/): it passes `orient=\"records\", lines=True` to `pandas.DataFrame.to_json`. This format is specially useful for large datasets, since unlike regular JSON files, it does not requir... |
2,288,699,041 | 6,890 | add `with_transform` and/or `set_transform` to IterableDataset | open | ### Feature request
when working with a really large dataset it would save us a lot of time (and compute resources) to use either with_transform or the set_transform from the Dataset class instead of waiting for the entire dataset to map
### Motivation
don't want to wait for a really long dataset to map, this would ... | true | 2024-05-10T01:00:12Z | 2024-05-10T01:00:46Z | null | not-lain | NONE | null | null | 0 | 4 | 4 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6890 | false | [] |
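The behavior requested here (apply a transform only when an example is consumed, instead of materializing a mapped copy) is essentially lazy iteration; note that `IterableDataset.map` is already evaluated lazily. A minimal stdlib sketch of the idea, with a hypothetical helper name:

```python
def with_transform_lazy(iterable, transform):
    """Apply `transform` to each example only at consumption time,
    the on-access behavior the feature request above asks for."""
    for example in iterable:
        yield transform(example)

stream = ({"text": t} for t in ["a", "b"])
upper = with_transform_lazy(stream, lambda ex: {"text": ex["text"].upper()})
print(next(upper))  # {'text': 'A'}  (nothing else has been computed yet)
```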
2,287,720,539 | 6,889 | fix bug #6877 | closed | fix bug #6877 due to maybe f becomes invaild after yield process
the results are below:
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 828/828 [00:01<00:00, 420.41it/s]
Resolving data files: 100%|████████... | true | 2024-05-09T13:38:40Z | 2024-05-13T13:35:32Z | 2024-05-13T13:35:32Z | arthasking123 | NONE | https://github.com/huggingface/datasets/pull/6889 | null | 9 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6889 | true | [
"@loicmagne, @KennethEnevoldsen",
"Can you give more details on why this fix works ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6889). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",... |
2,287,169,676 | 6,888 | Support WebDataset containing file basenames with dots | closed | Support WebDataset containing file basenames with dots.
Fix #6880. | true | 2024-05-09T08:25:30Z | 2024-05-10T13:54:06Z | 2024-05-10T13:54:06Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6888 | null | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6888 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6888). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I think webdataset splits the file name and extension using the first dot no ?\r\n\r\nh... |
2,286,786,396 | 6,887 | FAISS load to None | open | ### Describe the bug
I've used FAISS with Datasets and saved the index.
Then, loading the saved FAISS index raises no error, but the call returns None:
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
### Steps to reproduce the bug
# 1.
```python
ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transf... | true | 2024-05-09T02:43:50Z | 2024-05-16T20:44:23Z | null | brainer3220 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6887 | false | [
"Hello,\r\n\r\nI'm not sure I understand. \r\nThe return value of `ds.load_faiss_index` is None as expected.\r\n\r\nI see that loading an Index on a dataset that doesn't have an `embedding` column doesn't raise an Issue. Is that the issue?\r\n\r\nSo `ds` doesn't have an `embedding` column, but we load an index that... |
2,286,328,984 | 6,886 | load_dataset with data_dir and cache_dir set fail with not supported | open | ### Describe the bug
with python 3.11 I execute:
```py
from transformers import Wav2Vec2Processor, Data2VecAudioModel
import torch
from torch import nn
from datasets import load_dataset, concatenate_datasets
# load demo audio and set processor
dataset_clean = load_dataset("librispeech_asr", "clean", split="... | true | 2024-05-08T19:52:35Z | 2024-05-08T19:58:11Z | null | fah | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6886 | false | [] |
2,285,115,400 | 6,885 | Support jax 0.4.27 in CI tests | closed | Support jax 0.4.27 in CI tests by using jax Array `devices` method instead of `device` (which no longer exists).
Fix #6884. | true | 2024-05-08T09:19:37Z | 2024-05-08T09:43:19Z | 2024-05-08T09:35:16Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6885 | 2024-05-08T09:35:16Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6885 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6885). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,284,839,687 | 6,884 | CI is broken after jax-0.4.27 release: AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device' | closed | After jax-0.4.27 release (https://github.com/google/jax/releases/tag/jax-v0.4.27), our CI is broken with the error:
```Python traceback
AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'?
```
See: https://github.com/huggingface/datasets/actions/runs/8997488... | true | 2024-05-08T07:01:47Z | 2024-05-08T09:35:17Z | 2024-05-08T09:35:17Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6884 | false | [] |
2,284,808,399 | 6,883 | Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset | closed | Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset.
The `PIL.Image.ExifTags` that we use in our code was implemented in Pillow-9.4.0: https://github.com/python-pillow/Pillow/commit/24a5405a9f7ea22f28f9c98b3e407292ea5ee1d3
The bug #6881 was introduced in datasets-2.19.0 by this PR:
- #6739... | true | 2024-05-08T06:43:29Z | 2024-08-28T13:13:57Z | 2024-05-16T14:34:02Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6883 | 2024-05-16T14:34:02Z | 10 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6883 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6883). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Do you think this is worth making a patch release for?\r\nCC: @huggingface/datasets ",
... |
2,284,803,158 | 6,882 | Connection Error When Using By-pass Proxies | open | ### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel, after exporting HTTP_PROXY and HTTPS_PROXY to the port that clash provides🤔, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(M... | true | 2024-05-08T06:40:14Z | 2024-05-17T06:38:30Z | null | MRNOBODY-ZST | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6882 | false | [
"Changing the supplier of the proxy will solve this problem, or you can visit and follow the instructions in https://hf-mirror.com "
] |
2,284,794,009 | 6,881 | AttributeError: module 'PIL.Image' has no attribute 'ExifTags' | closed | When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised:
```Python traceback
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
The error traceback:
```Python traceback
~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self)
1... | true | 2024-05-08T06:33:57Z | 2024-07-18T06:49:30Z | 2024-05-16T14:34:03Z | albertvillanova | MEMBER | null | null | 3 | 3 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6881 | false | [
"@albertvillanova @lhoestq just ran into it and requiring newer pillow isn't a solution as it breaks Pillow-SIMD which is behind Pillow quite a few versions but necessary for training with reasonable throughput. \r\n\r\nA couple things here... \r\n\r\n1. This can be done with a method that isn't an issue for any so... |
2,283,278,337 | 6,880 | Webdataset: KeyError: 'png' on some datasets when streaming | open | reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("tbone5563/tar_images")
Downloading data: 100%
1.41G/1.41G [00:48<00:00, 17.2MB/s]
Downloading data: 100%
619M/619M [00:11<00:00, 57.4MB/s]
Generating train sp... | true | 2024-05-07T13:09:02Z | 2024-05-14T20:34:05Z | null | lhoestq | MEMBER | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6880 | false | [
"The error is caused by malformed basenames of the files within the TARs:\r\n- `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png` becomes `15_Cohen_1-s2` as the grouping `__key__`, and `0-S0929664620300449-gr3_lrg-b.png` as the additional key to be added to the example\r\n- whereas the intended behavior was to use `... |
2,282,968,259 | 6,879 | Batched mapping does not raise an error if values for an existing column are empty | open | ### Describe the bug
Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised.
This is not the case if the... | true | 2024-05-07T11:02:40Z | 2024-05-07T11:02:40Z | null | felix-schneider | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6879 | false | [] |
2,282,879,491 | 6,878 | Create function to convert to parquet | closed | Analogously with `delete_from_hub`, this PR:
- creates the Python function `convert_to_parquet`
- makes the corresponding CLI command use that function.
This way, the functionality can be used both from a terminal and from a Python console.
This PR also implements a test for convert_to_parquet function. | true | 2024-05-07T10:27:07Z | 2024-05-16T14:46:44Z | 2024-05-16T14:38:23Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6878 | 2024-05-16T14:38:22Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6878 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6878). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,282,068,337 | 6,877 | OSError: [Errno 24] Too many open files | closed | ### Describe the bug
I am trying to load the 'default' subset of the following dataset which contains lots of files (828 per split): [https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb](https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb)
When trying to load it using the `load_dataset` function I get... | true | 2024-05-07T01:15:09Z | 2024-06-02T14:22:23Z | 2024-05-13T13:01:55Z | loicmagne | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6877 | false | [
"ulimit -n 8192 can solve this problem",
"> ulimit -n 8192 can solve this problem\r\n\r\nWould there be a systematic way to do this ? The data loading is part of the [MTEB](https://github.com/embeddings-benchmark/mteb) library",
"> > ulimit -n 8192 can solve this problem\r\n> \r\n> Would there be a systematic w... |
2,281,450,743 | 6,876 | Unpin hfh | closed | Needed to use those in dataset-viewer:
- dev version of hfh https://github.com/huggingface/dataset-viewer/pull/2781: don't spam the hub with /paths-info requests
- dev version of datasets at https://github.com/huggingface/datasets/pull/6875: don't write too big logs in the viewer
close https://github.com/hugging... | true | 2024-05-06T18:10:49Z | 2024-05-27T10:20:42Z | 2024-05-27T10:14:40Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6876 | 2024-05-27T10:14:40Z | 12 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6876 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6876). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"transformers 4.40.2 was release yesterday but not sure if it contains the fix",
"@lho... |
2,281,428,826 | 6,875 | Shorten long logs | closed | Some datasets may have unexpectedly long features/types (e.g. if the files are not formatted correctly).
In that case we should still be able to log something readable | true | 2024-05-06T17:57:07Z | 2024-05-07T12:31:46Z | 2024-05-07T12:25:45Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6875 | 2024-05-07T12:25:45Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6875 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6875). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,280,717,233 | 6,874 | Use pandas ujson in JSON loader to improve performance | closed | Use pandas ujson in JSON loader to improve performance.
Note that `datasets` has `pandas` as a required dependency, and `pandas` includes `ujson` in `pd.io.json.ujson_loads`.
Fix #6867.
CC: @natolambert | true | 2024-05-06T12:01:27Z | 2024-05-17T16:28:29Z | 2024-05-17T16:22:27Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6874 | 2024-05-17T16:22:27Z | 4 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6874 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6874). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Before pandas-2.2.0, the function `ujson_loads` was named `loads`: https://github.com/p... |
2,280,463,182 | 6,873 | Set dev version | closed | true | 2024-05-06T09:43:18Z | 2024-05-06T10:03:19Z | 2024-05-06T09:57:12Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6873 | 2024-05-06T09:57:12Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6873 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6873). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,280,438,432 | 6,872 | Release 2.19.1 | closed | true | 2024-05-06T09:29:15Z | 2024-05-06T09:35:33Z | 2024-05-06T09:35:32Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6872 | 2024-05-06T09:35:32Z | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6872 | true | [] | |
2,280,102,869 | 6,871 | Fix download for dict of dicts of URLs | closed | Fix download for a dict of dicts of URLs when batched (default), introduced by:
- #6794
This PR also implements regression tests.
Fix #6869, fix #6850. | true | 2024-05-06T06:06:52Z | 2024-05-06T09:32:03Z | 2024-05-06T09:25:52Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6871 | 2024-05-06T09:25:52Z | 4 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6871 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6871). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Once merged, I think a patch release is needed.",
"Once the CI is green, I am merging... |
2,280,084,008 | 6,870 | Update tqdm >= 4.66.3 to fix vulnerability | closed | Update tqdm >= 4.66.3 to fix vulnerability, | true | 2024-05-06T05:49:36Z | 2024-05-06T06:08:06Z | 2024-05-06T06:02:00Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6870 | 2024-05-06T06:02:00Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6870 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6870). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,280,048,297 | 6,869 | Download is broken for dict of dicts: FileNotFoundError | closed | It seems there is a bug when downloading a dict of dicts of URLs introduced by:
- #6794
## Steps to reproduce the bug:
```python
from datasets import DownloadManager
dl_manager = DownloadManager()
paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-0000... | true | 2024-05-06T05:13:36Z | 2024-05-06T09:25:53Z | 2024-05-06T09:25:53Z | albertvillanova | MEMBER | null | null | 0 | 1 | 1 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6869 | false | [] |
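The failing input in this issue is a nested mapping of URLs. A rough illustration of the kind of recursive traversal such a download must handle (this is not `datasets`' actual implementation; `fake_download` and the URL below are illustrative stand-ins for the real download call):

```python
# Map a function over arbitrarily nested dicts/lists of URLs.
def map_over_nested(fn, data):
    if isinstance(data, dict):
        return {k: map_over_nested(fn, v) for k, v in data.items()}
    if isinstance(data, list):
        return [map_over_nested(fn, v) for v in data]
    return fn(data)  # leaf: a single URL

def fake_download(url: str) -> str:
    # Stand-in for the real download call: pretend-cache by basename.
    return "/cache/" + url.rsplit("/", 1)[-1]

paths = map_over_nested(
    fake_download,
    {"train": {"frr": "hf://datasets/example/train-00000.parquet"}},
)
print(paths)
```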
2,279,385,159 | 6,868 | datasets.BuilderConfig does not work. | closed | ### Describe the bug
I custom a BuilderConfig and GeneratorBasedBuilder.
Here is the code for BuilderConfig
```
class UIEConfig(datasets.BuilderConfig):
def __init__(
self,
*args,
data_dir=None,
instruction_file=None,
instruction_strategy=None,... | true | 2024-05-05T08:08:55Z | 2024-05-05T12:15:02Z | 2024-05-05T12:15:01Z | jdm4pku | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6868 | false | [
"I guess the issue is caused by the customization of BuilderConfig that you use from the repo [https://github.com/BeyonderXX/InstructUIE](https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py). You should report to them.\r\n\r\nI see you already opened an issue in their repo:\r\n- https://github.... |
2,279,059,787 | 6,867 | Improve performance of JSON loader | closed | As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance.
The cause is that we use the `json` Python standard library instead of other faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714
> There are benchmarks that... | true | 2024-05-04T15:04:16Z | 2024-05-17T16:22:28Z | 2024-05-17T16:22:28Z | albertvillanova | MEMBER | null | null | 5 | 3 | 3 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6867 | false | [
"Thanks! Feel free to ping me for examples. May not respond immediately because we're all busy but would like to help.",
"Hi @natolambert, could you please give some examples of JSON files to benchmark?\r\n\r\nPlease note that this JSON file (https://huggingface.co/datasets/allenai/reward-bench-results/blob/main/... |
2,278,736,221 | 6,866 | DataFilesNotFoundError for datasets in the open-llm-leaderboard | closed | ### Describe the bug
When trying to get config names or load any dataset within the open-llm-leaderboard ecosystem (`open-llm-leaderboard/details_`) I receive the DataFilesNotFoundError. For the last month or so I've been loading datasets from the leaderboard almost everyday; yesterday was the first time I started see... | true | 2024-05-04T04:59:00Z | 2024-05-14T08:09:56Z | 2024-05-14T08:09:56Z | jerome-white | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6866 | false | [
"Potentially related:\r\n* #6864\r\n* #6850\r\n* #6848\r\n* #6819",
"Hi @jerome-white, thnaks for reporting.\r\n\r\nHowever, I cannot reproduce your issue:\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n\r\n>>> get_dataset_config_names(\"open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.... |
2,277,304,832 | 6,865 | Example on Semantic segmentation contains bug | open | ### Describe the bug
https://huggingface.co/docs/datasets/en/semantic_segmentation shows wrong example with torchvision transforms.
Specifically, as one can see in screenshot below, the object boundaries have weird colors.
<img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59... | true | 2024-05-03T09:40:12Z | 2024-05-03T09:40:12Z | null | ducha-aiki | NONE | null | null | 0 | 2 | 2 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6865 | false | [] |
2,276,986,981 | 6,864 | Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub | closed | ### Describe the bug
The dataset `rewardsignal/reddit_writing_prompts` is missing in Huggingface Hub.
### Steps to reproduce the bug
```
from datasets import load_dataset
prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]... | true | 2024-05-03T06:03:30Z | 2024-05-06T06:36:42Z | 2024-05-06T06:36:41Z | vinodrajendran001 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6864 | false | [
"Hi @vinodrajendran001, thanks for reporting.\r\n\r\nIndeed the dataset no longer exists on the Hub. The URL https://huggingface.co/datasets/rewardsignal/reddit_writing_prompts gives 404 Not Found error."
] |
2,276,977,534 | 6,863 | Revert temporary pin huggingface-hub < 0.23.0 | closed | Revert temporary pin huggingface-hub < 0.23.0 introduced by
- #6861
once the following issue is fixed and released:
- huggingface/transformers#30618 | true | 2024-05-03T05:53:55Z | 2024-05-27T10:14:41Z | 2024-05-27T10:14:41Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6863 | false | [] |
2,276,763,745 | 6,862 | Fix load_dataset for data_files with protocols other than HF | closed | Fixes huggingface/datasets/issues/6598
I've added a new test case and a solution. Before applying the solution the test case was failing with the same error described in the linked issue.
MRE:
```
pip install "datasets[s3]"
python -c "from datasets import load_dataset; load_dataset('csv', data_files={'train': ... | true | 2024-05-03T01:43:47Z | 2024-07-23T14:37:08Z | 2024-07-23T14:30:09Z | matstrand | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6862 | 2024-07-23T14:30:09Z | 2 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6862 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6862). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,275,988,990 | 6,861 | Fix CI by temporarily pinning huggingface-hub < 0.23.0 | closed | As a hotfix for CI, temporarily pin `huggingface-hub` upper version
Fix #6860.
Revert once root cause is fixed, see:
- https://github.com/huggingface/transformers/issues/30618 | true | 2024-05-02T16:40:04Z | 2024-05-02T16:59:42Z | 2024-05-02T16:53:42Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6861 | 2024-05-02T16:53:42Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6861 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6861). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,275,537,137 | 6,860 | CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download" | closed | CI fails after latest huggingface_hub-0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume... | true | 2024-05-02T13:24:17Z | 2024-05-02T16:53:45Z | 2024-05-02T16:53:45Z | albertvillanova | MEMBER | null | null | 3 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6860 | false | [
"I think this needs to be fixed on transformers.\r\n\r\nCC: @Wauplin ",
"See:\r\n- https://github.com/huggingface/transformers/issues/30618",
"Opened https://github.com/huggingface/transformers/pull/30620"
] |
2,274,996,774 | 6,859 | Support folder-based datasets with large metadata.jsonl | open | I tried creating an `imagefolder` dataset with a 714MB `metadata.jsonl` but got the error below. This pull request fixes the problem by increasing the block size like the message suggests.
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="data-for-upload")
Traceback (mos... | true | 2024-05-02T09:07:26Z | 2024-05-02T09:07:26Z | null | gbenson | NONE | https://github.com/huggingface/datasets/pull/6859 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6859 | true | [] |
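The fix this PR describes is raising pyarrow's JSON read block size so a single long `metadata.jsonl` line still fits in one block. A hedged sketch of that idea (32 MiB is an example value, not pyarrow's default; real code would grow it until the "straddling object" error disappears), guarded so it degrades gracefully if pyarrow is not installed:

```python
# Write a one-line metadata.jsonl, then read it with an enlarged block size.
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "metadata.jsonl")
with open(path, "w") as f:
    f.write(json.dumps({"file_name": "img0.png", "caption": "x" * 100}) + "\n")

try:
    import pyarrow.json as paj
    opts = paj.ReadOptions(block_size=32 << 20)  # 32 MiB per block
    n_rows = paj.read_json(path, read_options=opts).num_rows
except ImportError:
    with open(path) as f:
        n_rows = sum(1 for _ in f)  # stdlib fallback: count jsonl lines

print(n_rows)
```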
2,274,917,185 | 6,858 | Segmentation fault | closed | ### Describe the bug
Using various versions of datasets, I'm no longer able to load that dataset without a segmentation fault.
Several other files are also concerned.
### Steps to reproduce the bug
# Create a new venv
python3 -m venv venv_test
source venv_test/bin/activate
# Install the latest versio... | true | 2024-05-02T08:28:49Z | 2024-05-03T08:43:21Z | 2024-05-03T08:42:36Z | scampion | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6858 | false | [
"I downloaded the jsonl file and extract it manually. \r\nThe issue seems to be related to pyarrow.json \r\n\r\n\r\n\r\npython3 -q -X faulthandler -c \"from datasets import load_dataset; load_dataset('json', data_files='/Users/scampion/Downloads/1998-09.jsonl')\"\r\nGenerating train split: 0 examples [00:00, ? exa... |
2,274,849,730 | 6,857 | Fix line-endings in tests on Windows | closed | EDIT:
~~Fix test_delete_from_hub on Windows by passing explicit encoding.~~
Fix test_delete_from_hub and test_xgetsize_private by uploading the README file content directly (encoding the string), instead of writing a local file and uploading it.
Note that local files created on Windows will have "\r\n" line ending... | true | 2024-05-02T07:49:15Z | 2024-05-02T11:49:35Z | 2024-05-02T11:43:00Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6857 | 2024-05-02T11:43:00Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6857 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6857). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,274,828,933 | 6,856 | CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character | closed | CI fails on Windows for test_delete_from_hub after the merge of:
- #6820
This is weird because the CI was green in the PR branch before merging to main.
```
FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')]
At index 1 ... | true | 2024-05-02T07:37:03Z | 2024-05-02T11:43:01Z | 2024-05-02T11:43:01Z | albertvillanova | MEMBER | null | null | 1 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6856 | false | [
"After investigation, I have found that when a local file is uploaded to the Hub, the new line character is no longer transformed to \"\\n\": on Windows machine now it is kept as \"\\r\\n\".\r\n\r\nAny idea why this changed?\r\nCC: @lhoestq "
] |
2,274,777,812 | 6,855 | Fix dataset name for community Hub script-datasets | closed | Fix dataset name for community Hub script-datasets by passing explicit dataset_name to HubDatasetModuleFactoryWithScript.
Fix #6854.
CC: @Wauplin | true | 2024-05-02T07:05:44Z | 2024-05-03T15:58:00Z | 2024-05-03T15:51:57Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6855 | 2024-05-03T15:51:57Z | 6 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6855 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6855). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The CI errors were unrelated. I am merging main once they were fixed:\r\n- #6857",
"T... |
2,274,767,686 | 6,854 | Wrong example of usage when config name is missing for community script-datasets | closed | As reported by @Wauplin, when loading a community dataset with script, there is a bug in the example of usage of the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name i... | true | 2024-05-02T06:59:39Z | 2024-05-03T15:51:59Z | 2024-05-03T15:51:58Z | albertvillanova | MEMBER | null | null | 0 | 1 | 1 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6854 | false | [] |
2,272,570,000 | 6,853 | Support soft links for load_datasets imagefolder | open | ### Feature request
Load_dataset from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during methods development where image folders are being curated.
### Motivation
Images are coming from a complex variety of sources and we'd like to be able to soft link directly from ... | true | 2024-04-30T22:14:29Z | 2024-04-30T22:14:29Z | null | billytcl | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6853 | false | [] |
2,272,465,011 | 6,852 | Write token isn't working while pushing to datasets | closed | ### Describe the bug
<img width="1001" alt="Screenshot 2024-05-01 at 3 37 06 AM" src="https://github.com/huggingface/datasets/assets/130903099/00fcf12c-fcc1-4749-8592-d263d4efcbcc">
As you can see I logged in to my account and the write token is valid.
But I can't upload on my main account and I am getting that ... | true | 2024-04-30T21:18:20Z | 2024-05-02T00:55:46Z | 2024-05-02T00:55:46Z | realzai | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6852 | false | [] |
2,270,965,503 | 6,851 | load_dataset('emotion') UnicodeDecodeError | open | ### Describe the bug
**emotions = load_dataset('emotion')**
_UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_
### Steps to reproduce the bug
load_dataset('emotion')
### Expected behavior
success
### Environment info
py3.10
transformers 4.41.0.dev0
datasets 2.... | true | 2024-04-30T09:25:01Z | 2024-09-05T03:11:04Z | null | L-Block-C | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6851 | false | [
"I met the same problem, here is my code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\nds_name = \"togethercomputer/RedPajama-Data-1T\"\r\nds = load_dataset(ds_name, download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n```\r\nAnd output error is:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/yator... |
2,269,500,624 | 6,850 | Problem loading voxpopuli dataset | closed | ### Describe the bug
```
Exception has occurred: FileNotFoundError
Couldn't find file at https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/{'en': 'data/en/asr_train.tsv'}
```
Error in logic for link url creation. The link should be https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/da... | true | 2024-04-29T16:46:51Z | 2024-05-06T09:25:54Z | 2024-05-06T09:25:54Z | Namangarg110 | NONE | null | null | 3 | 2 | 2 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6850 | false | [
"Version 2.18 works without problem.",
"@Namangarg110 @mohsen-goodarzi The bug appears because the number of urls is less than 16 and the algorithm is meant to work on the previously created mode for a single url as stated on line 314: https://github.com/huggingface/datasets/blob/1bf8a46cc7b096d5c547ea3794f6a4b6... |
2,268,718,355 | 6,849 | fix webdataset filename split | closed | use `os.path.splitext` to parse field_name.
This fixes filenames that contain a dot, like:
```
a.b.jpeg
a.b.txt
``` | true | 2024-04-29T10:57:18Z | 2024-06-04T12:54:04Z | 2024-06-04T12:54:04Z | Bowser1704 | NONE | https://github.com/huggingface/datasets/pull/6849 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6849 | true | [
"Hi ! This was fixed recently in https://github.com/huggingface/datasets/pull/6904 and https://github.com/huggingface/datasets/pull/6931"
] |
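The parsing change this PR proposes can be seen in isolation: `os.path.splitext` splits off only the final extension, so dotted basenames survive intact.

```python
# os.path.splitext keeps everything before the last dot as the stem.
import os

for name in ["a.b.jpeg", "a.b.txt", "plain.json"]:
    stem, ext = os.path.splitext(name)
    print(stem, ext)
```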
2,268,622,609 | 6,848 | Cant Downlaod Common Voice 17.0 hy-AM | open | ### Describe the bug
I want to download Common Voice 17.0 hy-AM but it returns an error.
```
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name='hfds_config', config_path=None)
/usr/local/lib/pyth... | true | 2024-04-29T10:06:02Z | 2025-04-01T20:48:09Z | null | mheryerznkanyan | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6848 | false | [
"Same issue here.",
"#self-assign",
"Hi @mheryerznkanyan ,\nI tested it on a Linux-5.14.0-284.86.1.el9_2.x86_64-x86_64-with-glibc2.34 machine using the same package versions you mentioned, and it works fine now.\nDoes it work on your machine as well?"
] |
2,268,589,177 | 6,847 | [Streaming] Only load requested splits without resolving files for the other splits | open | e.g. [thangvip](https://huggingface.co/thangvip)/[cosmopedia_vi_math](https://huggingface.co/datasets/thangvip/cosmopedia_vi_math) has 300 splits and it takes a very long time to load only one split.
This is due to `load_dataset()` resolving the files of all the splits even if only one is needed.
In `dataset-view... | true | 2024-04-29T09:49:32Z | 2024-05-07T04:43:59Z | null | lhoestq | MEMBER | null | null | 2 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6847 | false | [
"This should help fixing this issue: https://github.com/huggingface/datasets/pull/6832",
"I'm having a similar issue when using splices:\r\n<img width=\"947\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/2153faac-e1fe-4b6d-a79b-30b2699407e8\">\r\n<img width=\"823\" alt=\"image\" src... |
2,267,352,120 | 6,846 | Unimaginable super slow iteration | closed | ### Describe the bug
Assuming there is a dataset with 52000 sentences, each with a length of 500, it takes 20 seconds to extract a sentence from the dataset…? Is there something wrong with my iteration?
### Steps to reproduce the bug
```python
import datasets
import time
import random
num_rows = 52000
n... | true | 2024-04-28T05:24:14Z | 2024-05-06T08:30:03Z | 2024-05-06T08:30:03Z | rangehow | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6846 | false | [
"In every iteration you load the full \"random_input\" column in memory, only then to access it's i-th element.\r\n\r\nYou can try using this instead\r\n\r\na,b=dataset[i]['random_input'],dataset[i]['random_output']"
] |
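The comment above can be illustrated with a toy model (this is not `datasets`' implementation): a string key materializes the whole column on every access, which is the slow pattern, while an integer key reads a single row.

```python
# Toy columnar table: counts how many times a full column is materialized.
class ColumnarTable:
    def __init__(self, columns):
        self.columns = columns   # column name -> list of values
        self.column_reads = 0    # full-column materializations

    def __getitem__(self, key):
        if isinstance(key, str):                 # table["col"]
            self.column_reads += 1
            return list(self.columns[key])       # copies the whole column
        return {k: v[key] for k, v in self.columns.items()}  # table[i]

table = ColumnarTable({"x": list(range(1000)), "y": list(range(1000))})
slow = [table["x"][i] for i in range(100)]  # 100 full-column copies
fast = [table[i]["x"] for i in range(100)]  # 100 single-row reads
print(table.column_reads)
```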
2,265,876,551 | 6,845 | load_dataset doesn't support list column | open | ### Describe the bug
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
got exception:
Generating train split: 1834 examples [00:00, 5227.98 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single
... | true | 2024-04-26T14:11:44Z | 2024-05-15T12:06:59Z | null | arthasking123 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6845 | false | [
"I encountered this same issue when loading a customized dataset for ORPO training, in which there were three columns and two of them were lists. \r\nI debugged and found that it might be caused by the type-infer mechanism and because in some chunks one of the columns is always an empty list ([]), it was regarded a... |
2,265,870,546 | 6,844 | Retry on HF Hub error when streaming | closed | Retry on the `huggingface_hub`'s `HfHubHTTPError` in the streaming mode.
Fix #6843 | true | 2024-04-26T14:09:04Z | 2024-04-26T15:37:42Z | 2024-04-26T15:37:42Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6844 | null | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6844 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6844). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@Wauplin This PR is indeed not needed as explained in https://github.com/huggingface/da... |
2,265,432,897 | 6,843 | IterableDataset raises exception instead of retrying | open | ### Describe the bug
In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Si... | true | 2024-04-26T10:00:43Z | 2024-10-28T14:57:07Z | null | bauwenst | NONE | null | null | 7 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6843 | false | [
"Thanks for reporting! I've opened a PR with a fix.",
"Thanks, @mariosasko! Related question (although I guess this is a feature request): could we have some kind of exponential back-off for these retries? Here's my reasoning:\r\n- If a one-time accidental error happens, you should retry immediately and will succ... |
2,264,692,159 | 6,842 | Datasets with files with colon : in filenames cannot be used on Windows | open | ### Describe the bug
Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows due to the fact that windows does not allow colons ":" in filenames. These should be converted into alternative strings.
### Steps to reproduce the bug
1. Attempt to run load_dataset on MLCo... | true | 2024-04-26T00:14:16Z | 2024-04-26T00:14:16Z | null | jacobjennings | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6842 | false | [] |
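A minimal sketch of the conversion the issue asks for, replacing characters Windows forbids in filenames. The function name and the `_` replacement are illustrative choices, not `datasets`' API:

```python
# Replace characters Windows rejects in path components.
def sanitize_for_windows(filename: str, replacement: str = "_") -> str:
    forbidden = '<>:"|?*'
    for ch in forbidden:
        filename = filename.replace(ch, replacement)
    return filename

print(sanitize_for_windows("audio:0001.flac"))
```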
2,264,687,683 | 6,841 | Unable to load wiki_auto_asset_turk from GEM | closed | ### Describe the bug
I am unable to load the wiki_auto_asset_turk dataset. I get a fatal error while trying to access wiki_auto_asset_turk and load it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) is from filenames_for_dataset_split in a os.path.join call
... | true | 2024-04-26T00:08:47Z | 2024-05-29T13:54:03Z | 2024-04-26T16:12:29Z | abhinavsethy | NONE | null | null | 8 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6841 | false | [
"Hi! I've opened a [PR](https://huggingface.co/datasets/GEM/wiki_auto_asset_turk/discussions/5) with a fix. While waiting for it to be merged, you can load the dataset from the PR branch with `datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")`",
"Thanks Mario. Still getting the same issu... |
2,264,604,766 | 6,840 | Delete uploaded files from the UI | open | ### Feature request
Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI.
### Motivation
Would be a useful addition
### Your contribution
Would love to help out with some guidance | true | 2024-04-25T22:33:57Z | 2025-01-21T09:44:22Z | null | saicharan2804 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6840 | false | [
"This is super late, but if you click on any directory, you can delete the directory using a \"Delete Directory\" button in the top right of the interface, and similarly, if you click on any file, you can delete the file using a \"Delete File\" button in the top right."
] |
2,263,761,062 | 6,839 | Remove token arg from CLI examples | closed | Remove token arg from CLI examples.
Fix #6838.
CC: @Wauplin | true | 2024-04-25T14:36:58Z | 2024-04-26T17:03:51Z | 2024-04-26T16:57:40Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6839 | 2024-04-26T16:57:40Z | 2 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6839 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6839). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,263,674,843 | 6,838 | Remove token arg from CLI examples | closed | As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603
> I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login) | true | 2024-04-25T14:00:38Z | 2024-04-26T16:57:41Z | 2024-04-26T16:57:41Z | albertvillanova | MEMBER | null | null | 0 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6838 | false | [] |
2,263,273,983 | 6,837 | Cannot use cached dataset without Internet connection (or when servers are down) | open | ### Describe the bug
I want to be able to use cached dataset from HuggingFace even when I have no Internet connection (or when HuggingFace servers are down, or my company has network issues).
The problem why I can't use it:
`data_files` argument from `datasets.load_dataset()` function get it updates from the serve... | true | 2024-04-25T10:48:20Z | 2025-01-25T16:36:41Z | null | DionisMuzenitov | NONE | null | null | 6 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6837 | false | [
"There are 2 workarounds, tho:\r\n1. Download datasets from web and just load them locally\r\n2. Use metadata directly (temporal solution, since metadata can change)\r\n```\r\nimport datasets\r\nfrom datasets.data_files import DataFilesDict, DataFilesList\r\n\r\ndata_files_list = DataFilesList(\r\n [\r\n ... |
2,262,249,919 | 6,836 | ExpectedMoreSplits error on load_dataset when upgrading to 2.19.0 | open | ### Describe the bug
Hi there, thanks for the great library! We have been using it a lot in torchtune and it's been a huge help for us.
Regarding the bug: the same call to `load_dataset` errors with `ExpectedMoreSplits` in 2.19.0 after working fine in 2.18.0. Full details given in the repro below.
### Steps to re... | true | 2024-04-24T21:52:35Z | 2024-05-14T04:08:19Z | null | ebsmothers | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6836 | false | [
"Get same error on same datasets too.",
"+1",
"same error"
] |
2,261,079,263 | 6,835 | Support pyarrow LargeListType | closed | Fixes #6834 | true | 2024-04-24T11:34:24Z | 2024-08-12T14:43:47Z | 2024-08-12T14:43:47Z | Modexus | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6835 | null | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6835 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6835). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Fixed the conversion from `pyarrow` to `python` `Sequence` features. \r\n\r\nThere is s... |
2,261,078,104 | 6,834 | largelisttype not supported (.from_polars()) | closed | ### Describe the bug
The following code fails because LargeListType is not supported.
This is especially a problem for .from_polars since polars uses LargeListType.
### Steps to reproduce the bug
```python
import datasets
import polars as pl
df = pl.DataFrame({"list": [[]]})
datasets.Dataset.from_pola... | true | 2024-04-24T11:33:43Z | 2024-08-12T14:43:46Z | 2024-08-12T14:43:46Z | Modexus | CONTRIBUTOR | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6834 | false | [] |
2,259,731,274 | 6,833 | Super slow iteration with trivial custom transform | open | ### Describe the bug
Dataset is 10X slower when applying trivial transforms:
```
import time
import numpy as np
from datasets import Dataset, Features, Array2D
a = np.zeros((800, 800))
a = np.stack([a] * 1000)
features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")})
ds1 = Dataset.from_dict({"... | true | 2024-04-23T20:40:59Z | 2024-10-08T15:41:18Z | null | xslittlegrass | NONE | null | null | 7 | 3 | 3 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6833 | false | [
"Similar issue in text process \r\n\r\n```python\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(model_dir[args.model])\r\ntrain_dataset=datasets.load_from_disk(dataset_dir[args.dataset],keep_in_memory=True)['train']\r\ntrain_dataset=train_dataset.map(partial(dname2func[args.dataset],tokenizer=tokenizer),batched=Tru... |
2,258,761,447 | 6,832 | Support downloading specific splits in `load_dataset` | open | This PR builds on https://github.com/huggingface/datasets/pull/6639 to support downloading only the specified splits in `load_dataset`. For this to work, a builder's `_split_generators` need to be able to accept the requested splits (as a list) via a `splits` argument to avoid processing the non-requested ones. Also, t... | true | 2024-04-23T12:32:27Z | 2024-08-19T15:19:38Z | null | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6832 | null | 4 | 2 | 2 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6832 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6832). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Friendly ping on this! This feature would be really helpful and useful to me (and likel... |
2,258,537,405 | 6,831 | Add docs about the CLI | closed | Add docs about the CLI.
Close #6830.
CC: @severo | true | 2024-04-23T10:41:03Z | 2024-04-26T16:51:09Z | 2024-04-25T10:44:10Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6831 | 2024-04-25T10:44:10Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6831 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6831). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Concretely, the docs about convert_to_parquet are here: https://moon-ci-docs.huggingfac... |
2,258,433,178 | 6,830 | Add a doc page for the convert_to_parquet CLI | closed | Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova | true | 2024-04-23T09:49:04Z | 2024-04-25T10:44:11Z | 2024-04-25T10:44:11Z | severo | COLLABORATOR | null | null | 0 | 1 | 0 | 1 | null | false | [
"documentation"
] | https://github.com/huggingface/datasets/issues/6830 | false | [] |
2,258,424,577 | 6,829 | Load and save from/to disk no longer accept pathlib.Path | open | Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296:
> This change is breaking in
> https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515
> when the input is `pathlib.Path`. The issue is that `url_to... | true | 2024-04-23T09:44:45Z | 2024-04-23T09:44:46Z | null | albertvillanova | MEMBER | null | null | 0 | 1 | 1 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6829 | false | [] |
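One common way to guard a string-only API against `pathlib.Path` inputs is to coerce with `os.fspath` before the call. A minimal stdlib sketch of the idea (the `normalize_path` helper name is illustrative, not part of `datasets`):

```python
import os
from pathlib import Path

def normalize_path(path) -> str:
    # os.fspath accepts str and any os.PathLike (such as pathlib.Path)
    # and returns a plain string, which string-only helpers like
    # url_to_fs can then consume safely.
    return os.fspath(path)

print(normalize_path("data/train"))        # data/train
print(normalize_path(Path("data/train")))  # data/train
```

This keeps the public signature `PathLike`-friendly while the internals keep working on plain strings.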
2,258,420,421 | 6,828 | Support PathLike input in save_to_disk / load_from_disk | open | true | 2024-04-23T09:42:38Z | 2024-04-23T11:05:52Z | null | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6828 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6828 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6828). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,254,011,833 | 6,827 | Loading a remote dataset fails in the last release (v2.19.0) | open | While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>`
I am loading the dataset like so, nothing out of the ordinary.
This dataset needs a token to access it.
```
token="hf_myhftoken-sdhbdsjgkhbd"
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test... | true | 2024-04-19T21:11:58Z | 2024-04-19T21:13:42Z | null | zrthxn | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6827 | false | [] |
2,252,445,242 | 6,826 | Set dev version | closed | true | 2024-04-19T08:51:42Z | 2024-04-19T09:05:25Z | 2024-04-19T08:52:14Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6826 | 2024-04-19T08:52:13Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6826 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6826). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,252,404,599 | 6,825 | Release: 2.19.0 | closed | true | 2024-04-19T08:29:02Z | 2024-05-04T12:23:26Z | 2024-04-19T08:44:57Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6825 | 2024-04-19T08:44:57Z | 2 | 1 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6825 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6825). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,251,076,197 | 6,824 | Winogrande does not seem to be compatible with datasets version of 1.18.0 | closed | ### Describe the bug
I get the following error when simply running `load_dataset('winogrande','winogrande_xl')`.
I do not have such an issue in the 1.17.0 version.
```Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line... | true | 2024-04-18T16:11:04Z | 2024-04-19T09:53:15Z | 2024-04-19T09:52:33Z | spliew | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6824 | false | [
"Hi ! Do you mean 2.18 ? Can you try to update `fsspec` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U fsspec huggingface_hub\r\n```",
"Yes I meant 2.18, and it works after updating `fsspec` and `huggingface_hub`. Thanks!"
] |
2,250,775,569 | 6,823 | Loading problems of Datasets with a single shard | open | ### Describe the bug
When a dataset with a single shard is saved to disk, it is not loaded the same way as one saved in multiple shards. I installed the latest version of datasets via pip.
### Steps to reproduce the bug
The code below reproduces the behavior. All works well when the range of the loop is 10000 bu... | true | 2024-04-18T13:59:00Z | 2024-11-25T05:40:09Z | null | andjoer | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6823 | false | [
"Has there been a PR to resolve this already?",
"The problem arises from using the wrong API.\r\nWhen loading a save_to_disk dataset, **load_from_disk** (instead of load_dataset) is what should be used.\r\n\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\ndst.save_to_disk(\"cache\")\r\ndst = load_from_disk...
2,250,316,258 | 6,822 | Fix parquet export infos | closed | Don't use the parquet export infos when USE_PARQUET_EXPORT is False.
Otherwise the `datasets-server` might reuse erroneous data when re-running a job
this follows https://github.com/huggingface/datasets/pull/6714 | true | 2024-04-18T10:21:41Z | 2024-04-18T11:15:41Z | 2024-04-18T11:09:13Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6822 | 2024-04-18T11:09:13Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6822 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6822). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,248,471,673 | 6,820 | Allow deleting a subset/config from a no-script dataset | closed | TODO:
- [x] Add docs
- [x] Delete token arg from CLI example
- See: #6839
Close #6810. | true | 2024-04-17T14:41:12Z | 2024-05-02T07:31:03Z | 2024-04-30T09:44:24Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6820 | 2024-04-30T09:44:24Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6820 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6820). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This is ready for review, @huggingface/datasets.",
"I am adding a test...",
"@lhoes... |
2,248,043,797 | 6,819 | Give more details in `DataFilesNotFoundError` when getting the config names | open | ### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
"error": "Cannot get the config names for the dataset.",
"cause_exception": "DataFilesNotFoundError",
"cause_message": "No (support... | true | 2024-04-17T11:19:47Z | 2024-04-17T11:19:47Z | null | severo | COLLABORATOR | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6819 | false | [] |
2,246,578,480 | 6,817 | Support indexable objects in `Dataset.__getitem__` | closed | As discussed in https://github.com/huggingface/datasets/pull/6816, this is needed to support objects that implement `__index__` such as `np.int64` in `Dataset.__getitem__`. | true | 2024-04-16T17:41:27Z | 2024-04-16T18:27:44Z | 2024-04-16T18:17:29Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6817 | 2024-04-16T18:17:29Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6817 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6817). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,246,264,911 | 6,816 | Improve typing of Dataset.search, matching definition | closed | Previously, the output of `score, indices = Dataset.search(...)` would be numpy arrays.
The definition in `SearchResult` is a `List[int]`, so this PR now matches the expected type.
The previous behavior is a bit annoying as `Dataset.__getitem__` doesn't support `numpy.int64` which forced me to convert `indices` to... | true | 2024-04-16T14:53:39Z | 2024-04-16T15:54:10Z | 2024-04-16T15:54:10Z | Dref360 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6816 | null | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6816 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6816). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi! This is a breaking change. A better solution is to check for \"indexable\" types in... |
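The "indexable" types mentioned above refer to Python's `__index__` protocol, the same hook that makes numpy integer scalars usable as list indices. A minimal stdlib sketch (the `Shard` class is purely illustrative) of how a `__getitem__` implementation can accept such objects:

```python
import operator

class Shard:
    """A toy index-like object implementing the __index__ protocol."""
    def __init__(self, value: int):
        self.value = value

    def __index__(self) -> int:
        return self.value

items = ["a", "b", "c"]
i = Shard(1)

# Built-in sequence indexing already honors __index__ ...
print(items[i])  # b

# ... and operator.index() normalizes any indexable object to a plain
# int, which is how a custom __getitem__ can accept np.int64 and friends
# without special-casing numpy.
print(operator.index(i))  # 1
```

Checking `operator.index` (or `hasattr(key, "__index__")`) avoids the breaking change of tightening the annotation to `List[int]` alone.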
2,246,197,070 | 6,815 | Remove `os.path.relpath` in `resolve_patterns` | closed | ... to save a few seconds when resolving repos with many data files. | true | 2024-04-16T14:23:13Z | 2024-04-16T16:06:48Z | 2024-04-16T15:58:22Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6815 | 2024-04-16T15:58:22Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6815 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6815). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,245,857,902 | 6,814 | `map` with `num_proc` > 1 leads to OOM | open | ### Describe the bug
When running `map` on a parquet dataset loaded from the local machine, the RAM usage increases linearly, eventually leading to OOM. I was wondering if I should save the `cache_file` after every n steps in order to prevent this?
### Steps to reproduce the bug
```
ds = load_dataset("parquet", data... | true | 2024-04-16T11:56:03Z | 2024-04-19T11:53:41Z | null | bhavitvyamalik | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6814 | false | [
"Hi ! You can try to reduce `writer_batch_size`. It corresponds to the number of samples that stay in RAM before being flushed to disk"
] |
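The `writer_batch_size` suggestion in the reply bounds memory by flushing transformed samples in fixed-size chunks instead of accumulating the whole result. A minimal pure-Python sketch of the idea (the function and variable names are illustrative, not the `datasets` internals):

```python
def map_with_flush(samples, fn, writer_batch_size=1000):
    """Keep at most `writer_batch_size` transformed samples in memory
    before flushing, instead of holding the entire mapped dataset."""
    buffer, flushed_batches = [], []
    for sample in samples:
        buffer.append(fn(sample))
        if len(buffer) >= writer_batch_size:
            flushed_batches.append(buffer)  # stand-in for an on-disk write
            buffer = []
    if buffer:  # flush the final partial batch
        flushed_batches.append(buffer)
    return flushed_batches

batches = map_with_flush(range(10), lambda x: x * 2, writer_batch_size=4)
print([len(b) for b in batches])  # [4, 4, 2]
```

Lowering the batch size trades some write throughput for a lower, roughly constant RAM ceiling.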
2,245,626,870 | 6,813 | Add Dataset.take and Dataset.skip | closed | ...to be aligned with IterableDataset.take and IterableDataset.skip | true | 2024-04-16T09:53:42Z | 2024-04-16T14:12:14Z | 2024-04-16T14:06:07Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6813 | 2024-04-16T14:06:07Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6813 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6813). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |