| id (int64) | number (int64) | title (string) | state (string) | body (string) | is_pull_request (bool) | created_at (date) | updated_at (date) | closed_at (date) | user_login (string) | author_association (string) | pr_url (string) | pr_merged_at (date) | comments_count (int64) | reactions_total (int64) | reactions_plus1 (int64) | reactions_heart (int64) | draft (bool) | locked (bool) | labels (list) | html_url (string) | is_pr_url (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,383,151,220 | 7,015 | add split argument to Generator | closed | ## Actual
When creating a multi-split dataset using generators like
```python
datasets.DatasetDict({
"val": datasets.Dataset.from_generator(
generator=generator_val,
features=features
),
"test": datasets.Dataset.from_generator(
generator=generator_test,
features=features,
... | true | 2024-07-01T08:09:25Z | 2024-07-26T09:37:51Z | 2024-07-26T09:31:56Z | piercus | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7015 | 2024-07-26T09:31:55Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7015 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7015). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@albertvillanova thanks for the review, please take a look",
"@albertvillanova please... |
2,382,985,847 | 7,014 | Skip faiss tests on Windows to avoid running CI for 360 minutes | closed | Skip faiss tests on Windows to avoid running CI for 360 minutes.
Fix #7013.
Revert once the underlying issue is fixed. | true | 2024-07-01T06:45:35Z | 2024-07-01T07:16:36Z | 2024-07-01T07:10:27Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7014 | 2024-07-01T07:10:27Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7014 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7014). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The failing CI tests are unrelated to this PR.\r\n\r\nWe can see that now the integrati... |
2,382,976,738 | 7,013 | CI is broken for faiss tests on Windows: node down: Not properly terminated | closed | Faiss tests on Windows make the CI run indefinitely until maximum execution time (360 minutes) is reached.
See: https://github.com/huggingface/datasets/actions/runs/9712659783
```
test (integration, windows-latest, deps-minimum)
The job running on runner GitHub Actions 60 has exceeded the maximum execution time o... | true | 2024-07-01T06:40:03Z | 2024-07-01T07:10:28Z | 2024-07-01T07:10:28Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"maintenance"
] | https://github.com/huggingface/datasets/issues/7013 | false | [] |
2,380,934,047 | 7,012 | Raise an error when a nested object is expected to be a mapping that displays the object | closed | true | 2024-06-28T18:10:59Z | 2024-07-11T02:06:16Z | 2024-07-11T02:06:16Z | sebbyjp | NONE | https://github.com/huggingface/datasets/pull/7012 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7012 | true | [] | |
2,379,785,262 | 7,011 | Re-enable raising error from huggingface-hub FutureWarning in CI | closed | Re-enable raising errors from huggingface-hub FutureWarning in tests, now that the fix in transformers
- https://github.com/huggingface/transformers/pull/31007
was just released yesterday in transformers-4.42.0: https://github.com/huggingface/transformers/releases/tag/v4.42.0
Fix #7010. | true | 2024-06-28T07:28:32Z | 2024-06-28T12:25:25Z | 2024-06-28T12:19:28Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7011 | 2024-06-28T12:19:28Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7011 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7011). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,379,777,480 | 7,010 | Re-enable raising error from huggingface-hub FutureWarning in CI | closed | Re-enable raising error from huggingface-hub FutureWarning in CI, which was disabled by PR:
- #6876
Note that this can only be done once transformers releases the fix:
- https://github.com/huggingface/transformers/pull/31007 | true | 2024-06-28T07:23:40Z | 2024-06-28T12:19:30Z | 2024-06-28T12:19:29Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"maintenance"
] | https://github.com/huggingface/datasets/issues/7010 | false | [] |
2,379,619,132 | 7,009 | Support ruff 0.5.0 in CI | closed | Support ruff 0.5.0 in CI and revert:
- #7007
Fix #7008. | true | 2024-06-28T05:37:36Z | 2024-06-28T07:17:26Z | 2024-06-28T07:11:17Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7009 | 2024-06-28T07:11:17Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7009 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7009). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,379,591,141 | 7,008 | Support ruff 0.5.0 in CI | closed | Support ruff 0.5.0 in CI.
Also revert:
- #7007 | true | 2024-06-28T05:11:26Z | 2024-06-28T07:11:18Z | 2024-06-28T07:11:18Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"maintenance"
] | https://github.com/huggingface/datasets/issues/7008 | false | [] |
2,379,588,676 | 7,007 | Fix CI by temporarily pinning ruff < 0.5.0 | closed | As a hotfix for CI, temporarily pin ruff upper version < 0.5.0.
Fix #7006.
Revert once root cause is fixed. | true | 2024-06-28T05:09:17Z | 2024-06-28T05:31:21Z | 2024-06-28T05:25:17Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7007 | 2024-06-28T05:25:17Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7007 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7007). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,379,581,543 | 7,006 | CI is broken after ruff-0.5.0: E721 | closed | After ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to E721 rule.
See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983
> src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstanc... | true | 2024-06-28T05:03:28Z | 2024-06-28T05:25:18Z | 2024-06-28T05:25:18Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"maintenance"
] | https://github.com/huggingface/datasets/issues/7006 | false | [] |
2,378,424,349 | 7,005 | EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files | closed | ### Describe the bug
While trying to load a custom dataset from a JSONL file, I get the error: "metadata.jsonl doesn't contain any data files"
### Steps to reproduce the bug
This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. I have this file in the folder with all ... | true | 2024-06-27T15:08:26Z | 2024-06-28T09:56:19Z | 2024-06-28T09:56:19Z | Aki1991 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7005 | false | [
"Hi ! `data_dir=` is for directories, can you try using `data_files=` instead ?",
"If you are trying to load your image dataset from a local folder, you should replace \"data_dir=path/to/jsonl/metadata.jsonl\" with the real folder path in your computer.\r\n\r\nhttps://huggingface.co/docs/datasets/en/image_load#im... |
2,376,064,264 | 7,004 | Fix WebDatasets KeyError for user-defined Features when a field is missing in an example | closed | Fixes: https://github.com/huggingface/datasets/issues/6900
Not sure if this needs any additional work before merging
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7004). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,373,084,132 | 7,003 | minor fix for bfloat16 | closed | true | 2024-06-25T16:10:04Z | 2024-06-25T16:16:11Z | 2024-06-25T16:10:10Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7003 | 2024-06-25T16:10:10Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7003 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7003). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,373,010,351 | 7,002 | Fix dump of bfloat16 torch tensor | closed | close https://github.com/huggingface/datasets/issues/7000 | true | 2024-06-25T15:38:09Z | 2024-06-25T16:10:16Z | 2024-06-25T15:51:52Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7002 | 2024-06-25T15:51:52Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7002 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7002). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,372,930,879 | 7,001 | Datasetbuilder Local Download FileNotFoundError | open | ### Describe the bug
So I was trying to download a dataset and save it as Parquet, following the [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage) from Hugging Face. However, during execution I face a FileNotFoundError.
I debug the code and it seems... | true | 2024-06-25T15:02:34Z | 2024-06-25T15:21:19Z | null | purefall | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7001 | false | [
"Ok it seems the solution is to use the directory string without the trailing \"/\", which in my case is: \r\n\r\n`parquet_dir = \"~/data/Parquet\" `\r\n\r\nStill I think this is a weird behavior... "
] |
2,372,887,585 | 7,000 | IterableDataset: Unsupported ScalarType BFloat16 | closed | ### Describe the bug
`IterableDataset.from_generator` crashes when using BFloat16:
```
File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor
args = (obj.detach().cpu().numpy(),)
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Got unsupported ScalarType ... | true | 2024-06-25T14:43:26Z | 2024-06-25T16:04:00Z | 2024-06-25T15:51:53Z | stoical07 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7000 | false | [
"@lhoestq Thank you for merging #6607, but unfortunately the issue persists for `IterableDataset` :pensive: ",
"Hi ! I opened https://github.com/huggingface/datasets/pull/7002 to fix this bug",
"Amazing, thank you so much @lhoestq! :pray:"
] |
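The `Unsupported ScalarType BFloat16` traceback in issue 7000 above comes from calling `.numpy()` on a torch tensor, since NumPy has no bfloat16 dtype. As a hedged illustration of why widening through float32 is the usual workaround (this is NOT the actual fix from the linked PR; the helper name is invented for this sketch), bfloat16 is simply the upper 16 bits of an IEEE float32:

```python
# Illustrative sketch only: bfloat16 is the top 16 bits of a float32,
# so zero-padding the low 16 bits reinterprets it losslessly as float32.
import struct

def bfloat16_to_float32(b: int) -> float:
    """Reinterpret 16 raw bfloat16 bits as a float32 by zero-padding."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

# 0x3F80 is the bfloat16 bit pattern for 1.0 (shares the leading bits
# of the float32 pattern 0x3F800000)
assert bfloat16_to_float32(0x3F80) == 1.0
```

This is also why, before the fix, casting tensors with `tensor.to(torch.float32)` inside the generator was a common workaround: the conversion loses no information, only adds zero bits.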
2,372,124,589 | 6,999 | Remove tasks | closed | Remove tasks, as part of the 3.0 release. | true | 2024-06-25T09:06:16Z | 2024-08-21T09:07:07Z | 2024-08-21T09:01:18Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6999 | 2024-08-21T09:01:18Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6999 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6999). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,371,973,926 | 6,998 | Fix tests using hf-internal-testing/librispeech_asr_dummy | closed | Fix tests using hf-internal-testing/librispeech_asr_dummy once that dataset has been converted to Parquet.
Fix #6997. | true | 2024-06-25T07:59:44Z | 2024-06-25T08:22:38Z | 2024-06-25T08:13:42Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6998 | 2024-06-25T08:13:42Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6998 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6998). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,371,966,127 | 6,997 | CI is broken for tests using hf-internal-testing/librispeech_asr_dummy | closed | CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996
```
FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other']
Right contains one more item: 'othe... | true | 2024-06-25T07:55:44Z | 2024-06-25T08:13:43Z | 2024-06-25T08:13:43Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"maintenance"
] | https://github.com/huggingface/datasets/issues/6997 | false | [] |
2,371,841,671 | 6,996 | Remove deprecated code | closed | Remove deprecated code, as part of the 3.0 release.
First merge:
- [x] #6983
- [x] #6987
- [x] #6999 | true | 2024-06-25T06:54:40Z | 2024-08-21T09:42:52Z | 2024-08-21T09:35:06Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6996 | 2024-08-21T09:35:06Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6996 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6996). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,370,713,475 | 6,995 | ImportError when importing datasets.load_dataset | closed | ### Describe the bug
I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'.
### Steps to reproduce the bug
1. pip install git+https://github.com/huggingface/datasets
2. f... | true | 2024-06-24T17:07:22Z | 2024-11-14T01:42:09Z | 2024-06-25T06:11:37Z | Leo-Lsc | NONE | null | null | 9 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6995 | false | [
"What is the version of your installed `huggingface-hub`:\r\n```python\r\nimport huggingface_hub\r\nprint(huggingface_hub.__version__)\r\n```\r\n\r\nIt seems you have a very old version of `huggingface-hub`, where `CommitInfo` was not yet implemented. You need to update it:\r\n```\r\npip install -U huggingface-hu...
2,370,491,689 | 6,994 | Fix incorrect rank value in data splitting | closed | Fix #6990. | true | 2024-06-24T15:07:47Z | 2024-06-26T04:37:35Z | 2024-06-25T16:19:17Z | yzhangcs | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6994 | 2024-06-25T16:19:17Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6994 | true | [
"Sure~",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6994). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>... |
2,370,444,104 | 6,993 | less script docs | closed | + mark as legacy in some parts of the docs since we'll not build new features for script datasets | true | 2024-06-24T14:45:28Z | 2024-07-08T13:10:53Z | 2024-06-27T09:31:21Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6993 | 2024-06-27T09:31:21Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6993 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6993). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,367,890,622 | 6,992 | Dataset with streaming doesn't work with proxy | open | ### Describe the bug
I'm currently trying to stream data using `datasets` since the dataset is too big, but it hangs indefinitely without loading the first batch. I use AIMOS, which is a supercomputer that uses a proxy to connect to the internet. I assume it has to do with the network configuration. I've already set up both
"Hi ! can you try updating `datasets` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U datasets huggingface_hub\r\n```"
] |
2,367,711,094 | 6,991 | Unblock NumPy 2.0 | closed | Fixes https://github.com/huggingface/datasets/issues/6980 | true | 2024-06-22T09:19:53Z | 2024-12-25T17:57:34Z | 2024-07-12T12:04:53Z | NeilGirdhar | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6991 | 2024-07-12T12:04:53Z | 21 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6991 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6991). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@albertvillanova Any chance we could get this in before the next release? Everything d... |
2,366,660,785 | 6,990 | Problematic rank after calling `split_dataset_by_node` twice | closed | ### Describe the bug
I'm trying to split an `IterableDataset` with `split_dataset_by_node`.
But when splitting an already split dataset, the resulting `rank` is greater than `world_size`.
### Steps to reproduce the bug
Here is the minimal code for reproduction:
```py
>>> from datasets import load_dataset
>>... | true | 2024-06-21T14:25:26Z | 2024-06-25T16:19:19Z | 2024-06-25T16:19:19Z | yzhangcs | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6990 | false | [
"ah yes good catch ! feel free to open a PR with your suggested fix"
] |
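The fix in the linked PR presumably composes the two successive splits so the combined rank stays within the combined world size. A minimal pure-Python sketch of that invariant (the function name and formula are illustrative assumptions, not the `datasets` internals):

```python
# Illustrative sketch for issue 6990: composing two successive
# (rank, world_size) splits must yield a rank strictly below the
# combined world size. This is NOT the actual datasets implementation.

def compose_split(rank1: int, world1: int, rank2: int, world2: int):
    """Fold a second split applied on top of a first one into a single
    (rank, world_size) pair over the original dataset."""
    assert 0 <= rank1 < world1 and 0 <= rank2 < world2
    return rank1 * world2 + rank2, world1 * world2

# e.g. node 1 of 2, then worker 0 of 3 on that node
rank, world = compose_split(1, 2, 0, 3)
assert 0 <= rank < world  # the invariant the bug report says was violated
```

The bug described above corresponds to keeping the second `rank` while multiplying the world sizes, which breaks this invariant.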
2,365,556,449 | 6,989 | cache in nfs error | open | ### Describe the bug
- When reading a dataset, a cache is generated in the ~/.cache/huggingface/datasets directory
- When using .map and .filter operations, a runtime cache is generated in the /tmp/hf_datasets-* directory
- The default is to use the path of tempfile.tempdir
- If I modify this path to the N... | true | 2024-06-21T02:09:22Z | 2025-01-29T11:44:04Z | null | simplew2011 | NONE | null | null | 1 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6989 | false | [
"Hey @simplew2011 I am curious if you know of a workaround, or possible implications of letting the code run?"
] |
2,364,129,918 | 6,988 | [`feat`] Move dataset card creation to method for easier overriding | open | Hello!
## Pull Request overview
* Move dataset card creation to method for easier overriding
## Details
It's common for me to fully automatically download, reformat, and upload a dataset (e.g. see https://huggingface.co/datasets?other=sentence-transformers), but one aspect that I cannot easily automate is the d... | true | 2024-06-20T10:47:57Z | 2024-06-21T16:04:58Z | null | tomaarsen | MEMBER | https://github.com/huggingface/datasets/pull/6988 | null | 6 | 1 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6988 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6988). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"`Dataset` objects are not made to be subclassed, so I don't think going in that directi... |
2,363,728,190 | 6,987 | Remove beam | closed | Remove beam, as part of the 3.0 release. | true | 2024-06-20T07:27:14Z | 2024-06-26T19:41:55Z | 2024-06-26T19:35:42Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6987 | 2024-06-26T19:35:42Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6987 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6987). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,362,584,179 | 6,986 | Add large_list type support in string_to_arrow | closed | add large_list type support in string_to_arrow() and _arrow_to_datasets_dtype() in features.py
Fix #6984 | true | 2024-06-19T14:54:25Z | 2024-08-12T14:43:48Z | 2024-08-12T14:43:47Z | arthasking123 | NONE | https://github.com/huggingface/datasets/pull/6986 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6986 | true | [
"@albertvillanova @KennethEnevoldsen"
] |
2,362,378,276 | 6,985 | AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' | closed | ### Describe the bug
I have been struggling with this for two days; any help would be appreciated. Python 3.10.
```
from setfit import SetFitModel
from huggingface_hub import login
access_token_read = "cccxxxccc"
# Authenticate with the Hugging Face Hub
login(token=access_token_read)
# Load the models fr... | true | 2024-06-19T13:22:28Z | 2025-03-14T18:47:53Z | 2024-06-25T05:40:51Z | firmai | NONE | null | null | 14 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6985 | false | [
"Please note that the error is raised just at import:\r\n```python\r\nimport pyarrow.parquet as pq\r\n```\r\n\r\nTherefore it must be caused by some problem with your pyarrow installation. I would recommend you uninstall and install pyarrow again.\r\n\r\nI also see that it seems you use conda to install pyarrow. Pl... |
2,362,143,554 | 6,984 | Convert polars DataFrame back to datasets | closed | ### Feature request
This returns an error.
```python
from datasets import Dataset
dsdf = Dataset.from_dict({"x": [[1, 2], [3, 4, 5]], "y": ["a", "b"]})
Dataset.from_polars(dsdf.to_polars())
```
ValueError: Arrow type large_list<item: int64> does not have a datasets dtype equivalent.
### Motivation
When datasets... | true | 2024-06-19T11:38:48Z | 2024-08-12T14:43:46Z | 2024-08-12T14:43:46Z | ljw20180420 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6984 | false | [
"Hi ! Thanks for reporting :)\r\n\r\nWe don't support `large_list` yet, though it should be added to `Sequence` IMO (maybe with a parameter `large=True` ?)"
] |
2,361,806,201 | 6,983 | Remove metrics | closed | Remove all metrics, as part of the 3.0 release.
Note they have been deprecated since version 2.5.0. | true | 2024-06-19T09:08:55Z | 2024-06-28T06:57:38Z | 2024-06-28T06:51:30Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6983 | 2024-06-28T06:51:30Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6983 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6983). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,361,661,469 | 6,982 | cannot split dataset when using load_dataset | closed | ### Describe the bug
When I use the load_dataset method to load mozilla-foundation/common_voice_7_0, it can successfully download and extract the dataset, but it cannot generate the arrow document.
This bug happened on my server and my laptop, as in #6906, but it doesn't happen in Google Colab. I work for it for da... | true | 2024-06-19T08:07:16Z | 2024-07-08T06:20:16Z | 2024-07-08T06:20:16Z | cybest0608 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6982 | false | [
"it seems the bug happens on all Windows systems; I tried it in Windows 8.1, 10, and 11 and all of them failed. But it doesn't happen on Linux (Ubuntu and CentOS 7) or Mac (both my virtual and physical machines). I still don't know what the problem is. Maybe it is related to the path? I cannot run the split file in m... |
2,361,520,022 | 6,981 | Update docs on trust_remote_code defaults to False | closed | Update docs on trust_remote_code defaults to False.
The docs needed to be updated due to this PR:
- #6954 | true | 2024-06-19T07:12:21Z | 2024-06-19T14:32:59Z | 2024-06-19T14:26:37Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6981 | 2024-06-19T14:26:37Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6981 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6981). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,360,909,930 | 6,980 | Support NumPy 2.0 | closed | ### Feature request
Support NumPy 2.0.
### Motivation
NumPy introduces the Array API, which bridges the gap between machine learning libraries. Many clients of HuggingFace are eager to start using the Array API.
Besides that, NumPy 2 provides a cleaner interface than NumPy 1.
### Tasks
NumPy 2.0 was ... | true | 2024-06-18T23:30:22Z | 2024-07-12T12:04:54Z | 2024-07-12T12:04:53Z | NeilGirdhar | CONTRIBUTOR | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6980 | false | [] |
2,360,175,363 | 6,979 | How can I load partial parquet files only? | closed | I have a HUGE dataset, about 14TB; I am unable to download all the parquet files, so I just take about 100 of them.
dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet")
How can I use just shards 000 to 100 out of all 00314?
I searched the whole net and didn't find a solution, **this is stupid if the... | true | 2024-06-18T15:44:16Z | 2024-06-21T17:09:32Z | 2024-06-21T13:32:50Z | lucasjinreal | NONE | null | null | 12 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6979 | false | [
"Hello,\r\n\r\nHave you tried loading the dataset in streaming mode? [Documentation](https://huggingface.co/docs/datasets/v2.20.0/stream)\r\n\r\nThis way you wouldn't have to load it all. Also, let's be nice to Parquet, it's a really nice technology and we don't need to be mean :)",
"I have downloaded part of it,... |
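One comment above suggests streaming mode; another option implied by the issue body is passing an explicit list of shard filenames to `data_files`. A hedged sketch of building such a list (the 5-digit zero-padding is an assumption inferred from the `train-001*-of-00314.parquet` pattern, and the repo id would have to be filled in):

```python
# Build an explicit list of the first 100 parquet shards (issue 6979).
# Pass this to load_dataset("<repo>", data_files=shards, streaming=True);
# the repo id and the 5-digit shard numbering are assumptions for this sketch.
shards = [f"data/train-{i:05d}-of-00314.parquet" for i in range(100)]

assert len(shards) == 100
assert shards[0] == "data/train-00000-of-00314.parquet"
```

With `streaming=True` only the listed files are fetched, so the remaining shards of the 14TB dataset are never downloaded.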
2,359,511,469 | 6,978 | Fix regression for pandas < 2.0.0 in JSON loader | closed | A regression was introduced for pandas < 2.0.0 in PR:
- #6914
As described in pandas docs, the `dtype_backend` parameter was first added in pandas 2.0.0: https://pandas.pydata.org/docs/reference/api/pandas.read_json.html
This PR fixes the regression by passing (or not) the `dtype_backend` parameter depending on ... | true | 2024-06-18T10:26:34Z | 2024-06-19T06:23:24Z | 2024-06-19T05:50:18Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6978 | 2024-06-19T05:50:18Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6978 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6978). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,359,295,045 | 6,977 | load json file error with v2.20.0 | closed | ### Describe the bug
```
load_dataset(path="json", data_files="./test.json")
```
```
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/json/json.py", line 132, in _generate_tables
pa_table = p... | true | 2024-06-18T08:41:01Z | 2024-06-18T10:06:10Z | 2024-06-18T10:06:09Z | xiaoyaolangzhi | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6977 | false | [
"Thanks for reporting, @xiaoyaolangzhi.\r\n\r\nIndeed, we are currently requiring `pandas` >= 2.0.0.\r\n\r\nYou will need to update pandas in your local environment:\r\n```\r\npip install -U pandas\r\n``` ",
"Thank you very much."
] |
2,357,107,203 | 6,976 | Ensure compatibility with numpy 2.0.0 | closed | Following the conversion guide, copy=False is no longer required and will result in an error: https://numpy.org/devdocs/numpy_2_0_migration_guide.html#adapting-to-changes-in-the-copy-keyword.
The following fix should resolve the issue.
error found during testing on the MTEB repository e.g. [here](https://github.c... | true | 2024-06-17T11:29:22Z | 2024-06-19T14:30:32Z | 2024-06-19T14:04:34Z | KennethEnevoldsen | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6976 | 2024-06-19T14:04:34Z | 2 | 2 | 2 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6976 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6976). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,357,003,959 | 6,975 | Set temporary numpy upper version < 2.0.0 to fix CI | closed | Set temporary numpy upper version < 2.0.0 to fix CI. See: https://github.com/huggingface/datasets/actions/runs/9546031216/job/26308072017
```
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.... | true | 2024-06-17T10:36:54Z | 2024-06-17T12:49:53Z | 2024-06-17T12:43:56Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6975 | 2024-06-17T12:43:56Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6975 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6975). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,355,517,362 | 6,973 | IndexError during training with Squad dataset and T5-small model | closed | ### Describe the bug
I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility.
### Steps to reproduce the bug
1.Install the required libr... | true | 2024-06-16T07:53:54Z | 2024-07-01T11:25:40Z | 2024-07-01T11:25:40Z | ramtunguturi36 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6973 | false | [
"add remove_unused_columns=False to training_args\r\nhttps://github.com/huggingface/datasets/issues/6535#issuecomment-1874024704",
"Closing this issue because it was a reported and fixed in transformers."
] |
2,353,531,912 | 6,972 | Fix webdataset pickling | closed | ...by making tracked iterables picklable.
This is important to make streaming datasets compatible with multiprocessing e.g. for parallel data loading | true | 2024-06-14T14:43:02Z | 2024-06-14T15:43:43Z | 2024-06-14T15:37:35Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6972 | 2024-06-14T15:37:35Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6972 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6972). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,351,830,856 | 6,971 | packaging: Remove useless dependencies | closed | Revert changes in #6396 and #6404. CVE-2023-47248 has been fixed since PyArrow v14.0.1. Meanwhile Python requirements requires `pyarrow>=15.0.0`. | true | 2024-06-13T18:43:43Z | 2024-06-14T14:03:34Z | 2024-06-14T13:57:24Z | daskol | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6971 | 2024-06-14T13:57:24Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6971 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6971). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@HuggingFaceDocBuilderDev There is no doc for this change. Call a human.",
"Haha it w... |
2,351,380,029 | 6,970 | Set dev version | closed | true | 2024-06-13T14:59:45Z | 2024-06-13T15:06:18Z | 2024-06-13T14:59:56Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6970 | 2024-06-13T14:59:56Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6970 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6970). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,351,351,436 | 6,969 | Release: 2.20.0 | closed | true | 2024-06-13T14:48:20Z | 2024-06-13T15:04:39Z | 2024-06-13T14:55:53Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6969 | 2024-06-13T14:55:53Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6969 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6969). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,351,331,417 | 6,968 | Use `HF_HUB_OFFLINE` instead of `HF_DATASETS_OFFLINE` | closed | To use `datasets` offline, one can use the `HF_DATASETS_OFFLINE` environment variable. This PR makes `HF_HUB_OFFLINE` the recommended environment variable for offline training. The goal is to be more consistent with the rest of the HF ecosystem and to have a single config value to set.
The changes are backward-compatible meani... | true | 2024-06-13T14:39:40Z | 2024-06-13T17:31:37Z | 2024-06-13T17:25:37Z | Wauplin | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6968 | 2024-06-13T17:25:37Z | 3 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6968 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6968). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Oops, sorry for the style issue. Fixed in https://github.com/huggingface/datasets/pull/... |
2,349,146,398 | 6,967 | Method to load Laion400m | open | ### Feature request
Large datasets like Laion400m are provided as embeddings. The provided methods in load_dataset are not straightforward for loading embedding files, i.e. img_emb_XX.npy ; XX = 0 to 99
### Motivation
The trial and experimentation is the key pivot of HF. It would be great if HF can load embeddings... | true | 2024-06-12T16:04:04Z | 2024-06-12T16:04:04Z | null | humanely | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6967 | false | [] |
2,348,934,466 | 6,966 | Remove underlines between badges | closed | ## Before:
<img width="935" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/93666e72-059b-4180-9e1d-ff176a3d9dac">
## After:
<img width="956" alt="image" src="https://github.com/huggingface/datasets/assets/35881688/75df7c3e-f473-44f0-a872-eeecf6a85fe2"> | true | 2024-06-12T14:32:11Z | 2024-06-19T14:16:21Z | 2024-06-19T14:10:11Z | andrewhong04 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6966 | 2024-06-19T14:10:11Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6966 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
2,348,653,895 | 6,965 | Improve skip take shuffling and distributed | closed | set the right behavior of skip/take depending on whether it's called after or before shuffle/split_by_node | true | 2024-06-12T12:30:27Z | 2024-06-24T15:22:21Z | 2024-06-24T15:16:16Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6965 | 2024-06-24T15:16:16Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6965 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6965). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,344,973,229 | 6,964 | Fix resuming arrow format | closed | following https://github.com/huggingface/datasets/pull/6658 | true | 2024-06-10T22:40:33Z | 2024-06-14T15:04:49Z | 2024-06-14T14:58:37Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6964 | 2024-06-14T14:58:37Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6964 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6964). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,344,269,477 | 6,963 | [Streaming] retry on requests errors | closed | reported in https://discuss.huggingface.co/t/speeding-up-streaming-of-large-datasets-fineweb/90714/6 when training with a streaming dataloader
cc @Wauplin it looks like the retries from `hfh` are not always enough. In this PR I let `datasets` do additional retries (that users can configure in `datasets.config`) ... | true | 2024-06-10T15:51:56Z | 2024-06-28T09:53:11Z | 2024-06-28T09:46:52Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6963 | 2024-06-28T09:46:52Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6963 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6963). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"ci failures are r-unrelated to this PR, merging",
"<details>\n<summary>Show benchmark... |
2,343,394,378 | 6,962 | fix(ci): remove unnecessary permissions | closed | ### What does this PR do?
Remove unnecessary permissions granted to the actions workflow.
Sorry for the mishap. | true | 2024-06-10T09:28:02Z | 2024-06-11T08:31:52Z | 2024-06-11T08:25:47Z | McPatate | MEMBER | https://github.com/huggingface/datasets/pull/6962 | 2024-06-11T08:25:47Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6962 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6962). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,342,022,418 | 6,961 | Manual downloads should count as downloads | open | ### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats
### Motivation
Th... | true | 2024-06-09T04:52:06Z | 2024-06-13T16:05:00Z | null | umarbutler | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6961 | false | [
"We're unlikely to add more features/support for datasets with python loading scripts, which include datasets with manual download. Sorry for the inconvenience"
] |
2,340,791,685 | 6,960 | feat(ci): add trufflehog secrets detection | closed | ### What does this PR do?
Adding a GH action to scan for leaked secrets on each commit. | true | 2024-06-07T16:18:23Z | 2024-06-08T14:58:27Z | 2024-06-08T14:52:18Z | McPatate | MEMBER | https://github.com/huggingface/datasets/pull/6960 | 2024-06-08T14:52:18Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6960 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6960). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Yes!",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\... |
2,340,229,908 | 6,959 | Better error handling in `dataset_module_factory` | closed | cc @cakiki who reported it on [slack](https://huggingface.slack.com/archives/C039P47V1L5/p1717754405578539) (private link)
This PR updates how errors are handled in `dataset_module_factory` when the `dataset_info` cannot be accessed:
1. Use multiple `except ... as e` instead of using `isinstance(e, ...)`
2. Alway... | true | 2024-06-07T11:24:15Z | 2024-06-10T07:33:53Z | 2024-06-10T07:27:43Z | Wauplin | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6959 | 2024-06-10T07:27:43Z | 3 | 2 | 0 | 2 | false | false | [] | https://github.com/huggingface/datasets/pull/6959 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6959). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Test should be fixed by https://github.com/huggingface/datasets/pull/6959/commits/ef8f7... |
2,337,476,383 | 6,958 | My Private Dataset doesn't exist on the Hub or cannot be accessed | closed | ### Describe the bug
```
File "/root/miniconda3/envs/gino_conda/lib/python3.9/site-packages/datasets/load.py", line 1852, in dataset_module_factory
raise DatasetNotFoundError(msg + f" at revision '{revision}'" if revision else msg)
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on t... | true | 2024-06-06T06:52:19Z | 2024-07-01T11:27:46Z | 2024-07-01T11:27:46Z | wangguan1995 | NONE | null | null | 8 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6958 | false | [
"I can load public dataset, but for my private dataset it fails",
"https://huggingface.co/docs/datasets/upload_dataset",
"I have checked the API HTTP link. Repository Not Found for url: https://huggingface.co/api/datasets/xxx/xxx.\r\n\r\n. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,333,940,021 | 6,956 | update docs on N-dim arrays | closed | true | 2024-06-04T16:32:19Z | 2024-06-04T16:46:34Z | 2024-06-04T16:40:27Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6956 | 2024-06-04T16:40:27Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6956 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6956). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,333,802,815 | 6,955 | Fix small typo | closed | true | 2024-06-04T15:19:02Z | 2024-06-05T10:18:56Z | 2024-06-04T15:20:55Z | marcenacp | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6955 | 2024-06-04T15:20:55Z | 1 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6955 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | |
2,333,530,558 | 6,954 | Remove default `trust_remote_code=True` | closed | TODO:
- [x] fix tests | true | 2024-06-04T13:22:56Z | 2024-06-17T16:32:24Z | 2024-06-07T12:20:29Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6954 | 2024-06-07T12:20:29Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6954 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6954). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"yay! 🎉 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<detai... |
2,333,366,120 | 6,953 | Remove canonical datasets from docs | closed | Remove canonical datasets from docs, now that we no longer have canonical datasets. | true | 2024-06-04T12:09:03Z | 2024-07-01T11:31:25Z | 2024-07-01T11:31:25Z | albertvillanova | MEMBER | null | null | 1 | 0 | 0 | 0 | null | false | [
"documentation"
] | https://github.com/huggingface/datasets/issues/6953 | false | [
"Canonical datasets are no longer mentioned in the docs."
] |
2,333,320,411 | 6,952 | Move info_utils errors to exceptions module | closed | Move `info_utils` errors to `exceptions` module.
Additionally rename some of them, deprecate the former ones, and make the deprecation backward compatible (by making the new errors inherit from the former ones). | true | 2024-06-04T11:48:32Z | 2024-06-10T14:09:59Z | 2024-06-10T14:03:55Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6952 | 2024-06-10T14:03:55Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6952 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6952). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,333,231,042 | 6,951 | load_dataset() should load all subsets, if no specific subset is specified | closed | ### Feature request
Currently load_dataset() is forcing users to specify a subset. Example
`from datasets import load_dataset`
`dataset = load_dataset("m-a-p/COIG-CQIA")`
```
---------------------------------------------------------------------------
ValueError Traceback (most recen... | true | 2024-06-04T11:02:33Z | 2024-11-26T08:32:18Z | 2024-07-01T11:33:10Z | windmaple | NONE | null | null | 5 | 2 | 2 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6951 | false | [
"@xianbaoqian ",
"Feel free to open a PR in `m-a-p/COIG-CQIA` to define a default subset. Currently there is no default.\r\n\r\nYou can find some documentation at https://huggingface.co/docs/hub/datasets-manual-configuration#multiple-configurations",
"@lhoestq \r\n\r\nWhilst having a default subset readily avai... |
2,333,005,974 | 6,950 | `Dataset.with_format` behaves inconsistently with documentation | closed | ### Describe the bug
The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.
https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays
> If your dataset consists of ... | true | 2024-06-04T09:18:32Z | 2024-06-25T08:05:49Z | 2024-06-25T08:05:49Z | iansheng | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [
"documentation"
] | https://github.com/huggingface/datasets/issues/6950 | false | [
"Hi ! It seems the documentation was outdated in this paragraph\r\n\r\nI fixed it here: https://github.com/huggingface/datasets/pull/6956",
"Fixed."
] |
2,332,336,573 | 6,949 | load_dataset error | closed | ### Describe the bug
Why does the program get stuck when I use the `load_dataset` method, and why is it still stuck after loading for several hours? In fact, my JSON file is only 21 MB, and I can load it in one go using open('', 'r').
### Steps to reproduce the bug
1. pip install datasets==2.19.2
2. from datasets import Data... | true | 2024-06-04T01:24:45Z | 2024-07-01T11:33:46Z | 2024-07-01T11:33:46Z | frederichen01 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6949 | false | [
"Hi, @lion-ops.\r\n\r\nIn our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n\r\nCould you please share your \"train.json\" file, so that we can try to reproduce the issue you have? ",
"> Hi, @lion-ops.\r\n> \r\n> In our Continuous Integration we have many tests ... |
2,331,758,300 | 6,948 | to_tf_dataset: Visible devices cannot be modified after being initialized | open | ### Describe the bug
When trying to use `to_tf_dataset` with a custom data_loader `collate_fn` and parallelism, I am met with the following error once for each worker in ``num_workers``.
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _b... | true | 2024-06-03T18:10:57Z | 2024-06-03T18:10:57Z | null | logasja | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6948 | false | [] |
2,331,114,055 | 6,947 | FileNotFoundError:error when loading C4 dataset | closed | ### Describe the bug
can't load c4 datasets
When I downgrade the datasets package to 2.12.2, I get `raise datasets.utils.info_utils.ExpectedMoreSplits: {'train'}`
How can I fix this?
### Steps to reproduce the bug
1.from datasets import load_dataset
2.dataset = load_dataset('allenai/c4', data_files={'validat... | true | 2024-06-03T13:06:33Z | 2024-06-25T06:21:28Z | 2024-06-25T06:21:28Z | W-215 | NONE | null | null | 15 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6947 | false | [
"same problem here",
"Hello,\r\n\r\nAre you sure you are really using datasets version 2.19.2? We just made the patch release yesterday specifically to fix this issue:\r\n- #6925\r\n\r\nI can't reproduce the error:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset('allenai... |
2,330,276,848 | 6,946 | Re-enable import sorting disabled by flake8:noqa directive when using ruff linter | closed | Re-enable import sorting that was wrongly disabled by `flake8: noqa` directive after switching to `ruff` linter in datasets-2.10.0 PR:
- #5519
Note that after the linter switch, we wrongly replaced `flake8: noqa` with `ruff: noqa` in datasets-2.17.0 PR:
- #6619
That replacement was wrong because we kept the `is... | true | 2024-06-03T06:24:47Z | 2024-06-04T10:00:08Z | 2024-06-04T09:54:23Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6946 | 2024-06-04T09:54:23Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6946 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6946). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,330,224,869 | 6,945 | Update yanked version of minimum requests requirement | closed | Update yanked version of minimum requests requirement.
Version 2.32.1 was yanked: https://pypi.org/project/requests/2.32.1/ | true | 2024-06-03T05:45:50Z | 2024-06-18T07:36:15Z | 2024-06-03T06:09:43Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6945 | 2024-06-03T06:09:43Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6945 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6945). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,330,207,120 | 6,944 | Set dev version | closed | true | 2024-06-03T05:29:59Z | 2024-06-03T05:37:51Z | 2024-06-03T05:31:47Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6944 | 2024-06-03T05:31:46Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6944 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6944). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,330,176,890 | 6,943 | Release 2.19.2 | closed | true | 2024-06-03T05:01:50Z | 2024-06-03T05:17:41Z | 2024-06-03T05:17:40Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6943 | 2024-06-03T05:17:40Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6943 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6943). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,329,562,382 | 6,942 | Import sorting is disabled by flake8 noqa directive after switching to ruff linter | closed | When we switched to `ruff` linter in PR:
- #5519
import sorting was disabled in all files containing the `# flake8: noqa` directive
- https://github.com/astral-sh/ruff/issues/11679
We should re-enable import sorting on those files. | true | 2024-06-02T09:43:34Z | 2024-06-04T09:54:24Z | 2024-06-04T09:54:24Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"maintenance"
] | https://github.com/huggingface/datasets/issues/6942 | false | [] |
2,328,930,165 | 6,941 | Supporting FFCV: Fast Forward Computer Vision | open | ### Feature request
Supporting FFCV, https://github.com/libffcv/ffcv
### Motivation
According to the benchmark, FFCV seems to be fastest image loading method.
### Your contribution
no | true | 2024-06-01T05:34:52Z | 2024-06-01T05:34:52Z | null | Luciennnnnnn | NONE | null | null | 0 | 1 | 1 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6941 | false | [] |
2,328,637,831 | 6,940 | Enable Sharding to Equal Sized Shards | open | ### Feature request
Add an option when sharding a dataset to have all shards the same size. It would be good to provide both options: by duplication and by truncation.
### Motivation
Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining sha... | true | 2024-05-31T21:55:50Z | 2024-06-01T07:34:12Z | null | yuvalkirstain | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6940 | false | [] |
2,328,059,386 | 6,939 | ExpectedMoreSplits error when using data_dir | closed | As reported by @regisss, an `ExpectedMoreSplits` error is raised when passing `data_dir`:
```python
from datasets import load_dataset
dataset = load_dataset(
"lvwerra/stack-exchange-paired",
split="train",
cache_dir=None,
data_dir="data/rl",
)
```
```
Traceback (most recent call last):
F... | true | 2024-05-31T15:08:42Z | 2024-05-31T17:10:39Z | 2024-05-31T17:10:39Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6939 | false | [] |
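The `ExpectedMoreSplits` failure reported above comes from a split-verification step: the splits recorded for the dataset are compared against the splits actually resolved from `data_dir`/`data_files`. A minimal sketch of that check (assumed logic, not the exact `datasets` code):

```python
class ExpectedMoreSplits(ValueError):
    """Raised when resolved splits are missing some expected splits."""

def verify_splits(expected_splits, recorded_splits):
    """Compare expected vs. resolved split names; raise if any
    expected split was not found in the resolved data files."""
    missing = set(expected_splits) - set(recorded_splits)
    if missing:
        raise ExpectedMoreSplits(str(missing))

# Passing data_dir="data/rl" resolves only a subset of the repo's
# files, so the recorded splits can come up short of the expected ones.
verify_splits({"train"}, {"train", "validation"})  # ok: nothing missing
```

The fix in PR #6938/#6925 amounts to recomputing the expected splits from the user-provided `data_dir`/`data_files` rather than from the full repo, so the two sides of this comparison agree again.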
2,327,568,281 | 6,938 | Fix expected splits when passing data_files or dir | closed | reported on slack:
The following code snippet gives an error with v2.19 but not with v2.18:
```
from datasets import load_dataset
dataset = load_dataset(
"lvwerra/stack-exchange-paired",
split="train",
cache_dir=None,
data_dir="data/rl",
)
```
and the error is:
```
Traceback (most recent ... | true | 2024-05-31T11:04:22Z | 2024-05-31T15:28:03Z | 2024-05-31T15:28:02Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6938 | null | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6938 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6938). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"fix is included in https://github.com/huggingface/datasets/pull/6925"
] |
2,327,212,611 | 6,937 | JSON loader implicitly coerces floats to integers | open | The JSON loader implicitly coerces floats to integers.
The column values `[0.0, 1.0, 2.0]` are coerced to `[0, 1, 2]`.
See CI error in dataset-viewer: https://github.com/huggingface/dataset-viewer/actions/runs/9290164936/job/25576926446
```
=================================== FAILURES ===========================... | true | 2024-05-31T08:09:12Z | 2025-06-24T05:49:20Z | null | albertvillanova | MEMBER | null | null | 1 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6937 | false | [
"Hi @albertvillanova, I'd like to work on this issue if it's still open!\n\nFrom what I see, the float-to-int coercion happens during JSON parsing, possibly due to recent `pandas` behavior. I'll investigate the loading logic inside `json.py` and ensure float values like `[0.0, 1.0, 2.0]` retain their type throughou... |
2,326,119,853 | 6,936 | save_to_disk() freezes when saving on s3 bucket with multiprocessing | open | ### Describe the bug
I'm trying to save a `Dataset` using the `save_to_disk()` function with:
- `num_proc > 1`
- `dataset_path` being a s3 bucket path e.g. "s3://{bucket_name}/{dataset_folder}/"
The hf progress bar shows up but the saving does not seem to start.
When using one processor only (`num_proc=1`), e... | true | 2024-05-30T16:48:39Z | 2025-02-06T22:12:52Z | null | ycattan | NONE | null | null | 3 | 1 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6936 | false | [
"I got the same issue. Any updates so far for this issue?",
"Same here. Any updates?",
"+1, experiencing this as well"
] |
2,325,612,022 | 6,935 | Support for pathlib.Path in datasets 2.19.0 | open | ### Describe the bug
After the recent update of `datasets`, Dataset.save_to_disk does not accept a pathlib.Path anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle?
### Steps to reproduce the bug
```
from datasets impor... | true | 2024-05-30T12:53:36Z | 2025-01-14T11:50:22Z | null | lamyiowce | NONE | null | null | 2 | 6 | 6 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6935 | false | [
"+1 I just noticed this when I tried to update `datasets` today.",
"The same issue, I also get error."
] |
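A common workaround for the regression described above is to pass `str(path)` at the call site. The path normalization the reporter is asking for can be sketched in a few lines; the helper name `normalize_dataset_path` is an assumption for illustration, not the library's API:

```python
import os
from pathlib import Path

def normalize_dataset_path(dataset_path) -> str:
    """Accept both str and os.PathLike arguments, converting the
    latter via the fspath protocol (a sketch of the requested fix,
    not actual datasets code)."""
    if isinstance(dataset_path, os.PathLike):
        return os.fspath(dataset_path)
    return dataset_path

print(normalize_dataset_path(Path("my_dataset")))  # my_dataset
```

Applying such a conversion at the top of `save_to_disk` would restore the 2.18.0 behavior for `pathlib.Path` callers without changing the string-path code path.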
2,325,341,717 | 6,934 | Revert ci user | closed | true | 2024-05-30T10:45:26Z | 2024-05-31T10:25:08Z | 2024-05-30T10:45:37Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6934 | 2024-05-30T10:45:37Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6934 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6934). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,325,300,800 | 6,933 | update ci user | closed | token is ok to be public since it's only for the hub-ci | true | 2024-05-30T10:23:02Z | 2024-05-30T10:30:54Z | 2024-05-30T10:23:12Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6933 | 2024-05-30T10:23:12Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6933 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6933). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,324,729,267 | 6,932 | Update dataset_dict.py | closed | shape returns (number of rows, number of columns) | true | 2024-05-30T05:22:35Z | 2024-06-04T12:56:20Z | 2024-06-04T12:50:13Z | Arunprakash-A | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6932 | 2024-06-04T12:50:13Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6932 | true | [
"thanks !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_bat... |
2,323,457,525 | 6,931 | [WebDataset] Support compressed files | closed | true | 2024-05-29T14:19:06Z | 2024-05-29T16:33:18Z | 2024-05-29T16:24:21Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6931 | 2024-05-29T16:24:21Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6931 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6931). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... | |
2,323,225,922 | 6,930 | ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} | open | ### Describe the bug
When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'valid... | true | 2024-05-29T12:40:05Z | 2024-07-23T06:25:24Z | null | Polarisamoon | NONE | null | null | 2 | 1 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6930 | false | [
"How do you solve it ?\r\n",
"> How do you solve it ?\r\n\r\nPlease check your Python environment and dataset version. I have just resolved the issue, which was caused by a Python environment switching error\r\n"
] |
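The `ValueError` quoted in issue 6,930 comes from a consistency check over the per-split inferred file formats. A minimal sketch of that check (illustrative only, not the library's actual module-inference code — `check_uniform_format` is an invented name):

```python
def check_uniform_format(split_modules: dict) -> str:
    """Raise when splits disagree on the inferred data file format.

    Each split maps to an (inferred_format, kwargs) pair; loading fails
    if any split's format is missing (None) or different from the rest.
    """
    formats = {fmt for fmt, _ in split_modules.values()}
    if None in formats or len(formats) > 1:
        raise ValueError(
            f"Couldn't infer the same data file format for all splits. Got {split_modules}"
        )
    return formats.pop()
```

With `{'train': ('json', {}), 'validation': (None, {})}` as in the report, the set of formats contains `None`, so the check raises the exact error shown above.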
2,322,980,077 | 6,929 | Avoid downloading the whole dataset when only README.me has been touched on hub. | open | ### Feature request
`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if data files / parquet files are the exact same.
I think the current behaviour of the load_dataset function is triggered whenever a change of the hash o... | true | 2024-05-29T10:36:06Z | 2024-05-29T20:51:56Z | null | zinc75 | NONE | null | null | 2 | 1 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6929 | false | [
"you're right, we're tackling this here: https://github.com/huggingface/dataset-viewer/issues/2757",
"@severo : great !"
] |
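The request in issue 6,929 can be illustrated with a fingerprint that skips README.md, so documentation-only edits leave the cache key unchanged. This is a hypothetical sketch of the idea, not datasets' actual caching logic (`data_fingerprint` is an invented helper):

```python
import hashlib
from pathlib import Path

def data_fingerprint(repo_dir: str) -> str:
    # Hash every file except README.md, in deterministic path order,
    # so touching only the README does not change the fingerprint
    # and hence would not trigger a re-download.
    h = hashlib.sha256()
    for path in sorted(Path(repo_dir).rglob("*")):
        if path.is_file() and path.name != "README.md":
            h.update(str(path.relative_to(repo_dir)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()
```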
2,322,267,727 | 6,928 | Update process.mdx: Code Listings Fixes | closed | true | 2024-05-29T03:17:07Z | 2024-06-04T13:08:19Z | 2024-06-04T12:55:00Z | FadyMorris | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6928 | 2024-06-04T12:55:00Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6928 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | |
2,322,260,725 | 6,927 | Update process.mdx: Minor Code Listings Updates and Fixes | closed | true | 2024-05-29T03:09:01Z | 2024-05-29T03:12:46Z | 2024-05-29T03:12:46Z | FadyMorris | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6927 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6927 | true | [] | |
2,322,164,287 | 6,926 | Update process.mdx: Fix code listing in Shard section | closed | true | 2024-05-29T01:25:55Z | 2024-05-29T03:11:20Z | 2024-05-29T03:11:08Z | FadyMorris | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6926 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6926 | true | [] | |
2,321,084,967 | 6,925 | Fix NonMatchingSplitsSizesError/ExpectedMoreSplits when passing data_dir/data_files in no-code Hub datasets | closed | Fix `NonMatchingSplitsSizesError` or `ExpectedMoreSplits` error for no-code Hub datasets if the user passes:
- `data_dir`
- `data_files`
The proposed solution is to avoid using exported dataset info (from Parquet exports) in these cases.
Additionally, also if the user passes `revision` other than "main" (so that ... | true | 2024-05-28T13:33:38Z | 2024-11-07T20:41:58Z | 2024-05-31T17:10:37Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6925 | 2024-05-31T17:10:37Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6925 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6925). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Do you think this is worth making a patch release for?\r\nCC: @huggingface/datasets",
... |
2,320,531,015 | 6,924 | Caching map result of DatasetDict. | open | Hi!
I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins.
Changing num_proc induces recomputation of the map; I'm not sure why, or if this is expected behavior?
here it says that cached files are loaded sequentially:
https://github.com/... | true | 2024-05-28T09:07:41Z | 2024-05-28T09:07:41Z | null | MostHumble | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6924 | false | [] |
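A hedged sketch of why changing num_proc can invalidate a map cache: if the cache key is derived from the mapped function plus the map() arguments (as the report in issue 6,924 suggests), any argument change yields a new fingerprint even when the transform is identical. `map_fingerprint` is an invented illustration, not the datasets API:

```python
import hashlib
import json

def map_fingerprint(func, **map_kwargs) -> str:
    # Invented helper: derive a cache key from the function's compiled
    # bytecode plus the map() keyword arguments. If num_proc is folded
    # into the key, changing it forces recomputation of the map result.
    h = hashlib.sha256(func.__code__.co_code)
    h.update(json.dumps(map_kwargs, sort_keys=True, default=str).encode())
    return h.hexdigest()
```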
2,319,292,872 | 6,923 | Export Parquet Tablet Audio-Set is null bytes in Arrow | open | ### Describe the bug
Exporting the processed audio inside the table with the dataset.to_parquet function produces the pyarrow object {bytes: null, path: "Some/Path"}
At the same time, the same dataset uploaded to the hub has bit arrays
2,318,394,398 | 6,921 | Support fsspec 2024.5.0 | closed | Support fsspec 2024.5.0. | true | 2024-05-27T07:00:59Z | 2024-05-27T08:07:16Z | 2024-05-27T08:01:08Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6921 | 2024-05-27T08:01:08Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6921 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6921). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,317,648,021 | 6,920 | [WebDataset] Add `.pth` support for torch tensors | closed | In this PR I add support for `.pth` but with `weights_only=True` to disallow the use of pickle | true | 2024-05-26T11:12:07Z | 2024-05-27T09:11:17Z | 2024-05-27T09:04:54Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6920 | 2024-05-27T09:04:54Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6920 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6920). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,315,618,993 | 6,919 | Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python/tuple> | open | ### Describe the bug
I wrote a notebook to load an existing dataset, process it, and upload as a private dataset using `dataset.push_to_hub(...)` at the end. The push to hub is failing with:
```
ValueError: Invalid metadata in README.md.
- Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python[/tuple](... | true | 2024-05-24T14:59:45Z | 2024-05-24T14:59:45Z | null | juanqui | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6919 | false | [] |
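The `!<tag:yaml.org,2002:python/tuple>` tag in the error above appears when tuple values reach the YAML serializer for the README metadata. One hedged workaround is to convert tuples to lists before pushing; `detuple` is a hypothetical helper, not part of `datasets`:

```python
def detuple(obj):
    # Recursively replace tuples with lists so a YAML dump emits plain
    # sequences instead of the unloadable python/tuple tag.
    if isinstance(obj, (list, tuple)):
        return [detuple(v) for v in obj]
    if isinstance(obj, dict):
        return {k: detuple(v) for k, v in obj.items()}
    return obj
```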
2,315,322,738 | 6,918 | NonMatchingSplitsSizesError when using data_dir | closed | ### Describe the bug
Loading a dataset with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset.
This appears to happen because the expected split is calculated based on the data in all the directories whereas the recorded split is calculated based on t... | true | 2024-05-24T12:43:39Z | 2024-05-31T17:10:38Z | 2024-05-31T17:10:38Z | srehaag | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6918 | false | [
"Thanks for reporting, @srehaag.\r\n\r\nWe are investigating this issue.",
"I confirm there is a bug for data-based Hub datasets when the user passes `data_dir`, which was introduced by PR:\r\n- #6714"
] |
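A simplified sketch of the verification step behind NonMatchingSplitsSizesError: expected sizes come from the dataset's recorded metadata (computed over all data directories), while recorded sizes reflect only the files selected by data_dir, so selecting a subset makes them diverge. Illustrative only, not the library's real implementation:

```python
def verify_split_sizes(expected: dict, recorded: dict) -> None:
    # Compare split sizes declared in the dataset's metadata with the
    # sizes actually produced from the selected data files.
    bad = {
        name: {"expected": size, "recorded": recorded.get(name)}
        for name, size in expected.items()
        if recorded.get(name) != size
    }
    if bad:
        raise ValueError(f"NonMatchingSplitsSizesError: {bad}")
```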
2,314,683,663 | 6,917 | WinError 32 The process cannot access the file during load_dataset | open | ### Describe the bug
When I try to load the opus_books dataset from Hugging Face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation))
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "tran... | true | 2024-05-24T07:54:51Z | 2024-05-24T07:54:51Z | null | elwe-2808 | NONE | null | null | 0 | 2 | 2 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6917 | false | [] |
2,311,675,564 | 6,916 | ```push_to_hub()``` - Prevent Automatic Generation of Splits | closed | ### Describe the bug
I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a testing and training set. How can I prevent the split from happening?
### Steps to reproduce the bug
1. Have a unsplit dataset
```python
Dataset({ featur... | true | 2024-05-22T23:52:15Z | 2024-05-23T00:07:53Z | 2024-05-23T00:07:53Z | jetlime | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6916 | false | [] |
2,310,564,961 | 6,915 | Validate config name and data_files in packaged modules | closed | Validate the config attributes `name` and `data_files` in packaged modules by making the derived classes call their parent `__post_init__` method.
Note that their parent `BuilderConfig` validates its attributes `name` and `data_files` in its `__post_init__` method: https://github.com/huggingface/datasets/blob/60d21e... | true | 2024-05-22T13:36:33Z | 2024-06-06T09:32:10Z | 2024-06-06T09:24:35Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6915 | 2024-06-06T09:24:35Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6915 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6915). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I pushed a change that fixes 2.15 cache reloading (I fixed the packaged module hash), f... |