| id | number | title | state | body | is_pull_request | created_at | updated_at | closed_at | user_login | author_association | pr_url | pr_merged_at | comments_count | reactions_total | reactions_plus1 | reactions_heart | draft | locked | labels | html_url | is_pr_url | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,871,582,175 | 6,190 | `Invalid user token` even when correct user token is passed! | closed | ### Describe the bug
I'm working on a dataset which comprises other datasets on the hub.
URL: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only
Note: Some of the sub-datasets in this metadataset require explicit access.
All the other datasets work fine except `common_voice`.
### Steps t... | true | 2023-08-29T12:37:03Z | 2023-08-29T13:01:10Z | 2023-08-29T13:01:09Z | Vaibhavs10 | MEMBER | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6190 | false | [
"This is because `download_config.use_auth_token` is deprecated - you should use `download_config.token` instead",
"Works! Thanks for the quick fix! <3"
] |
1,871,569,855 | 6,189 | Don't alter input in Features.from_dict | closed | true | 2023-08-29T12:29:47Z | 2023-08-29T13:04:59Z | 2023-08-29T12:52:48Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6189 | 2023-08-29T12:52:48Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6189 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,870,987,640 | 6,188 | [Feature Request] Check the length of batch before writing so that empty batch is allowed | closed | ### Use Case
I use `dataset.map(process_fn, batched=True)` to process the dataset, with data **augmentations or filtering**. However, when all examples within a batch are filtered out, i.e. **an empty batch is returned**, the following error will be thrown:
```
ValueError: Schema and number of arrays unequal
`... | true | 2023-08-29T06:37:34Z | 2023-09-19T21:55:38Z | 2023-09-19T21:55:37Z | namespace-Pt | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6188 | false | [
"I think this error means you filter all examples within an (input) batch by deleting its columns. In that case, to avoid the error, you can set the column value to an empty list (`input_batch[\"col\"] = []`) instead."
] |
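The workaround from the reply above — always return every expected column, even when a batch filters down to nothing — can be sketched without the library. The `process_fn` and its filter below are illustrative, not from the issue:

```python
# Hypothetical sketch of the suggested workaround: when a batched map
# filters out every example, return the same columns with empty lists
# instead of dropping them, so the writer still sees a consistent schema.

def process_fn(batch):
    # keep only texts longer than 5 characters (illustrative filter)
    kept = [t for t in batch["text"] if len(t) > 5]
    # always return every expected column, even when nothing survives
    return {"text": kept}

full = process_fn({"text": ["hello world", "greetings"]})
empty = process_fn({"text": ["hi", "ok"]})  # everything filtered out
```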
1,870,936,143 | 6,187 | Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory | open | ### Describe the bug
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-48-6a7b3e847019>](https://localhost:8080/#) in <cell line: 7>()
5 }
6
----> 7 csv_datasets_reloaded = load_... | true | 2023-08-29T05:49:56Z | 2023-08-29T16:21:45Z | null | andysingal | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6187 | false | [
"Hi! You can load this dataset with:\r\n```python\r\ndata_files = {\r\n \"train\": \"/content/PUBHEALTH/train.tsv\",\r\n \"validation\": \"/content/PUBHEALTH/dev.tsv\",\r\n \"test\": \"/content/PUBHEALTH/test.tsv\",\r\n}\r\n\r\ntsv_datasets_reloaded = load_dataset(\"csv\", data_files=data_files, sep=\"\\t\... |
1,869,431,457 | 6,186 | Feature request: add code example of multi-GPU processing | closed | ### Feature request
Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu
Currently the docs has a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here", however it didn't work f... | true | 2023-08-28T10:00:59Z | 2024-10-07T09:39:51Z | 2023-11-22T15:42:20Z | NielsRogge | CONTRIBUTOR | null | null | 18 | 0 | 0 | 0 | null | false | [
"documentation",
"enhancement"
] | https://github.com/huggingface/datasets/issues/6186 | false | [
"That'd be a great idea! @mariosasko or @lhoestq, would it be possible to fix the code snippet or do you have another suggested way for doing this?",
"Indeed `if __name__ == \"__main__\"` is important in this case.\r\n\r\nNot sure about the imbalanced GPU usage though, but maybe you can try using the `torch.cuda.... |
1,868,077,748 | 6,185 | Error in saving the PIL image into *.arrow files using datasets.arrow_writer | open | ### Describe the bug
I am using the ArrowWriter from datasets.arrow_writer to save a json-style file as arrow files. The dictionary contains a feature called "image" which is a list of PIL.Image objects.
I am saving the json using the following script:
```
def save_to_arrow(path,temp):
with ArrowWri... | true | 2023-08-26T12:15:57Z | 2023-08-29T14:49:58Z | null | HaozheZhao | NONE | null | null | 1 | 1 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6185 | false | [
"You can cast the `input_image` column to the `Image` type to fix the issue:\r\n```python\r\nds.cast_column(\"input_image\", datasets.Image())\r\n```"
] |
1,867,766,143 | 6,184 | Map cache does not detect function changes in another module | closed | ```python
# dataset.py
import os
import datasets
if not os.path.exists('/tmp/test.json'):
with open('/tmp/test.json', 'w') as file:
file.write('[{"text": "hello"}]')
def transform(example):
text = example['text']
# text += ' world'
return {'text': text}
data = datasets.load_dataset('json', ... | true | 2023-08-25T22:59:14Z | 2023-08-29T20:57:07Z | 2023-08-29T20:56:49Z | jonathanasdf | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [
"duplicate"
] | https://github.com/huggingface/datasets/issues/6184 | false | [
"This issue is a duplicate of https://github.com/huggingface/datasets/issues/3297. This is a limitation of `dill`, a package we use for caching (non-`__main__` module objects are serialized by reference). You can find more info about it here: https://github.com/uqfoundation/dill/issues/424.\r\n\r\nIn your case, mo... |
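The dill limitation quoted above (objects outside `__main__` are serialized by reference, so edits in another module go unnoticed) can be contrasted with a value-based fingerprint in plain Python. Everything here — `code_fingerprint` and the two `transform` variants — is a hypothetical sketch, not the library's hashing code:

```python
import hashlib

def code_fingerprint(fn):
    # hash the compiled bytecode plus constants, so edits to the function
    # body change the fingerprint even for functions defined in other modules
    payload = fn.__code__.co_code + repr(fn.__code__.co_consts).encode()
    return hashlib.sha256(payload).hexdigest()

def transform_v1(example):
    return {"text": example["text"]}

def transform_v2(example):  # the edited version from the reproducer
    return {"text": example["text"] + " world"}

fp_v1 = code_fingerprint(transform_v1)
fp_v2 = code_fingerprint(transform_v2)
```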
1,867,743,276 | 6,183 | Load dataset with non-existent file | closed | ### Describe the bug
When loading a dataset from datasets and passing a wrong path to the JSON with the data, the error message does not mention anything about a "wrong path" or "file does not exist" -
```SchemaInferenceError: Please pass `features` or at least one example when writing data```
### Steps to reproduce the bug
... | true | 2023-08-25T22:21:22Z | 2023-08-29T13:26:22Z | 2023-08-29T13:26:22Z | freQuensy23-coder | NONE | null | null | 2 | 1 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6183 | false | [
"Same problem",
"This was fixed in https://github.com/huggingface/datasets/pull/6155, which will be included in the next release (or you can install `datasets` from source to use it immediately)."
] |
1,867,203,131 | 6,182 | Loading Meteor metric in HF evaluate module crashes due to datasets import issue | closed | ### Describe the bug
When using python3.9 and ```evaluate``` module loading Meteor metric crashes at a non-existent import from ```datasets.config``` in ```datasets v2.14```
### Steps to reproduce the bug
```
from evaluate import load
meteor = load("meteor")
```
produces the following error:
```
from d... | true | 2023-08-25T14:54:06Z | 2023-09-04T16:41:11Z | 2023-08-31T14:38:23Z | dsashulya | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6182 | false | [
"Our minimal Python version requirement is 3.8, so we dropped `importlib_metadata`. \r\n\r\nFeel free to open a PR in the `evaluate` repo to replace the problematic import with\r\n```python\r\nif PY_VERSION < version.parse(\"3.8\"):\r\n import importlib_metadata\r\nelse:\r\n import importlib.metadata as impor... |
1,867,035,522 | 6,181 | Fix import in `image_load` doc | closed | Reported on [Discord](https://discord.com/channels/879548962464493619/1144295822209581168/1144295822209581168) | true | 2023-08-25T13:12:19Z | 2023-08-25T16:12:46Z | 2023-08-25T16:02:24Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6181 | 2023-08-25T16:02:24Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6181 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,867,032,578 | 6,180 | Use `hf-internal-testing` repos for hosting test dataset repos | closed | Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos. | true | 2023-08-25T13:10:26Z | 2023-08-25T16:58:02Z | 2023-08-25T16:46:22Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6180 | 2023-08-25T16:46:22Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6180 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,867,009,016 | 6,179 | Map cache with tokenizer | open | Similar issue to https://github.com/huggingface/datasets/issues/5985, but across different sessions rather than two calls in the same session.
Unlike that issue, explicitly calling tokenizer(my_args) before the map() doesn't help, because the tokenizer was created with a different hash to begin with...
setup
```... | true | 2023-08-25T12:55:18Z | 2023-08-31T15:17:24Z | null | jonathanasdf | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6179 | false | [
"https://github.com/huggingface/datasets/issues/5147 may be a solution, by passing in the tokenizer in a fn_kwargs and ignoring it in the fingerprint calculations",
"I have a similar issue. I was using a Jupyter Notebook and every time I call the map function it performs tokenization from scratch again although t... |
1,866,610,102 | 6,178 | 'import datasets' throws "invalid syntax error" | closed | ### Describe the bug
Hi,
I have been trying to import the datasets library but I keep getting this error.
`Traceback (most recent call last):
File /opt/local/jupyterhub/lib64/python3.9/site-packages/IPython/core/interactiveshell.py:3508 in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
... | true | 2023-08-25T08:35:14Z | 2023-09-27T17:33:39Z | 2023-09-27T17:33:39Z | elia-ashraf | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6178 | false | [
"This seems to be related to your environment and not the `datasets` code (e.g., this could happen when exposing the Python 3.9 site packages to a lower Python version (interpreter))"
] |
1,865,490,962 | 6,177 | Use object detection images from `huggingface/documentation-images` | closed | true | 2023-08-24T16:16:09Z | 2023-08-25T16:30:00Z | 2023-08-25T16:21:17Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6177 | 2023-08-25T16:21:17Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6177 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,864,436,408 | 6,176 | how to limit the size of memory mapped file? | open | ### Describe the bug
Huggingface datasets use memory-mapped files to map large datasets in memory for fast access.
However, it seems like huggingface will occupy all the memory for memory-mapped files, which creates a troublesome situation since the cluster will only distribute a small portion of memory to me (once it's over ... | true | 2023-08-24T05:33:45Z | 2023-10-11T06:00:10Z | null | williamium3000 | NONE | null | null | 6 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6176 | false | [
"Hi! Can you share the error this reproducer throws in your environment? `streaming=True` streams the dataset as it's iterated over without creating a memory-map file.",
"The trace of the error. Streaming works but is slower.\r\n```\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 2023-08-24_06:06... |
1,863,592,678 | 6,175 | PyArrow 13 CI fixes | closed | Fixes:
* bumps the PyArrow version check in the `cast_array_to_feature` to avoid the offset bug (still not fixed)
* aligns the Pandas formatting tests with the Numpy ones (the current test fails due to https://github.com/apache/arrow/pull/35656, which requires `.to_pandas(coerce_temporal_nanoseconds=True)` to always ... | true | 2023-08-23T15:45:53Z | 2023-08-25T13:15:59Z | 2023-08-25T13:06:52Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6175 | 2023-08-25T13:06:52Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6175 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,863,422,065 | 6,173 | Fix CI for pyarrow 13.0.0 | closed | pyarrow 13.0.0 just came out
```
FAILED tests/test_formatting.py::ArrowExtractorTest::test_pandas_extractor - AssertionError: Attributes of Series are different
Attribute "dtype" are different
[left]: datetime64[us, UTC]
[right]: datetime64[ns, UTC]
```
```
FAILED tests/test_table.py::test_cast_sliced_fi... | true | 2023-08-23T14:11:20Z | 2023-08-25T13:06:53Z | 2023-08-25T13:06:53Z | lhoestq | MEMBER | null | null | 0 | 1 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6173 | false | [] |
1,863,318,027 | 6,172 | Make Dataset streaming queries retryable | open | ### Feature request
Streaming datasets, as intended, do not load the entire dataset in memory or disk. However, while querying the next data chunk from the remote, sometimes it is possible that the service is down or there might be other issues that may cause the query to fail. In such a scenario, it would be nice to ... | true | 2023-08-23T13:15:38Z | 2023-11-06T13:54:16Z | null | rojagtap | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6172 | false | [
"Hi! The streaming mode also retries requests - `datasets.config.STREAMING_READ_MAX_RETRIES` (20 sec by default) controls the number of retries and `datasets.config.STREAMING_READ_RETRY_INTERVAL` (5 sec) the sleep time between retries.\r\n\r\n> At step 1800 I got a 504 HTTP status code error from Huggingface hub fo... |
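The retry behavior described in the reply can be sketched generically. Only the config names `STREAMING_READ_MAX_RETRIES` and `STREAMING_READ_RETRY_INTERVAL` come from the comment above; the helper itself is illustrative, not the `datasets` implementation:

```python
import time

# Illustrative stand-ins for datasets.config.STREAMING_READ_MAX_RETRIES
# and datasets.config.STREAMING_READ_RETRY_INTERVAL (values are ours).
MAX_RETRIES = 3
RETRY_INTERVAL = 0.0  # seconds; the library sleeps a few seconds by default

def read_with_retry(read_fn, max_retries=MAX_RETRIES, interval=RETRY_INTERVAL):
    # retry a remote read a bounded number of times before giving up
    last_error = None
    for _ in range(max_retries):
        try:
            return read_fn()
        except OSError as err:  # e.g. a transient 504 from the remote
            last_error = err
            time.sleep(interval)
    raise last_error

attempts = {"n": 0}

def flaky_read():
    # fails twice, then succeeds, to exercise the retry loop
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise OSError("504 Gateway Timeout")
    return b"chunk"

result = read_with_retry(flaky_read)
```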
1,862,922,767 | 6,171 | Fix typo in about_mapstyle_vs_iterable.mdx | closed | true | 2023-08-23T09:21:11Z | 2023-08-23T09:32:59Z | 2023-08-23T09:21:19Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6171 | 2023-08-23T09:21:19Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6171 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6171). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | |
1,862,705,731 | 6,170 | feat: Return the name of the currently loaded file | open | Added an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output.
I added this here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/js... | true | 2023-08-23T07:08:17Z | 2023-08-29T12:41:05Z | null | Amitesh-Patel | NONE | https://github.com/huggingface/datasets/pull/6170 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6170 | true | [
"Your change adds a new element in the key used to avoid duplicates when generating the examples of a dataset. I don't think it fixes the issue you're trying to solve."
] |
1,862,360,199 | 6,169 | Configurations in yaml not working | open | ### Dataset configurations cannot be created in YAML/README
Hello! I'm trying to follow the docs here in order to create structure in my dataset as added from here (#5331): https://github.com/huggingface/datasets/blob/8b8e6ee067eb74e7965ca2a6768f15f9398cb7c8/docs/source/repository_structure.mdx#L110-L118
I have t... | true | 2023-08-23T00:13:22Z | 2023-08-23T15:35:31Z | null | tsor13 | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6169 | false | [
"Unfortunately, I cannot reproduce this behavior on my machine or Colab - the reproducer returns `['main_data', 'additional_data']` as expected.",
"Thank you for looking into this, Mario. Is this on [my repository](https://huggingface.co/datasets/tsor13/test), or on another one that you have reproduced? Would you... |
1,861,867,274 | 6,168 | Fix ArrayXD YAML conversion | closed | Replace the `shape` tuple with a list in the `ArrayXD` YAML conversion.
Fix #6112 | true | 2023-08-22T17:02:54Z | 2023-12-12T15:06:59Z | 2023-12-12T15:00:43Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6168 | 2023-12-12T15:00:43Z | 6 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6168 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6168). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
1,861,474,327 | 6,167 | Allow hyphen in split name | closed | To fix https://discuss.huggingface.co/t/error-when-setting-up-the-dataset-viewer-streamingrowserror/51276. | true | 2023-08-22T13:30:59Z | 2024-01-11T06:31:31Z | 2023-08-22T15:38:53Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6167 | null | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6167 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,861,259,055 | 6,166 | Document BUILDER_CONFIG_CLASS | closed | Related to https://github.com/huggingface/datasets/issues/6130 | true | 2023-08-22T11:27:41Z | 2023-08-23T14:01:25Z | 2023-08-23T13:52:36Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6166 | 2023-08-23T13:52:36Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6166 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,861,124,284 | 6,165 | Fix multiprocessing with spawn in iterable datasets | closed | The "Spawn" method is preferred when multiprocessing on macOS or Windows systems, instead of the "Fork" method on linux systems.
This causes some methods of Iterable Datasets to break when using a dataloader with more than 0 workers.
I fixed the issue by replacing lambda and local methods which are not pickle-abl... | true | 2023-08-22T10:07:23Z | 2023-08-29T13:27:14Z | 2023-08-29T13:18:11Z | bruno-hays | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6165 | 2023-08-29T13:18:11Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6165 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq \r\nA test is failing, but I don't think it is due to my changes",
"Good catch ! Could you add a test to make sure transformed IterableDataset objects are still picklable ?\r\n\r\nSomething like `test_pickle_after_many_tra... |
1,859,560,007 | 6,164 | Fix: Missing a MetadataConfigs init when the repo has a `datasets_info.json` but no README | closed | When I try to push to an arrow repo (can provide the link on Slack), it uploads the files but fails to update the metadata, with
```
File "app.py", line 123, in add_new_eval
eval_results[level].push_to_hub(my_repo, token=TOKEN, split=SPLIT)
File "blabla_my_env_path/lib/python3.10/site-packages/datasets/arro... | true | 2023-08-21T14:57:54Z | 2023-08-21T16:27:05Z | 2023-08-21T16:18:26Z | clefourrier | MEMBER | https://github.com/huggingface/datasets/pull/6164 | 2023-08-21T16:18:26Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6164 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,857,682,241 | 6,163 | Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32 | open | ### Describe the bug
I am getting the following error while I am trying to upload the CSV sheet to train a model. My CSV sheet content is exactly same as shown in the example CSV file in the Auto Train page. Attaching screenshot of error for reference. I have also tried converting the index of the answer that are inte... | true | 2023-08-19T11:34:40Z | 2023-08-21T13:28:16Z | null | shishirCTC | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6163 | false | [
"Answered on the forum [here](https://discuss.huggingface.co/t/error-type-arrowinvalid-details-failed-to-parse-string-254-254-as-a-scalar-of-type-int32/51323)."
] |
1,856,198,342 | 6,162 | load_dataset('json',...) from togethercomputer/RedPajama-Data-1T errors when jsonl rows contains different data fields | open | ### Describe the bug
When loading some jsonl from redpajama-data-1T github source [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) fails due to one row of the file containing an extra field called **symlink_target: string>**.
When deleting that line the loading... | true | 2023-08-18T07:19:39Z | 2023-08-18T17:00:35Z | null | rbrugaro | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6162 | false | [
"Hi ! Feel free to open a discussion at https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T/discussions to ask the file to be fixed (or directly open a PR with the fixed file)\r\n\r\n`datasets` expects all the examples to have the same fields",
"@lhoestq I think the problem is caused by the fact th... |
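Since `datasets` expects all examples to have the same fields (per the reply above), one pre-processing workaround — a plain-Python sketch, not part of the library — is to normalize each JSONL row to the union of all observed keys before loading:

```python
import json

# Normalize heterogeneous JSONL rows so every example has the same fields,
# filling missing ones with None. The sample lines below are illustrative.
lines = [
    '{"text": "a", "url": "http://x"}',
    '{"text": "b", "url": "http://y", "symlink_target": "z"}',
]
rows = [json.loads(line) for line in lines]
all_keys = sorted({key for row in rows for key in row})
normalized = [{key: row.get(key) for key in all_keys} for row in rows]
```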
1,855,794,354 | 6,161 | Fix protocol prefix for Beam | closed | Fix #6147 | true | 2023-08-17T22:40:37Z | 2024-03-18T17:01:21Z | 2024-03-18T17:01:21Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6161 | null | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6161 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,855,760,543 | 6,160 | Fix Parquet loading with `columns` | closed | Fix #6149 | true | 2023-08-17T21:58:24Z | 2023-08-17T22:44:59Z | 2023-08-17T22:36:04Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6160 | 2023-08-17T22:36:04Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6160 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,855,691,512 | 6,159 | Add `BoundingBox` feature | open | ... to make working with object detection datasets easier. Currently, `Sequence(int_or_float, length=4)` can be used to represent this feature optimally (in the storage backend), so I only see this feature being useful if we make it work with the viewer. Also, bounding boxes usually come in 4 different formats (explain... | true | 2023-08-17T20:49:51Z | 2024-11-18T17:58:43Z | null | mariosasko | COLLABORATOR | null | null | 1 | 2 | 1 | 1 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6159 | false | [
"My proposal would look like this:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom datasets.features import Sequence, BboxFeature\r\n\r\n# load images along with boxes\r\ndataset = load_dataset(\"imagefolder\", data_dir=\"/path/to/folder\", split=\"train\")\r\n\r\n# map the boxes column to the appropr... |
1,855,374,220 | 6,158 | [docs] Complete `to_iterable_dataset` | closed | Finishes the `to_iterable_dataset` documentation by adding it to the relevant sections in the tutorial and guide. | true | 2023-08-17T17:02:11Z | 2023-08-17T19:24:20Z | 2023-08-17T19:13:15Z | stevhliu | MEMBER | https://github.com/huggingface/datasets/pull/6158 | 2023-08-17T19:13:15Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6158 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,855,265,663 | 6,157 | DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding' | closed | ### Describe the bug
When I called load_dataset, it said "DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'". The second time I ran it, there was no error and the dataset object worked
```python
---------------------------------------------------------------------------
TypeErr... | true | 2023-08-17T15:48:11Z | 2023-09-27T17:36:14Z | 2023-09-27T17:36:14Z | aihao2000 | NONE | null | null | 13 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6157 | false | [
"Thanks for reporting, but we can only fix this issue if you can provide a reproducer that consistently reproduces it.",
"@mariosasko Ok. What exactly does it mean to provide a reproducer",
"To provide a code that reproduces the issue :)",
"@mariosasko I complete the above code, is it enough?",
"@mariosasko... |
1,854,768,618 | 6,156 | Why not use self._epoch as seed to shuffle in distributed training with IterableDataset | closed | ### Describe the bug
Currently, distributed training with `IterableDataset` needs a fixed seed passed to shuffle so that each node uses the same seed and avoids overlapping samples.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177
My question ... | true | 2023-08-17T10:58:20Z | 2023-08-17T14:33:15Z | 2023-08-17T14:33:14Z | npuichigo | CONTRIBUTOR | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6156 | false | [
"@lhoestq ",
"`_effective_generator` returns a RNG that takes into account `self._epoch` and the current dataset's base shuffling RNG (which can be set by specifying `seed=` in `.shuffle() for example`).\r\n\r\nTo fix your error you can pass `seed=` to `.shuffle()`. And the shuffling will depend on both this seed... |
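The idea behind an epoch-aware effective seed can be sketched without the library: combine one base seed (e.g. the one passed to `.shuffle()`) with the epoch number, so every node derives the identical order per epoch without communicating. The function and names below are illustrative:

```python
import random

def epoch_shuffled(indices, base_seed, epoch):
    # every rank mixes the same base seed with the epoch counter,
    # so all nodes agree on the permutation for that epoch
    rng = random.Random(base_seed + epoch)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    return shuffled

order_rank0 = epoch_shuffled(range(8), base_seed=42, epoch=1)
order_rank1 = epoch_shuffled(range(8), base_seed=42, epoch=1)
next_epoch = epoch_shuffled(range(8), base_seed=42, epoch=2)
```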
1,854,661,682 | 6,155 | Raise FileNotFoundError when passing data_files that don't exist | closed | e.g. when running `load_dataset("parquet", data_files="doesnt_exist.parquet")` | true | 2023-08-17T09:49:48Z | 2023-08-18T13:45:58Z | 2023-08-18T13:35:13Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6155 | 2023-08-18T13:35:13Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6155 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,854,595,943 | 6,154 | Use yaml instead of get data patterns when possible | closed | This would make the data files resolution faster: no need to list all the data files to infer the dataset builder to use.
fix https://github.com/huggingface/datasets/issues/6140 | true | 2023-08-17T09:17:05Z | 2023-08-17T20:46:25Z | 2023-08-17T20:37:19Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6154 | 2023-08-17T20:37:19Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6154 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,852,494,646 | 6,152 | FolderBase Dataset automatically resolves under current directory when data_dir is not specified | closed | ### Describe the bug
FolderBase Dataset automatically resolves under current directory when data_dir is not specified.
For example:
```
load_dataset("audiofolder")
```
takes long time to resolve and collect data_files from current directory. But I think it should reach out to this line for error handling https:... | true | 2023-08-16T04:38:09Z | 2025-06-18T14:18:42Z | 2025-06-18T14:18:42Z | npuichigo | CONTRIBUTOR | null | null | 19 | 0 | 0 | 0 | null | false | [
"good first issue"
] | https://github.com/huggingface/datasets/issues/6152 | false | [
"@lhoestq ",
"Makes sense, I guess this can be fixed in the load_dataset_builder method.\r\nIt concerns every packaged builder I think (see values in `_PACKAGED_DATASETS_MODULES`)",
"I think the behavior is related to these lines, which short circuited the error handling.\r\nhttps://github.com/huggingface/datas... |
1,851,497,818 | 6,151 | Faster sorting for single key items | closed | ### Feature request
A faster way to sort a dataset which contains a large number of rows.
### Motivation
The current sorting implementations took significantly longer than expected when I was running on a dataset trying to sort by timestamps.
**Code snippet:**
```python
ds = datasets.load_dataset( "json"... | true | 2023-08-15T14:02:31Z | 2023-08-21T14:38:26Z | 2023-08-21T14:38:25Z | jackapbutler | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6151 | false | [
"`Dataset.sort` essentially does the same thing except it uses `pyarrow.compute.sort_indices` which doesn't involve copying the data into python objects (saving memory)\r\n\r\n```python\r\nsort_keys = [(col, \"ascending\") for col in column_names]\r\nindices = pc.sort_indices(self.data, sort_keys=sort_keys)\r\nretu... |
1,850,740,456 | 6,150 | Allow dataset implement .take | open | ### Feature request
I want to do:
```
dataset.take(512)
```
but it only works with streaming = True
### Motivation
uniform interface to data sets. Really surprising the above only works with streaming = True.
### Your contribution
Should be trivial to copy paste the IterableDataset .take to use the local pa... | true | 2023-08-15T00:17:51Z | 2023-08-17T13:49:37Z | null | brando90 | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6150 | false | [
"```\r\n dataset = IterableDataset(dataset) if type(dataset) != IterableDataset else dataset # to force dataset.take(batch_size) to work in non-streaming mode\r\n ```\r\n",
"hf discuss: https://discuss.huggingface.co/t/how-does-one-make-dataset-take-512-work-with-streaming-false-with-hugging-face-data-set/5... |
1,850,700,624 | 6,149 | Dataset.from_parquet cannot load subset of columns | closed | ### Describe the bug
When using `Dataset.from_parquet(path_or_paths, columns=[...])` and a subset of columns, loading fails with a variant of the following
```
ValueError: Couldn't cast
a: int64
-- schema metadata --
pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 273
to
{'a': V... | true | 2023-08-14T23:28:22Z | 2023-08-17T22:36:05Z | 2023-08-17T22:36:05Z | dwyatte | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6149 | false | [
"Looks like this regression was introduced in `datasets==2.13.0` (`2.12.0` could load a subset of columns)\r\n\r\nThis does not appear to be fixed by https://github.com/huggingface/datasets/pull/6045 (bug still exists on `main`)"
] |
1,849,524,683 | 6,148 | Ignore parallel warning in map_nested | closed | This warning message was shown every time you pass num_proc to `load_dataset` because of `map_nested`
```
parallel_map is experimental and might be subject to breaking changes in the future
```
This PR removes it for `map_nested`. If someone uses another parallel backend they're already warned when `parallel_ba... | true | 2023-08-14T10:43:41Z | 2023-08-17T08:54:06Z | 2023-08-17T08:43:58Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6148 | 2023-08-17T08:43:58Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6148 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,848,914,830 | 6,147 | ValueError when running BeamBasedBuilder with GCS path in cache_dir | closed | ### Describe the bug
When running the BeamBasedBuilder with a GCS path specified in the cache_dir, the following ValueError occurs:
```
ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path spec... | true | 2023-08-14T03:11:34Z | 2024-03-18T16:59:15Z | 2024-03-18T16:59:14Z | ktrk115 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6147 | false | [
"The cause of the error seems to be that `datasets` adds \"gcs://\" as a schema, while `beam` checks only \"gs://\".\r\n\r\ndatasets: https://github.com/huggingface/datasets/blob/c02a44715c036b5261686669727394b1308a3a4b/src/datasets/builder.py#L822\r\n\r\nbeam: [link](https://github.com/apache/beam/blob/25e1a64641b... |
1,848,417,366 | 6,146 | DatasetGenerationError when load glue benchmark datasets from `load_dataset` | closed | ### Describe the bug
Package version: datasets-2.14.4
When I run the codes:
```
from datasets import load_dataset
dataset = load_dataset("glue", "ax")
```
I got the following errors:
---------------------------------------------------------------------------
SchemaInferenceError ... | true | 2023-08-13T05:17:56Z | 2023-08-26T22:09:09Z | 2023-08-26T22:09:09Z | yusx-swapp | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6146 | false | [
"I've tried clear the .cache file, doesn't work.",
"This issue happens on AWS sagemaker",
"This issue can happen if there is a directory named \"glue\" relative to the Python script with the `load_dataset` call (similar issue to this one: https://github.com/huggingface/datasets/issues/5228). Is this the case?",... |
1,852,630,074 | 6,153 | custom load dataset to hub | closed | ### System Info
kaggle notebook
i transformed dataset:
```
dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt")
```
to
formatted_dataset:
```
Dataset({
features: ['message_tree_id', 'message_tree_text'],
num_rows: 33143
})
```
but would like to know how to upload to hub
### ... | true | 2023-08-13T04:42:22Z | 2023-11-21T11:50:28Z | 2023-10-08T17:04:16Z | andysingal | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6153 | false | [
"This is an issue for the [Datasets repo](https://github.com/huggingface/datasets).",
"> This is an issue for the [Datasets repo](https://github.com/huggingface/datasets).\r\n\r\nThanks @sgugger , I guess I will wait for them to address the issue . Looking forward to hearing from them ",
"You can use `.push_to_... |
1,847,811,310 | 6,145 | Export to_iterable_dataset to document | closed | Fix the export of a missing method of `Dataset` | true | 2023-08-12T07:00:14Z | 2023-08-15T17:04:01Z | 2023-08-15T16:55:24Z | npuichigo | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6145 | 2023-08-15T16:55:24Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6145 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,847,296,711 | 6,144 | NIH exporter file not found | open | ### Describe the bug
can't use or download the nih exporter pile data.
```
15 experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights()
16 File "/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py", line 474, in experiment_compute_diveri... | true | 2023-08-11T19:05:25Z | 2023-08-14T23:28:38Z | null | brando90 | NONE | null | null | 6 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6144 | false | [
"related: https://github.com/huggingface/datasets/issues/3504",
"another file not found:\r\n```\r\nTraceback (most recent call last):\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 417, in _info\r\n await _file_info(\r\n File ... |
1,846,205,216 | 6,142 | the-stack-dedup fails to generate | closed | ### Describe the bug
I'm getting an error generating the-stack-dedup with datasets 2.13.1, and with 2.14.4 nothing happens.
### Steps to reproduce the bug
My code:
```
import os
import datasets as ds
MY_CACHE_DIR = "/home/ubuntu/the-stack-dedup-local"
MY_TOKEN="my-token"
the_stack_ds = ds.load_dataset("... | true | 2023-08-11T05:10:49Z | 2023-08-17T09:26:13Z | 2023-08-17T09:26:13Z | michaelroyzen | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6142 | false | [
"@severo ",
"It seems that some parquet files have additional columns.\r\n\r\nI ran a scan and found that two files have the additional `__id__` column:\r\n\r\n1. `hf://datasets/bigcode/the-stack-dedup/data/numpy/data-00000-of-00001.parquet`\r\n2. `hf://datasets/bigcode/the-stack-dedup/data/omgrofl/data-00000-of-... |
1,846,117,729 | 6,141 | TypeError: ClientSession._request() got an unexpected keyword argument 'https' | closed | ### Describe the bug
Hello, when I ran the [code snippet](https://huggingface.co/docs/datasets/v2.14.4/en/loading#json) on the document, I encountered the following problem:
```
Python 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more informatio... | true | 2023-08-11T02:40:32Z | 2023-08-30T13:51:33Z | 2023-08-30T13:51:33Z | q935970314 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6141 | false | [
"Hi! I cannot reproduce this error on my machine or in Colab. Which version of `fsspec` do you have installed?"
] |
1,845,384,712 | 6,140 | Misalignment between file format specified in configs metadata YAML and the inferred builder | closed | There is a misalignment between the format of the `data_files` specified in the configs metadata YAML (CSV):
```yaml
configs:
- config_name: default
data_files:
- split: train
path: data.csv
```
and the inferred builder (JSON). Note there are multiple JSON files in the repo, but they do not... | true | 2023-08-10T15:07:34Z | 2023-08-17T20:37:20Z | 2023-08-17T20:37:20Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6140 | false | [] |
1,844,991,583 | 6,139 | Offline dataset viewer | closed | ### Feature request
The dataset viewer feature is very nice. It enables the user to easily view the dataset. However, when working for private companies we cannot always upload the dataset to the hub. Is there a way to create dataset viewer offline? I.e. to run a code that will open some kind of html or something t... | true | 2023-08-10T11:30:00Z | 2024-09-24T18:36:35Z | 2023-09-29T13:10:22Z | yuvalkirstain | NONE | null | null | 7 | 0 | 0 | 0 | null | false | [
"enhancement",
"dataset-viewer"
] | https://github.com/huggingface/datasets/issues/6139 | false | [
"Hi, thanks for the suggestion. It's not possible at the moment. The viewer is part of the Hub codebase and only works on public datasets. Also, it relies on [Datasets Server](https://github.com/huggingface/datasets-server/), which prepares the data and provides an API to access the rows, size, etc.\r\n\r\nIf you'r... |
1,844,952,496 | 6,138 | Ignore CI lint rule violation in Pickler.memoize | closed | This PR ignores the violation of the lint rule E721 in `Pickler.memoize`.
The lint rule violation was introduced in this PR:
- #3182
@lhoestq is there a reason you did not use `isinstance` instead?
As a hotfix, we just ignore the violation of the lint rule.
Fix #6136. | true | 2023-08-10T11:03:15Z | 2023-08-10T11:31:45Z | 2023-08-10T11:22:56Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6138 | 2023-08-10T11:22:56Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6138 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,844,952,312 | 6,137 | (`from_spark()`) Unable to connect HDFS in pyspark YARN setting | open | ### Describe the bug
related issue: https://github.com/apache/arrow/issues/37057#issue-1841013613
---
Hello. I'm trying to interact with HDFS storage from a driver and workers of pyspark YARN cluster. Precisely I'm using **huggingface's `datasets`** ([link](https://github.com/huggingface/datasets)) library tha... | true | 2023-08-10T11:03:08Z | 2023-08-10T11:03:08Z | null | kyoungrok0517 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6137 | false | [] |
1,844,887,866 | 6,136 | CI check_code_quality error: E721 Do not compare types, use `isinstance()` | closed | After latest release of `ruff` (https://pypi.org/project/ruff/0.0.284/), we get the following CI error:
```
src/datasets/utils/py_utils.py:689:12: E721 Do not compare types, use `isinstance()`
``` | true | 2023-08-10T10:19:50Z | 2023-08-10T11:22:58Z | 2023-08-10T11:22:58Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"maintenance"
] | https://github.com/huggingface/datasets/issues/6136 | false | [] |
1,844,870,943 | 6,135 | Remove unused allowed_extensions param | closed | This PR removes unused `allowed_extensions` parameter from `create_builder_configs_from_metadata_configs`. | true | 2023-08-10T10:09:54Z | 2023-08-10T12:08:38Z | 2023-08-10T12:00:02Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6135 | 2023-08-10T12:00:01Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6135 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,844,535,142 | 6,134 | `datasets` cannot be installed alongside `apache-beam` | closed | ### Describe the bug
If one installs `apache-beam` alongside `datasets` (which is required for the [wikipedia](https://huggingface.co/datasets/wikipedia#dataset-summary) dataset) in certain environments (such as a Google Colab notebook), they appear to install successfully, however, actually trying to do something s... | true | 2023-08-10T06:54:32Z | 2023-09-01T03:19:49Z | 2023-08-10T15:22:10Z | boyleconnor | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6134 | false | [
"I noticed that this is actually covered by issue #5613, which for some reason I didn't see when I searched the issues in this repo the first time."
] |
1,844,511,519 | 6,133 | Dataset is slower after calling `to_iterable_dataset` | open | ### Describe the bug
Can anyone explain why looping over a dataset becomes slower after calling `to_iterable_dataset` to convert to `IterableDataset`
### Steps to reproduce the bug
Any dataset after converting to `IterableDataset`
### Expected behavior
Maybe it should be faster on big dataset? I only test on small... | true | 2023-08-10T06:36:23Z | 2023-08-16T09:18:54Z | null | npuichigo | CONTRIBUTOR | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6133 | false | [
"@lhoestq ",
"It's roughly the same code between the two so we can expected roughly the same speed, could you share a benchmark ?"
] |
1,843,491,020 | 6,132 | to_iterable_dataset is missing in document | closed | ### Describe the bug
to_iterable_dataset is missing in document
### Steps to reproduce the bug
to_iterable_dataset is missing in document
### Expected behavior
document enhancement
### Environment info
unrelated | true | 2023-08-09T15:15:03Z | 2023-08-16T04:43:36Z | 2023-08-16T04:43:29Z | npuichigo | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6132 | false | [
"Fixed with PR"
] |
1,843,158,846 | 6,130 | default config name doesn't work when config kwargs are specified. | closed | ### Describe the bug
https://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518-L522
If `config_name` is `None`, `DEFAULT_CONFIG_NAME` should be select. But once users pass `config_kwargs` to their customized `BuilderConfig`, the logic is ignored, and dataset ... | true | 2023-08-09T12:43:15Z | 2023-11-22T11:50:49Z | 2023-11-22T11:50:48Z | npuichigo | CONTRIBUTOR | null | null | 15 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6130 | false | [
"@lhoestq ",
"What should be the behavior in this case ? Should it override the default config with the added parameter ?",
"I know why it should be treated as a new config if overriding parameters are passed. But in some case, I just pass in some common fields like `data_dir`.\r\n\r\nFor example, I want to ext... |
1,841,563,517 | 6,129 | Release 2.14.4 | closed | true | 2023-08-08T15:43:56Z | 2023-08-08T16:08:22Z | 2023-08-08T15:49:06Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6129 | 2023-08-08T15:49:06Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6129 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,841,545,493 | 6,128 | IndexError: Invalid key: 88 is out of bounds for size 0 | closed | ### Describe the bug
This bug occurs when I use torch.compile(model) in my code, which seems to raise an error in the datasets lib.
### Steps to reproduce the bug
I use the following code to fine-tune Falcon on my private dataset.
```python
import transformers
from transformers import (
AutoModelForCausalLM... | true | 2023-08-08T15:32:08Z | 2023-12-26T07:51:57Z | 2023-08-11T13:35:09Z | TomasAndersonFang | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6128 | false | [
"Hi @TomasAndersonFang,\r\n\r\nHave you tried instead to use `torch_compile` in `transformers.TrainingArguments`? https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.torch_compile",
"> \r\n\r\nI tried this and got the following error:\r\n\r\n```\r\nTraceback (mo... |
1,839,746,721 | 6,127 | Fix authentication issues | closed | This PR fixes 3 authentication issues:
- Fix authentication when passing `token`.
- Fix authentication in `Audio.decode_example` and `Image.decode_example`.
- Fix authentication to resolve `data_files` in repositories without script.
This PR also fixes our CI so that we properly test when passing `token` and we d... | true | 2023-08-07T15:41:25Z | 2023-08-08T15:24:59Z | 2023-08-08T15:16:22Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6127 | 2023-08-08T15:16:22Z | 8 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6127 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,839,675,320 | 6,126 | Private datasets do not load when passing token | closed | ### Describe the bug
Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`.
This is a non-planned backward incompatible breaking change.
Note that private datasets do load if instead `download_config` is passed:
```python
from datasets i... | true | 2023-08-07T15:06:47Z | 2023-08-08T15:16:23Z | 2023-08-08T15:16:23Z | albertvillanova | MEMBER | null | null | 4 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/6126 | false | [
"Our CI did not catch this issue because with current implementation, stored token in `HfFolder` (which always exists) is used by default.",
"I can confirm this and have the same problem (and just went almost crazy because I couldn't figure out the source of this problem because on another computer everything wor... |
1,837,980,986 | 6,125 | Reinforcement Learning and Robotics are not task categories in HF datasets metadata | closed | ### Describe the bug
In https://huggingface.co/models there are task categories for RL and robotics but none in https://huggingface.co/datasets
Our lab is currently moving our datasets over to hugging face and would like to be able to add those 2 tags
Moreover we see some older datasets that do have that tag, bu... | true | 2023-08-05T23:59:42Z | 2023-08-18T12:28:42Z | 2023-08-18T12:28:42Z | StoneT2000 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6125 | false | [] |
1,837,868,112 | 6,124 | Datasets crashing runs due to KeyError | closed | ### Describe the bug
Hi all,
I have been running into a pretty persistent issue recently when trying to load datasets.
```python
train_dataset = load_dataset(
'llama-2-7b-tokenized',
split = 'train'
)
```
I receive a KeyError which crashes the runs.
```
Traceback (most recent call... | true | 2023-08-05T17:48:56Z | 2023-11-30T16:28:57Z | 2023-11-30T16:28:57Z | conceptofmind | NONE | null | null | 7 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6124 | false | [
"i once had the same error and I could fix that by pushing a fake or a dummy commit on my hugging face dataset repo",
"Hi! We need a reproducer to fix this. Can you provide a link to the dataset (if it's public)?",
"> Hi! We need a reproducer to fix this. Can you provide a link to the dataset (if it's public)?\... |
1,837,789,294 | 6,123 | Inaccurate Bounding Boxes in "wildreceipt" Dataset | closed | ### Describe the bug
I would like to bring to your attention an issue related to the accuracy of bounding boxes within the "wildreceipt" dataset, which is made available through the Hugging Face API. Specifically, I have identified a discrepancy between the bounding boxes generated by the dataset loading commands, n... | true | 2023-08-05T14:34:13Z | 2023-08-17T14:25:27Z | 2023-08-17T14:25:26Z | HamzaGbada | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6123 | false | [
"Hi! Thanks for the investigation, but we are not the authors of these datasets, so please report this on the Hub instead so that the actual authors can fix it."
] |
1,837,335,721 | 6,122 | Upload README via `push_to_hub` | closed | ### Feature request
`push_to_hub` now allows users to upload datasets programmatically. However, based on the latest doc, we still need to open the dataset page to add a README file manually.
However, I do discover snippets to initialize a README for every `push_to_hub`:
```
dataset_card = (
DatasetCard(
... | true | 2023-08-04T21:00:27Z | 2023-08-21T18:18:54Z | 2023-08-21T18:18:54Z | liyucheng09 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6122 | false | [
"You can use `huggingface_hub`'s [Card API](https://huggingface.co/docs/huggingface_hub/package_reference/cards) to programmatically push a dataset card to the Hub."
] |
1,836,761,712 | 6,121 | Small typo in the code example of create imagefolder dataset | closed | Fix typo in the code example of load imagefolder dataset | true | 2023-08-04T13:36:59Z | 2023-08-04T13:45:32Z | 2023-08-04T13:41:43Z | WangXin93 | NONE | https://github.com/huggingface/datasets/pull/6121 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6121 | true | [
"Hi,\r\n\r\nI found a small typo in the code example of create imagefolder dataset. It confused me a little when I first saw it.\r\n\r\nBest Regards.\r\n\r\nXin"
] |
1,836,026,938 | 6,120 | Lookahead streaming support? | open | ### Feature request
From what I understand, a streaming dataset currently pulls the data and processes it as it is requested.
This can introduce significant latency delays when data is loaded into the training process, needing to wait for each segment.
While the delays might be dataset specific (or even mappi... | true | 2023-08-04T04:01:52Z | 2023-08-17T17:48:42Z | null | PicoCreator | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6120 | false | [
"In which format is your dataset? We could expose the `pre_buffer` flag for Parquet to use PyArrow's background thread pool to speed up loading. "
] |
1,835,996,350 | 6,119 | [Docs] Add description of `select_columns` to guide | closed | Closes #6116 | true | 2023-08-04T03:13:30Z | 2023-08-16T10:13:02Z | 2023-08-16T10:02:52Z | unifyh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6119 | 2023-08-16T10:02:52Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6119 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,835,940,417 | 6,118 | IterableDataset.from_generator() fails with pickle error when provided a generator or iterator | open | ### Describe the bug
**Description**
Providing a generator in an instantiation of IterableDataset.from_generator() fails with `TypeError: cannot pickle 'generator' object` when the generator argument is supplied with a generator.
**Code example**
```
def line_generator(files: List[Path]):
if isinstance(f... | true | 2023-08-04T01:45:04Z | 2024-12-18T18:30:57Z | null | finkga | NONE | null | null | 3 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6118 | false | [
"Hi! `IterableDataset.from_generator` expects a generator function, not the object (to be consistent with `Dataset.from_generator`).\r\n\r\nYou can fix the above snippet as follows:\r\n```python\r\ntrain_dataset = IterableDataset.from_generator(line_generator, fn_kwargs={\"files\": model_training_files})\r\n```",
... |
1,835,213,848 | 6,117 | Set dev version | closed | true | 2023-08-03T14:46:04Z | 2023-08-03T14:56:59Z | 2023-08-03T14:46:18Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6117 | 2023-08-03T14:46:18Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6117 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6117). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | |
1,835,098,484 | 6,116 | [Docs] The "Process" how-to guide lacks description of `select_columns` function | closed | ### Feature request
The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the gui... | true | 2023-08-03T13:45:10Z | 2023-08-16T10:02:53Z | 2023-08-16T10:02:53Z | unifyh | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6116 | false | [
"Great idea, feel free to open a PR! :)"
] |
1,834,765,485 | 6,115 | Release: 2.14.3 | closed | true | 2023-08-03T10:18:32Z | 2023-08-03T15:08:02Z | 2023-08-03T10:24:57Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6115 | 2023-08-03T10:24:57Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6115 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | |
1,834,015,584 | 6,114 | Cache not being used when loading commonvoice 8.0.0 | closed | ### Describe the bug
I have commonvoice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the arrow files etc, and was used as the cached version last time I touched the ec2 ins... | true | 2023-08-02T23:18:11Z | 2023-08-18T23:59:00Z | 2023-08-18T23:59:00Z | clabornd | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6114 | false | [
"You can avoid this by using the `revision` parameter in `load_dataset` to always force downloading a specific commit (if not specified it defaults to HEAD, hence the redownload).",
"Thanks @mariosasko this works well, looks like I should have read the documentation a bit more carefully. \r\n\r\nIt is still a bi... |
1,833,854,030 | 6,113 | load_dataset() fails with streamlit caching inside docker | closed | ### Describe the bug
When calling `load_dataset` in a streamlit application running within a docker container, get a failure with the error message:
EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files
Traceback:
Fil... | true | 2023-08-02T20:20:26Z | 2023-08-21T18:18:27Z | 2023-08-21T18:18:27Z | fierval | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6113 | false | [
"Hi! This should be fixed in the latest (patch) release (run `pip install -U datasets` to install it). This behavior was due to a bug in our authentication logic."
] |
1,833,693,299 | 6,112 | yaml error using push_to_hub with generated README.md | closed | ### Describe the bug
When I construct a dataset with the following features:
```
features = Features(
{
"pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)),
"input_ids": Sequence(feature=Value(dtype="int64")),
"attention_mask": Sequence(Value(dtype="int64")),
"token... | true | 2023-08-02T18:21:21Z | 2023-12-12T15:00:44Z | 2023-12-12T15:00:44Z | kevintee | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6112 | false | [
"Thanks for reporting! This is a bug in converting the `ArrayXD` types to YAML. It will be fixed soon."
] |
1,832,781,654 | 6,111 | raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." ) | closed | ### Describe the bug
For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to the disk (for exa... | true | 2023-08-02T09:17:29Z | 2023-08-29T02:00:28Z | 2023-08-29T02:00:28Z | 2catycm | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6111 | false | [
"any idea?",
"This should work: `load_dataset(\"path/to/downloaded_repo\")`\r\n\r\n`load_from_disk` is intended to be used on directories created with `Dataset.save_to_disk` or `DatasetDict.save_to_disk`",
"> This should work: `load_dataset(\"path/to/downloaded_repo\")`\r\n> \r\n> `load_from_disk` is intended t... |
1,831,110,633 | 6,110 | [BUG] Dataset initialized from in-memory data does not create cache. | closed | ### Describe the bug
`Dataset` initialized from in-memory data (dictionary in my case, haven't tested with other types) does not create cache when processed with the `map` method, unlike `Dataset` initialized by other methods such as `load_dataset`.
### Steps to reproduce the bug
```python
# below code was ru... | true | 2023-08-01T11:58:58Z | 2023-08-17T14:03:01Z | 2023-08-17T14:03:00Z | MattYoon | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6110 | false | [
"This is expected behavior. You must provide `cache_file_name` when performing `.map` on an in-memory dataset for the result to be cached."
] |
1,830,753,793 | 6,109 | Problems in downloading Amazon reviews from HF | closed | ### Describe the bug
I have a script downloading `amazon_reviews_multi`.
When the download starts, I get
```
Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]
Downloading data: 243B [00:00, 1.43MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it]
Extracting data files: 100%... | true | 2023-08-01T08:38:29Z | 2024-06-25T13:48:38Z | 2023-08-02T07:12:07Z | 610v4nn1 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6109 | false | [
"Thanks for reporting, @610v4nn1.\r\n\r\nIndeed, the source data files are no longer available. We have contacted the authors of the dataset and they report that Amazon has decided to stop distributing the multilingual reviews dataset.\r\n\r\nWe are adding a notification about this issue to the dataset card.\r\n\r\... |
1,830,347,187 | 6,108 | Loading local datasets got strangely stuck | open | ### Describe the bug
I try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a json structure only containing one key `text` (yeah it is a dataset for NLP model). The code snippet is as:
```python
ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=... | true | 2023-08-01T02:28:06Z | 2024-12-31T16:01:00Z | null | LoveCatc | NONE | null | null | 7 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6108 | false | [
"Yesterday I waited for more than 12 hours to make sure it was really **stuck** instead of just proceeding too slowly.",
"I've had similar weird issues with `load_dataset` as well. Not multiple files, but dataset is quite big, about 50G.",
"We use a generic multiprocessing code, so there is little we can do about this... |
1,829,625,320 | 6,107 | Fix deprecation of use_auth_token in file_utils | closed | Fix issues with the deprecation of `use_auth_token` introduced by:
- #5996
in functions:
- `get_authentication_headers_for_url`
- `request_etag`
- `get_from_cache`
Currently, `TypeError` is raised: https://github.com/huggingface/datasets-server/actions/runs/5711650666/job/15484685570?pr=1588
```
FAILED tes... | true | 2023-07-31T16:32:01Z | 2023-08-03T10:13:32Z | 2023-08-03T10:04:18Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6107 | 2023-08-03T10:04:18Z | 3 | 1 | 1 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6107 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,829,131,223 | 6,106 | load local json_file as dataset | closed | ### Describe the bug
I tried to load a local json file as a dataset but failed to parse the json file because some columns are 'float' type.
### Steps to reproduce the bug
1. load json file with certain columns are 'float' type. For example `data = load_data("json", data_files=JSON_PATH)`
2. Then, the error will be trigg... | true | 2023-07-31T12:53:49Z | 2023-08-18T01:46:35Z | 2023-08-18T01:46:35Z | CiaoHe | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6106 | false | [
"Hi! We use PyArrow to read JSON files, and PyArrow doesn't allow different value types in the same column. #5776 should address this.\r\n\r\nIn the meantime, you can combine `Dataset.from_generator` with the above code to cast the values to the same type. ",
"Thanks for your help!"
] |
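The workaround suggested in the comments — casting values to the same type before building the table — can be sketched in plain Python (a minimal illustration with made-up records, not the `datasets` API itself):

```python
# Minimal sketch (no `datasets` dependency): PyArrow rejects columns whose
# rows mix value types, so normalize each record to a single type first.
records = [{"score": 1}, {"score": 2.5}, {"score": 3}]

def normalize(record):
    record = dict(record)
    record["score"] = float(record["score"])  # ints and floats -> float
    return record

normalized = [normalize(r) for r in records]
print(normalized)  # every "score" is now a float
```

A generator yielding such normalized records could then be passed to `Dataset.from_generator`, as the comment suggests.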
1,829,008,430 | 6,105 | Fix error when loading from GCP bucket | closed | Fix `resolve_pattern` for filesystems with tuple protocol.
Fix #6100.
The bug code lines were introduced by:
- #6028 | true | 2023-07-31T11:44:46Z | 2023-08-01T10:48:52Z | 2023-08-01T10:38:54Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6105 | 2023-08-01T10:38:54Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6105 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
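The class of bug fixed above — code assuming a filesystem's `protocol` is always a string, when fsspec filesystems may expose a tuple of aliases — can be illustrated with a small helper (an illustrative sketch; the helper name is hypothetical, not the actual `resolve_pattern` code):

```python
def main_protocol(protocol):
    # fsspec filesystems may declare `protocol` as a single string or as a
    # tuple/list of aliases (e.g. ("gcs", "gs")); pick the first alias so
    # string operations like f"{protocol}://" don't fail on tuples.
    if isinstance(protocol, (list, tuple)):
        return protocol[0]
    return protocol

print(main_protocol(("gcs", "gs")))  # gcs
print(main_protocol("s3"))           # s3
```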
1,828,959,107 | 6,104 | HF Datasets data access is extremely slow even when in memory | open | ### Describe the bug
Doing a simple `some_dataset[:10]` can take more than a minute.
Profiling it:
<img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab">
`some_dataset` is completely in memory with no disk cache.
This is proving fat... | true | 2023-07-31T11:12:19Z | 2023-08-01T11:22:43Z | null | NightMachinery | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6104 | false | [
"Possibly related:\r\n- https://github.com/pytorch/pytorch/issues/22462"
] |
1,828,515,165 | 6,103 | Set dev version | closed | true | 2023-07-31T06:44:05Z | 2023-07-31T06:55:58Z | 2023-07-31T06:45:41Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6103 | 2023-07-31T06:45:41Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6103 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6103). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | |
1,828,494,896 | 6,102 | Release 2.14.2 | closed | true | 2023-07-31T06:27:47Z | 2023-07-31T06:48:09Z | 2023-07-31T06:32:58Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6102 | 2023-07-31T06:32:58Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6102 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | |
1,828,469,648 | 6,101 | Release 2.14.2 | closed | true | 2023-07-31T06:05:36Z | 2023-07-31T06:33:00Z | 2023-07-31T06:18:17Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6101 | 2023-07-31T06:18:17Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6101 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | |
1,828,118,930 | 6,100 | TypeError when loading from GCP bucket | closed | ### Describe the bug
Loading a dataset from a GCP bucket raises a type error. This bug was introduced recently (either in 2.14 or 2.14.1), and appeared during a migration from 2.13.1.
### Steps to reproduce the bug
Load any file from a GCP bucket:
```python
import datasets
datasets.load_dataset("json", data_f... | true | 2023-07-30T23:03:00Z | 2023-08-03T10:00:48Z | 2023-08-01T10:38:55Z | bilelomrani1 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6100 | false | [
"Thanks for reporting, @bilelomrani1.\r\n\r\nWe are fixing it. ",
"We have fixed it. We are planning to do a patch release today."
] |
1,827,893,576 | 6,099 | How do i get "amazon_us_reviews | closed | ### Feature request
I have been trying to load 'amazon_us_reviews' but have been unable to do so.
`amazon_us_reviews = load_dataset('amazon_us_reviews')`
`print(amazon_us_reviews)`
> [ValueError: Config name is missing.
Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1... | true | 2023-07-30T11:02:17Z | 2023-08-21T05:08:08Z | 2023-08-10T05:02:35Z | IqraBaluch | NONE | null | null | 10 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6099 | false | [
"Seems like the problem isn't with the library, but the dataset itself hosted on AWS S3.\r\n\r\nIts [homepage](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) returns an `AccessDenied` XML response, which is the same thing you get if you try to log the `record` that triggers the exception\r\n\r\n```python\... |
1,827,655,071 | 6,098 | Expanduser in save_to_disk() | closed | Fixes #5651. The same problem occurs when loading from disk so I fixed it there too.
I am not sure why the case distinction between local and remote filesystems is even necessary for `DatasetDict` when saving to disk. Imo this could be removed (leaving only `fs.makedirs(dataset_dict_path, exist_ok=True)`). | true | 2023-07-29T20:50:45Z | 2023-10-27T14:14:11Z | 2023-10-27T14:04:36Z | Unknown3141592 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6098 | 2023-10-27T14:04:36Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6098 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> I am not sure why the case distinction between local and remote filesystems is even necessary for DatasetDict when saving to disk. Imo this could be removed (leaving only fs.makedirs(dataset_dict_path, exist_ok=True)).\r\n\r\nIndee... |
1,827,054,143 | 6,097 | Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format | closed | ### Describe the bug
Hi team!
I observe that there seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the method `get_nearest_examples` from the `Dataset` class, fails to retrieve anything else but the embeddings themselves - not super useful. This is not the case ... | true | 2023-07-28T20:31:59Z | 2023-07-28T20:49:58Z | 2023-07-28T20:49:58Z | aschoenauer-sebag | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6097 | false | [
"Actually, my bad -- specifying\r\n```python\r\nfoo.set_format('numpy', ['vectors'], output_all_columns=True)\r\n```\r\nfixes it."
] |
1,826,731,091 | 6,096 | Add `fsspec` support for `to_json`, `to_csv`, and `to_parquet` | closed | Hi to whoever is reading this! 🤗 (Most likely @mariosasko)
## What's in this PR?
This PR replaces the `open` from Python with `fsspec.open` and adds the argument `storage_options` for the methods `to_json`, `to_csv`, and `to_parquet`, to allow users to export any 🤗`Dataset` into a file in a file-system as reque... | true | 2023-07-28T16:36:59Z | 2024-05-28T07:40:30Z | 2024-03-06T11:12:42Z | alvarobartt | MEMBER | https://github.com/huggingface/datasets/pull/6096 | 2024-03-06T11:12:42Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6096 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6096). All of your documentation changes will be reflected on that endpoint.",
"Hi here @lhoestq @mariosasko I just realised this PR is still open, should we close it in case this is something not to include within `datasets`, ... |
1,826,496,967 | 6,095 | Fix deprecation of errors in TextConfig | closed | This PR fixes an issue with the deprecation of `errors` in `TextConfig` introduced by:
- #5974
```python
In [1]: ds = load_dataset("text", data_files="test.txt", errors="strict")
---------------------------------------------------------------------------
TypeError Traceback (most ... | true | 2023-07-28T14:08:37Z | 2023-07-31T05:26:32Z | 2023-07-31T05:17:38Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6095 | 2023-07-31T05:17:38Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6095 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,826,293,414 | 6,094 | Fix deprecation of use_auth_token in DownloadConfig | closed | This PR fixes an issue with the deprecation of `use_auth_token` in `DownloadConfig` introduced by:
- #5996
```python
In [1]: from datasets import DownloadConfig
In [2]: DownloadConfig(use_auth_token=False)
---------------------------------------------------------------------------
TypeError ... | true | 2023-07-28T11:52:21Z | 2023-07-31T05:08:41Z | 2023-07-31T04:59:50Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/6094 | 2023-07-31T04:59:50Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6094 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,826,210,490 | 6,093 | Deprecate `download_custom` | closed | Deprecate `DownloadManager.download_custom`. Users should use `fsspec` URLs (cacheable) or make direct requests with `fsspec`/`requests` (not cacheable) instead.
We should deprecate this method as it's not compatible with streaming, and implementing the streaming version of it is hard/impossible. There have been req... | true | 2023-07-28T10:49:06Z | 2023-08-21T17:51:34Z | 2023-07-28T11:30:02Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6093 | 2023-07-28T11:30:02Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6093 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,826,111,806 | 6,092 | Minor fix in `iter_files` for hidden files | closed | Fix #6090 | true | 2023-07-28T09:50:12Z | 2023-07-28T10:59:28Z | 2023-07-28T10:50:10Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6092 | 2023-07-28T10:50:09Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6092 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,826,086,487 | 6,091 | Bump fsspec from 2021.11.1 to 2022.3.0 | closed | Fix https://github.com/huggingface/datasets/issues/6087
(Colab installs 2023.6.0, so we should be good) | true | 2023-07-28T09:37:15Z | 2023-07-28T10:16:11Z | 2023-07-28T10:07:02Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6091 | 2023-07-28T10:07:02Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6091 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,825,865,043 | 6,090 | FilesIterable skips all the files after a hidden file | closed | ### Describe the bug
When initializing `FilesIterable` with a list of file paths using `FilesIterable.from_paths`, it will discard all the files after a hidden file.
The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manag... | true | 2023-07-28T07:25:57Z | 2023-07-28T10:51:14Z | 2023-07-28T10:50:11Z | dkrivosic | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6090 | false | [
"Thanks for reporting. We've merged a PR with a fix."
] |
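The fix described above amounts to skipping hidden files with `continue` rather than ending iteration early. A standalone sketch of the corrected behavior (plain Python with a hypothetical helper name, not the actual `FilesIterable` code):

```python
import os

def iter_visible_files(paths):
    # Skip hidden files (basename starts with ".") but keep iterating;
    # the original bug effectively stopped at the first hidden file,
    # discarding every file that came after it.
    for path in paths:
        if os.path.basename(path).startswith("."):
            continue
        yield path

files = list(iter_visible_files(["a.txt", ".hidden", "b.txt"]))
print(files)  # ['a.txt', 'b.txt']
```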
1,825,761,476 | 6,089 | AssertionError: daemonic processes are not allowed to have children | open | ### Describe the bug
When I call load_dataset with num_proc > 0 in a daemon process, I get an error:
```python
File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 564, in download_and_extract
return self.extract(self.download(url_or_urls))
^^^^^^^^^^^^^^^^^
File "/Users... | true | 2023-07-28T06:04:00Z | 2023-07-31T02:34:02Z | null | codingl2k1 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6089 | false | [
"We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).",
"> We could add a \"t... |
1,825,665,235 | 6,088 | Loading local data files initiates web requests | closed | As documented in the [official docs](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/loading_methods#datasets.load_dataset.example-2), I tried to load datasets from local files by
```python
# Load a JSON file
from datasets import load_dataset
ds = load_dataset('json', data_files='path/to/local/my_... | true | 2023-07-28T04:06:26Z | 2023-07-28T05:02:22Z | 2023-07-28T05:02:22Z | lytning98 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6088 | false | [] |