| id | number | title | state | body | is_pull_request | created_at | updated_at | closed_at | user_login | author_association | pr_url | pr_merged_at | comments_count | reactions_total | reactions_plus1 | reactions_heart | draft | locked | labels | html_url | is_pr_url | comments |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,825,133,741 | 6,087 | fsspec dependency is set too low | closed | ### Describe the bug
fsspec.callbacks.TqdmCallback (used in https://github.com/huggingface/datasets/blob/73bed12ecda17d1573fd3bf73ed5db24d3622f86/src/datasets/utils/file_utils.py#L338) was first released in fsspec [2022.3.0](https://github.com/fsspec/filesystem_spec/releases/tag/2022.3.0, commit where it was added: ht... | true | 2023-07-27T20:08:22Z | 2023-07-28T10:07:56Z | 2023-07-28T10:07:03Z | iXce | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6087 | false | [
"Thanks for reporting! A PR with a fix has just been merged."
] |
1,825,009,268 | 6,086 | Support `fsspec` in `Dataset.to_<format>` methods | closed | Supporting this should be fairly easy.
Requested on the forum [here](https://discuss.huggingface.co/t/how-can-i-convert-a-loaded-dataset-in-to-a-parquet-file-and-save-it-to-the-s3/48353). | true | 2023-07-27T19:08:37Z | 2024-03-07T07:22:43Z | 2024-03-07T07:22:42Z | mariosasko | COLLABORATOR | null | null | 5 | 2 | 2 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6086 | false | [
"Hi @mariosasko unless someone's already working on it, I guess I can tackle it!",
"Hi! Sure, feel free to tackle this.",
"#self-assign",
"I'm assuming this should just cover `to_csv`, `to_parquet`, and `to_json`, right? As `to_list` and `to_dict` just return Python objects, `to_pandas` returns a `pandas.Data... |
1,824,985,188 | 6,085 | Fix `fsspec` download | open | Testing `ds = load_dataset("audiofolder", data_files="s3://datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz", storage_options={"anon": True})` and trying to fix the issues raised by `fsspec` ...
TODO: fix
```
self.session = aiobotocore.session.AioSession(**self.kwargs)
TypeError: __init__() got ... | true | 2023-07-27T18:54:47Z | 2023-07-27T19:06:13Z | null | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6085 | null | 3 | 0 | 0 | 0 | true | false | [] | https://github.com/huggingface/datasets/pull/6085 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,824,896,761 | 6,084 | Changing pixel values of images in the Winoground dataset | open | Hi, as I followed the instructions, with the latest "datasets" version:
"
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
"
I got slightly different datasets in colab and in my hpc environment. Specifically, the pixel values of images are slight... | true | 2023-07-27T17:55:35Z | 2023-07-27T17:55:35Z | null | ZitengWangNYU | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6084 | false | [] |
1,824,832,348 | 6,083 | set dev version | closed | true | 2023-07-27T17:10:41Z | 2023-07-27T17:22:05Z | 2023-07-27T17:11:01Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6083 | 2023-07-27T17:11:01Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6083 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6083). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | |
1,824,819,672 | 6,082 | Release: 2.14.1 | closed | true | 2023-07-27T17:05:54Z | 2023-07-31T06:32:16Z | 2023-07-27T17:08:38Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6082 | 2023-07-27T17:08:38Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6082 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6082). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | |
1,824,486,278 | 6,081 | Deprecate `Dataset.export` | closed | Deprecate `Dataset.export` that generates a TFRecord file from a dataset as this method is undocumented, and the usage seems low. Users should use [TFRecordWriter](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordWriter#write) or the official [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) t... | true | 2023-07-27T14:22:18Z | 2023-07-28T11:09:54Z | 2023-07-28T11:01:04Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6081 | 2023-07-28T11:01:04Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6081 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,822,667,554 | 6,080 | Remove README link to deprecated Colab notebook | closed | true | 2023-07-26T15:27:49Z | 2023-07-26T16:24:43Z | 2023-07-26T16:14:34Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6080 | 2023-07-26T16:14:34Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6080 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,822,597,471 | 6,079 | Iterating over DataLoader based on HF datasets is stuck forever | closed | ### Describe the bug
I am using Amazon Sagemaker notebook (Amazon Linux 2) with python 3.10 based Conda environment.
I have a dataset in parquet format locally. When I try to iterate over it, the loader is stuck forever. Note that the same code is working for python 3.6 based conda environment seamlessly. What shou... | true | 2023-07-26T14:52:37Z | 2024-02-07T17:46:52Z | 2023-07-30T14:09:06Z | arindamsarkar93 | NONE | null | null | 15 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6079 | false | [
"When the process starts to hang, can you interrupt it with CTRL + C and paste the error stack trace here? ",
"Thanks @mariosasko for your prompt response, here's the stack trace:\r\n\r\n```\r\nKeyboardInterrupt Traceback (most recent call last)\r\nCell In[12], line 4\r\n 2 t = time.t... |
1,822,501,472 | 6,078 | resume_download with streaming=True | closed | ### Describe the bug
I used:
```
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True,
split="train"
)
```
Unfortunately, the server had a problem during the training process. I saved the step my training stopped at.
But how can I resume download f... | true | 2023-07-26T14:08:22Z | 2023-07-28T11:05:03Z | 2023-07-28T11:05:03Z | NicolasMICAUX | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6078 | false | [
"Currently, it's not possible to efficiently resume streaming after an error. Eventually, we plan to support this for Parquet (see https://github.com/huggingface/datasets/issues/5380). ",
"Ok thank you for your answer",
"I'm closing this as a duplicate of #5380"
] |
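The answer in issue 6078 points to #5380 for efficient resumption; until then, the usual workaround is to skip the first N examples of the streaming iterator manually, which still downloads and decodes the skipped shards. A minimal sketch of that idea, with a plain generator standing in for the real `IterableDataset` (`IterableDataset` also exposes a `.skip(n)` method with the same caveat):

```python
from itertools import islice

def resume_stream(example_iter, resume_step):
    """Skip examples that were already consumed before the crash.

    Note: the skipped examples are still produced upstream, so this
    does not avoid re-downloading the underlying shards.
    """
    return islice(example_iter, resume_step, None)

# Stand-in for a streaming dataset; in real code this would be
# load_dataset(..., streaming=True).
fake_stream = ({"id": i} for i in range(10))
remaining = list(resume_stream(fake_stream, resume_step=7))
```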
1,822,486,810 | 6,077 | Mapping gets stuck at 99% | open | ### Describe the bug
Hi !
I'm currently working with a large (~150GB) unnormalized dataset at work.
The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retreive it.
I want to normalize the features of the dataset, ... | true | 2023-07-26T14:00:40Z | 2024-07-22T12:28:06Z | null | Laurent2916 | CONTRIBUTOR | null | null | 6 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6077 | false | [
"The `MAX_MAP_BATCH_SIZE = 1_000_000_000` hack is bad as it loads the entire dataset into RAM when performing `.map`. Instead, it's best to use `.iter(batch_size)` to iterate over the data batches and compute `mean` for each column. (`stddev` can be computed in another pass).\r\n\r\nAlso, these arrays are big, so i... |
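The batched approach suggested in the comment on issue 6077 can be sketched without `datasets` at all: Welford's online algorithm accumulates mean and variance in a single pass over batches, so only the running state stays in RAM. Names below are illustrative, not from the issue:

```python
def online_mean_var(batches):
    """Single-pass (Welford) mean/variance over an iterable of number batches."""
    count, mean, m2 = 0, 0.0, 0.0
    for batch in batches:
        for x in batch:
            count += 1
            delta = x - mean
            mean += delta / count
            m2 += delta * (x - mean)
    variance = m2 / count if count else 0.0
    return mean, variance

# Batches as would be produced by e.g. Dataset.iter(batch_size=...)
mean, var = online_mean_var([[1.0, 2.0], [3.0, 4.0]])
```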
1,822,345,597 | 6,076 | No gzip encoding from github | closed | Don't accept gzip encoding from github, otherwise some files are not streamable + seekable.
fix https://huggingface.co/datasets/code_x_glue_cc_code_to_code_trans/discussions/2#64c0e0c1a04a514ba6303e84
and making sure https://github.com/huggingface/datasets/issues/2918 works as well | true | 2023-07-26T12:46:07Z | 2023-07-27T16:15:11Z | 2023-07-27T16:14:40Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6076 | 2023-07-27T16:14:40Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6076 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
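The fix in PR 6076 amounts to requesting the raw, uncompressed bytes so that HTTP range requests (needed for seeking inside archives) keep working. A hedged sketch of the header involved; the actual change lives in `datasets`' HTTP layer, not user code:

```python
import urllib.request

def raw_request(url):
    """Build a request asking the server not to gzip the payload, so
    Content-Length stays meaningful and the stream stays seekable via ranges."""
    return urllib.request.Request(url, headers={"Accept-Encoding": "identity"})

req = raw_request("https://raw.githubusercontent.com/huggingface/datasets/main/README.md")
```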
1,822,341,398 | 6,075 | Error loading music files using `load_dataset` | closed | ### Describe the bug
I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test
I got the following error -
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/susnato/anaconda3/en... | true | 2023-07-26T12:44:05Z | 2023-07-26T13:08:08Z | 2023-07-26T13:08:08Z | susnato | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6075 | false | [
"This code behaves as expected on my local machine or in Colab. Which version of `soundfile` do you have installed? MP3 requires `soundfile>=0.12.1`.",
"I upgraded the `soundfile` and it's working now! \r\nThanks @mariosasko for the help!"
] |
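The resolution of issue 6075 is just a version bump. A small helper (illustrative only) shows how one might check for an MP3-capable `soundfile` before loading audio, based on the `soundfile>=0.12.1` requirement stated in the comment:

```python
from importlib import metadata  # metadata.version("soundfile") in real code

def version_tuple(version):
    """Turn '0.12.1' into (0, 12, 1) for comparison; pre-release tags are ignored."""
    return tuple(int(part) for part in version.split(".")[:3] if part.isdigit())

def supports_mp3(installed_version):
    # soundfile gained MP3 support in 0.12.1, per the comment above
    return version_tuple(installed_version) >= (0, 12, 1)
```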
1,822,299,128 | 6,074 | Misc doc improvements | closed | Removes the warning about requiring to write a dataset loading script to define multiple configurations, as the README YAML can be used instead (for simple cases). Also, deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has b... | true | 2023-07-26T12:20:54Z | 2023-07-27T16:16:28Z | 2023-07-27T16:16:02Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6074 | 2023-07-27T16:16:02Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6074 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,822,167,804 | 6,073 | version2.3.2 load_dataset()data_files can't include .xxxx in path | closed | ### Describe the bug
First, I cd workdir.
Then, I just use load_dataset("json", data_files={"train":"/a/b/c/.d/train/train.json", "test":"/a/b/c/.d/train/test.json"})
that couldn't work and
<FileNotFoundError: Unable to find
'/a/b/c/.d/train/train.jsonl' at
/a/b/c/.d/>
And I debug, it is fine in version2.1.2... | true | 2023-07-26T11:09:31Z | 2023-08-29T15:53:59Z | 2023-08-29T15:53:59Z | BUAAChuanWang | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6073 | false | [
"Version 2.3.2 is over one year old, so please use the latest release (2.14.0) to get the expected behavior. Version 2.3.2 does not contain some fixes we made to fix resolving hidden files/directories (starting with a dot)."
] |
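The reply on issue 6073 attributes the failure to how hidden (dot-prefixed) directories were resolved in 2.3.2. The effect can be reproduced with stdlib globbing, since `glob`'s `*` never matches a path component starting with a dot (paths below are illustrative):

```python
import glob
import os
import tempfile

# glob's '*' skips hidden components, which is how a directory
# like '/a/b/c/.d/' can be silently missed during file resolution.
with tempfile.TemporaryDirectory() as tmp:
    hidden_train = os.path.join(tmp, ".d", "train")
    os.makedirs(hidden_train)
    open(os.path.join(hidden_train, "train.json"), "w").close()

    missed = glob.glob(os.path.join(tmp, "*", "train", "*.json"))   # hidden dir skipped
    found = glob.glob(os.path.join(tmp, ".d", "train", "*.json"))   # explicit dot works
```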
1,822,123,560 | 6,072 | Fix fsspec storage_options from load_dataset | closed | close https://github.com/huggingface/datasets/issues/6071 | true | 2023-07-26T10:44:23Z | 2023-07-27T12:51:51Z | 2023-07-27T12:42:57Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6072 | 2023-07-27T12:42:57Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6072 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,821,990,749 | 6,071 | storage_options provided to load_dataset not fully piping through since datasets 2.14.0 | closed | ### Describe the bug
Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set.
I think this is because of the new `_prepare_path_and_sto... | true | 2023-07-26T09:37:20Z | 2023-07-27T12:42:58Z | 2023-07-27T12:42:58Z | exs-avianello | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6071 | false | [
"Hi ! Thanks for reporting, I opened a PR to fix this\r\n\r\nWhat filesystem are you using ?",
"Hi @lhoestq ! Thank you so much 🙌 \r\n\r\nIt's a bit of a custom setup, but in practice I am using a [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html) (wrapped in a... |
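The bug in issue 6071 boils down to `storage_options` being dropped at one hop of the path-resolution chain. A toy model of the intended propagation, where user options must survive the merge with per-protocol defaults (all names here are hypothetical; the real logic lives in `_prepare_path_and_storage_options`):

```python
def prepare_storage_options(path, user_options=None):
    """Merge user storage_options with per-protocol defaults so that
    every downstream fsspec call sees the same dict."""
    protocol = path.split("://", 1)[0] if "://" in path else "file"
    defaults = {"hf": {"use_listings_cache": False}}.get(protocol, {})  # hypothetical default
    merged = {**defaults, **(user_options or {})}
    return protocol, merged

protocol, options = prepare_storage_options("s3://bucket/data.parquet", {"anon": True})
```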
1,820,836,330 | 6,070 | Fix Quickstart notebook link | closed | Reported in https://github.com/huggingface/datasets/pull/5902#issuecomment-1649885621 (cc @alvarobartt) | true | 2023-07-25T17:48:37Z | 2023-07-25T18:19:01Z | 2023-07-25T18:10:16Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6070 | 2023-07-25T18:10:16Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6070 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,820,831,535 | 6,069 | KeyError: dataset has no key "image" | closed | ### Describe the bug
I've loaded a local image dataset with:
`ds = load_dataset("imagefolder", data_dir=path-to-data)`
And defined a transform to process the data, following the Datasets docs.
However, I get a keyError error, indicating there's no "image" key in my dataset. When I printed out the example_batch ... | true | 2023-07-25T17:45:50Z | 2024-09-06T08:16:16Z | 2023-07-27T12:42:17Z | etetteh | NONE | null | null | 7 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6069 | false | [
"You can list the dataset's columns with `ds.column_names` before `.map` to check whether the dataset has an `image` column. If it doesn't, then this is a bug. Otherwise, please paste the line with the `.map` call.\r\n\r\n\r\n",
"This is the piece of code I am running:\r\n```\r\ndata_transforms = utils.get_data_a... |
1,820,106,952 | 6,068 | fix tqdm lock deletion | closed | related to https://github.com/huggingface/datasets/issues/6066 | true | 2023-07-25T11:17:25Z | 2023-07-25T15:29:39Z | 2023-07-25T15:17:50Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6068 | 2023-07-25T15:17:50Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6068 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,819,919,025 | 6,067 | fix tqdm lock | closed | close https://github.com/huggingface/datasets/issues/6066 | true | 2023-07-25T09:32:16Z | 2023-07-25T10:02:43Z | 2023-07-25T09:54:12Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6067 | 2023-07-25T09:54:12Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6067 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,819,717,542 | 6,066 | AttributeError: '_tqdm_cls' object has no attribute '_lock' | closed | ### Describe the bug
```python
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module
data_files = DataFilesDict.from_patterns(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-p... | true | 2023-07-25T07:24:36Z | 2023-07-26T10:56:25Z | 2023-07-26T10:56:24Z | codingl2k1 | NONE | null | null | 7 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6066 | false | [
"Hi ! I opened https://github.com/huggingface/datasets/pull/6067 to add the missing `_lock`\r\n\r\nWe'll do a patch release soon, but feel free to install `datasets` from source in the meantime",
"I have tested the latest main, it does not work.\r\n\r\nI add more logs to reproduce this issue, it looks like a mult... |
1,819,334,932 | 6,065 | Add column type guessing from map return function | closed | As discussed [here](https://github.com/huggingface/datasets/issues/5965), there are some cases where datasets is unable to automatically promote columns during mapping. The fix is to explicitly provide a `features` definition so pyarrow can configure itself with the right column types from the outset.
This PR provid... | true | 2023-07-25T00:34:17Z | 2023-07-26T15:13:45Z | 2023-07-26T15:13:44Z | piercefreeman | NONE | https://github.com/huggingface/datasets/pull/6065 | null | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6065 | true | [
"Thanks for working on this. However, having thought about this issue a bit more, supporting this doesn't seem like a good idea - it's better to be explicit than implicit, according to the Zen of Python 🙂. Also, I don't think many users would use this, so this raises the question of whether this is something we wa... |
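Although PR 6065 was declined, the idea is easy to prototype outside the library: read the map function's return annotation and turn it into an explicit schema to pass as `features=`. Everything below is a hypothetical sketch, not `datasets` API:

```python
import typing

def infer_columns(fn):
    """Read a TypedDict-style return annotation off a map function."""
    hints = typing.get_type_hints(fn)
    ret = hints.get("return")
    if ret is None:
        return {}
    return dict(typing.get_type_hints(ret))

class Row(typing.TypedDict):
    text: str
    score: float

def add_score(example) -> Row:
    return {"text": example["text"], "score": 1.0}

columns = infer_columns(add_score)  # the raw material for a Features definition
```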
1,818,703,725 | 6,064 | set dev version | closed | true | 2023-07-24T15:56:00Z | 2023-07-24T16:05:19Z | 2023-07-24T15:56:10Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6064 | 2023-07-24T15:56:10Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6064 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6064). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | |
1,818,679,485 | 6,063 | Release: 2.14.0 | closed | true | 2023-07-24T15:41:19Z | 2023-07-24T16:05:16Z | 2023-07-24T15:47:51Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6063 | 2023-07-24T15:47:51Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6063 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,818,341,584 | 6,062 | Improve `Dataset.from_list` docstring | closed | true | 2023-07-24T12:36:38Z | 2023-07-24T14:43:48Z | 2023-07-24T14:34:43Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6062 | 2023-07-24T14:34:43Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6062 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,818,337,136 | 6,061 | Dill 3.7 support | closed | Adds support for dill 3.7. | true | 2023-07-24T12:33:58Z | 2023-07-24T14:13:20Z | 2023-07-24T14:04:36Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6061 | 2023-07-24T14:04:36Z | 5 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6061 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,816,614,120 | 6,060 | Dataset.map() execute twice when in PyTorch DDP mode | closed | ### Describe the bug
I use `torchrun --standalone --nproc_per_node=2 train.py` to start training. And write the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick about using `torch.distributed.barrier()` to only execute map at the main process doesn't always work. W... | true | 2023-07-22T05:06:43Z | 2024-01-22T18:35:12Z | 2024-01-22T18:35:12Z | wanghaoyucn | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6060 | false | [
"Sorry for asking a duplicate question about `num_proc`, I searched the forum and find the solution.\r\n\r\nBut I still can't make the trick with `torch.distributed.barrier()` to only map at the main process work. The [post on forum]( https://discuss.huggingface.co/t/slow-processing-with-map-when-using-deepspeed-or... |
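The barrier trick referenced in issue 6060 is essentially "main process computes, other ranks wait and load from cache". A torch-free sketch of the same idea using the filesystem as the cache (rank handling simplified; real DDP code would call `torch.distributed.barrier()` where noted):

```python
import json
import os
import tempfile

def map_with_cache(rank, cache_path, compute):
    """Only rank 0 computes; other ranks load the cached result.
    In real DDP code a barrier would sit between the two branches."""
    if rank == 0:
        result = compute()
        with open(cache_path, "w") as f:
            json.dump(result, f)
    # torch.distributed.barrier() would go here
    with open(cache_path) as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "mapped.json")
    out0 = map_with_cache(0, path, lambda: [x * 2 for x in range(3)])
    out1 = map_with_cache(1, path, lambda: [])  # compute() is never called here
```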
1,816,537,176 | 6,059 | Provide ability to load label mappings from file | open | ### Feature request
My task is classification of a dataset containing a large label set that includes a hierarchy. Even ignoring the hierarchy I'm not able to find an example using `datasets` where the label names aren't hard-coded. This works fine for classification of a handful of labels but ideally there would be...
"enhancement"
] | https://github.com/huggingface/datasets/issues/6059 | false | [
"I would like this also as I have been working with a dataset with hierarchical classes. In fact, I encountered this very issue when trying to define the dataset with a script. I couldn't find a work around and reverted to hard coding the class names in the readme yaml.\r\n\r\n@david-waterworth do you envision also... |
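The feature requested in issue 6059 can be approximated today by loading the names yourself and building the mappings before constructing the dataset's features. A sketch using a plain JSON list (the file name and format are assumptions):

```python
import json
import tempfile

def load_label_mapping(path):
    """Read label names from a JSON list and build id<->label mappings."""
    with open(path) as f:
        names = json.load(f)
    label2id = {name: i for i, name in enumerate(names)}
    id2label = {i: name for name, i in label2id.items()}
    return names, label2id, id2label

# Hierarchical labels flattened to path-style names, as in the issue
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(["animal", "animal/dog", "animal/cat"], f)
    path = f.name

names, label2id, id2label = load_label_mapping(path)
# `names` could then be passed to datasets.ClassLabel(names=names)
```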
1,815,131,397 | 6,058 | laion-coco download error | closed | ### Describe the bug
The full trace:
```
/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was deprecated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0.
You can remove this warning by passing 'verification_mode=no... | true | 2023-07-21T04:24:15Z | 2023-07-22T01:42:06Z | 2023-07-22T01:42:06Z | yangyijune | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6058 | false | [
"This can also mean one of the files was not downloaded correctly.\r\n\r\nWe log an erroneous file's name before raising the reader's error, so this is how you can find the problematic file. Then, you should delete it and call `load_dataset` again.\r\n\r\n(I checked all the uploaded files, and they seem to be valid... |
1,815,100,151 | 6,057 | Why is the speed difference of gen example so big? | closed | ```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
with open(metadata_path, 'r') as file:
metadata = json.load(file)
for idx, item in enumerate(metadata):
image_path = item.get('image_path')
text_content = item.get('tex... | true | 2023-07-21T03:34:49Z | 2023-10-04T18:06:16Z | 2023-10-04T18:06:15Z | pixeli99 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6057 | false | [
"Hi!\r\n\r\nIt's hard to explain this behavior without more information. Can you profile the slower version with the following code\r\n```python\r\nimport cProfile, pstats\r\nfrom datasets import load_dataset\r\n\r\nwith cProfile.Profile() as profiler:\r\n ds = load_dataset(...)\r\n\r\nstats = pstats.Stats(profi... |
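The profiling recipe in the reply to issue 6057 works as written; a self-contained version against a dummy example generator (a stand-in for the actual `load_dataset` call) looks like:

```python
import cProfile
import io
import pstats

def generate_examples(n):
    # Stand-in for a dataset script's _generate_examples
    return [{"idx": i, "text": f"example {i}"} for i in range(n)]

with cProfile.Profile() as profiler:
    examples = generate_examples(1000)

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumtime").print_stats(5)  # top 5 functions by cumulative time
report = stream.getvalue()
```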
1,815,086,963 | 6,056 | Implement proper checkpointing for dataset uploading with resume function that does not require remapping shards that have already been uploaded | open | Context: issue #5990
In order to implement the checkpointing, I introduce a metadata folder that keeps one yaml file for each set that one is uploading. This yaml keeps track of what shards have already been uploaded, and which one the idx of the latest one was. Using this information I am then able to easily get th... | true | 2023-07-21T03:13:21Z | 2023-08-17T08:26:53Z | null | AntreasAntoniou | NONE | https://github.com/huggingface/datasets/pull/6056 | null | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6056 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6056). All of your documentation changes will be reflected on that endpoint.",
"@lhoestq Reading the filenames is something I tried earlier, but I decided to use the yaml direction because:\r\n\r\n1. The yaml file name is const... |
1,813,524,145 | 6,055 | Fix host URL in The Pile datasets | open | ### Describe the bug
In #3627 and #5543, you tried to fix the host URL in The Pile datasets. But both URLs are not working now:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
And
`ConnectTimeout: HTTPSCo... | true | 2023-07-20T09:08:52Z | 2023-07-20T09:09:37Z | null | nickovchinnikov | NONE | null | null | 0 | 5 | 5 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6055 | false | [] |
1,813,271,304 | 6,054 | Multi-processed `Dataset.map` slows down a lot when `import torch` | closed | ### Describe the bug
When using `Dataset.map` with `num_proc > 1`, the speed slows down much if I add `import torch` to the start of the script even though I don't use it.
I'm not sure if it's `torch` only or if any other package that is "large" will also cause the same result.
BTW, `import lightning` also slows i... | true | 2023-07-20T06:36:14Z | 2023-07-21T15:19:37Z | 2023-07-21T15:19:37Z | ShinoharaHare | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"duplicate"
] | https://github.com/huggingface/datasets/issues/6054 | false | [
"A duplicate of https://github.com/huggingface/datasets/issues/5929"
] |
1,812,635,902 | 6,053 | Change package name from "datasets" to something less generic | closed | ### Feature request
I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have n... | true | 2023-07-19T19:53:28Z | 2024-11-20T21:22:36Z | 2023-10-03T16:04:09Z | jack-jjm | NONE | null | null | 2 | 7 | 7 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6053 | false | [
"This would break a lot of existing code, so we can't really do this.",
"I encountered this issue while working on a large project with 6+ years history. We have a submodule named datasets in the backend, and face a big challenge incorporating huggingface datasets into the project, especially considering django a... |
1,812,145,100 | 6,052 | Remove `HfFileSystem` and deprecate `S3FileSystem` | closed | Remove the legacy `HfFileSystem` and deprecate `S3FileSystem`
cc @philschmid for the SageMaker scripts/notebooks that still use `datasets`' `S3FileSystem` | true | 2023-07-19T15:00:01Z | 2023-07-19T17:39:11Z | 2023-07-19T17:27:17Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6052 | 2023-07-19T17:27:17Z | 10 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6052 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,811,549,650 | 6,051 | Skipping shard in the remote repo and resume upload | closed | ### Describe the bug
For some reason when I try to resume the upload of my dataset, it is very slow to reach the index of the shard from which to resume the uploading.
From my understanding, the problem is in this part of the code:
arrow_dataset.py
```python
for index, shard in logging.tqdm(
enume... | true | 2023-07-19T09:25:26Z | 2023-07-20T18:16:01Z | 2023-07-20T18:16:00Z | rs9000 | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6051 | false | [
"Hi! `_select_contiguous` fetches a (zero-copy) slice of the dataset's Arrow table to build a shard, so I don't think this part is the problem. To me, the issue seems to be the step where we embed external image files' bytes (a lot of file reads). You can use `.map` with multiprocessing to perform this step before ... |
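The advice in the reply to issue 6051, embedding external image files' bytes before pushing, can be sketched as a plain batched function; in real code it would be applied with `ds.map(embed_bytes, batched=True, num_proc=...)` (column names below are assumptions):

```python
import os
import tempfile

def embed_bytes(batch):
    """Batched map function: replace file paths with the files' raw bytes."""
    batch["image"] = [
        {"bytes": open(p, "rb").read(), "path": p} for p in batch["image_path"]
    ]
    return batch

with tempfile.TemporaryDirectory() as tmp:
    p = os.path.join(tmp, "img.bin")
    with open(p, "wb") as f:
        f.write(b"\x89PNG-ish")
    out = embed_bytes({"image_path": [p]})
```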
1,810,378,706 | 6,049 | Update `ruff` version in pre-commit config | closed | so that it corresponds to the one that is being run in CI | true | 2023-07-18T17:13:50Z | 2023-12-01T14:26:19Z | 2023-12-01T14:26:19Z | polinaeterna | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6049 | null | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6049 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6049). All of your documentation changes will be reflected on that endpoint.",
"I've updated the `ruff`'s pre-commit version as part of https://github.com/huggingface/datasets/pull/6434, so feel free to close this PR."
] |
1,809,629,346 | 6,048 | when i use datasets.load_dataset, i encounter the http connect error! | closed | ### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
When I run the code above, I got the following error:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/... | true | 2023-07-18T10:16:34Z | 2023-07-18T16:18:39Z | 2023-07-18T16:18:39Z | yangy1992 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6048 | false | [
"The `audiofolder` loader is not available in version `2.3.2`, hence the error. Please run the `pip install -U datasets` command to update the `datasets` installation to make `load_dataset(\"audiofolder\", ...)` work."
] |
1,809,627,947 | 6,047 | Bump dev version | closed | workaround to fix an issue with transformers CI
https://github.com/huggingface/transformers/pull/24867#discussion_r1266519626 | true | 2023-07-18T10:15:39Z | 2023-07-18T10:28:01Z | 2023-07-18T10:15:52Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6047 | 2023-07-18T10:15:52Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6047 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6047). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
1,808,154,414 | 6,046 | Support proxy and user-agent in fsspec calls | open | Since we switched to the new HfFileSystem we no longer apply user's proxy and user-agent.
Using the HTTP_PROXY and HTTPS_PROXY environment variables works though since we use aiohttp to call the HF Hub.
This can be implemented in `_prepare_single_hop_path_and_storage_options`.
Though ideally the `HfFileSystem`... | true | 2023-07-17T16:39:26Z | 2025-06-21T14:06:31Z | null | lhoestq | MEMBER | null | null | 9 | 0 | 0 | 0 | null | false | [
"enhancement",
"good second issue"
] | https://github.com/huggingface/datasets/issues/6046 | false | [
"hii @lhoestq can you assign this issue to me?\r\n",
"You can reply \"#self-assign\" to this issue to automatically get assigned to it :)\r\nLet me know if you have any questions or if I can help",
"#2289 ",
"Actually i am quite new to figure it out how everything goes and done \r\n\r\n> You can reply \"#self... |
1,808,072,270 | 6,045 | Check if column names match in Parquet loader only when config `features` are specified | closed | Fix #6039 | true | 2023-07-17T15:50:15Z | 2023-07-24T14:45:56Z | 2023-07-24T14:35:03Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6045 | 2023-07-24T14:35:03Z | 8 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6045 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,808,057,906 | 6,044 | Rename "pattern" to "path" in YAML data_files configs | closed | To make it easier to understand for users.
They can use "path" to specify a single path, <s>or "paths" to use a list of paths.</s>
Glob patterns are still supported though | true | 2023-07-17T15:41:16Z | 2023-07-19T16:59:55Z | 2023-07-19T16:48:06Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6044 | 2023-07-19T16:48:06Z | 10 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6044 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,807,771,750 | 6,043 | Compression kwargs have no effect when saving datasets as csv | open | ### Describe the bug
When attempting to save a dataset as a compressed CSV file, the compression kwargs provided to `.to_csv()` that get piped to pandas' `pandas.DataFrame.to_csv` have no effect, resulting in the dataset not getting compressed.
A warning is raised if explicitly providing a `compression` kwarg, ... | true | 2023-07-17T13:19:21Z | 2023-07-22T17:34:18Z | null | exs-avianello | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6043 | false | [
"Hello @exs-avianello, I have reproduced the bug successfully and have understood the problem. But I am confused regarding this part of the statement, \"`pandas.DataFrame.to_csv` is always called with a buf-like `path_or_buf`\".\r\n\r\nCan you please elaborate on it?\r\n\r\nThanks!",
"Hi @aryanxk02 ! Sure, what I... |
1,807,516,762 | 6,042 | Fix unused DatasetInfosDict code in push_to_hub | closed | true | 2023-07-17T11:03:09Z | 2023-07-18T16:17:52Z | 2023-07-18T16:08:42Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6042 | 2023-07-18T16:08:42Z | 3 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6042 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,807,441,055 | 6,041 | Flatten repository_structure docs on yaml | closed | To have Splits, Configurations and Builder parameters at the same doc level | true | 2023-07-17T10:15:10Z | 2023-07-17T10:24:51Z | 2023-07-17T10:16:22Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6041 | 2023-07-17T10:16:22Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6041 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6041). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
1,807,410,238 | 6,040 | Fix legacy_dataset_infos | closed | was causing transformers CI to fail
https://circleci.com/gh/huggingface/transformers/855105 | true | 2023-07-17T09:56:21Z | 2023-07-17T10:24:34Z | 2023-07-17T10:16:03Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6040 | 2023-07-17T10:16:03Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6040 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,806,508,451 | 6,039 | Loading column subset from parquet file produces error since version 2.13 | closed | ### Describe the bug
`load_dataset` allows loading a subset of columns from a parquet file with the `columns` argument. Since version 2.13, this produces the following error:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/datasets/builder.py", line 1879, in ... | true | 2023-07-16T09:13:07Z | 2023-07-24T14:35:04Z | 2023-07-24T14:35:04Z | kklemon | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6039 | false | [] |
1,805,960,244 | 6,038 | File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'? | closed | Hi, I use the code below to load local file
```
def _split_generators(self, dl_manager):
# TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
# If several configurations are possible (listed in BUILDER_CONFIGS), the configurati... | true | 2023-07-15T07:58:08Z | 2023-07-24T11:54:15Z | 2023-07-24T11:54:15Z | BaiMeiyingxue | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6038 | false | [
"Instead of writing the loading script, you can use the built-in loader to [load JSON files](https://huggingface.co/docs/datasets/loading#json):\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"json\", data_files={\"train\": os.path.join(data_dir[\"train\"]), \"dev\": os.path.join(data_dir[\... |
1,805,887,184 | 6,037 | Documentation links to examples are broken | closed | ### Describe the bug
The links at the bottom of [add_dataset](https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html) to examples of specific datasets are all broken, for example
- text classification: [ag_news](https://github.com/huggingface/datasets/blob/master/datasets/ag_news/ag_news.py) (original data ... | true | 2023-07-15T04:54:50Z | 2023-07-17T22:35:14Z | 2023-07-17T15:10:32Z | david-waterworth | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6037 | false | [
"These docs are outdated (version 1.2.1 is over two years old). Please refer to [this](https://huggingface.co/docs/datasets/dataset_script) version instead.\r\n\r\nInitially, we hosted datasets in this repo, but now you can find them [on the HF Hub](https://huggingface.co/datasets) (e.g. the [`ag_news`](https://hug... |
1,805,138,898 | 6,036 | Deprecate search API | open | The Search API only supports Faiss and ElasticSearch as vector stores, is somewhat difficult to maintain (e.g., it still doesn't support ElasticSeach 8.0, difficult testing, ...), does not have the best design (adds a bunch of methods to the `Dataset` class that are only useful after creating an index), the usage doesn... | true | 2023-07-14T16:22:09Z | 2023-09-07T16:44:32Z | null | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6036 | null | 9 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6036 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,805,087,687 | 6,035 | Dataset representation | open | __repr__ and _repr_html_ now both are similar to that of Polars | true | 2023-07-14T15:42:37Z | 2023-07-19T19:41:35Z | null | Ganryuu | NONE | https://github.com/huggingface/datasets/pull/6035 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6035 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6035). All of your documentation changes will be reflected on that endpoint."
] |
1,804,501,361 | 6,034 | load_dataset hangs on WSL | closed | ### Describe the bug
load_dataset simply hangs. It happens once every ~5 times, and interestingly hangs for a multiple of 5 minutes (hangs for 5/10/15 minutes). Using the profiler in PyCharm shows that it spends the time at <method 'connect' of '_socket.socket' objects>. However, a local cache is available so I am not... | true | 2023-07-14T09:03:10Z | 2023-07-14T14:48:29Z | 2023-07-14T14:48:29Z | Andy-Zhou2 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6034 | false | [
"Even if a dataset is cached, we still make requests to check whether the cache is up-to-date. [This](https://huggingface.co/docs/datasets/v2.13.1/en/loading#offline) section in the docs explains how to avoid them and directly load the cached version.",
"Thanks - that works! However it doesn't resolve the origina... |
1,804,482,051 | 6,033 | `map` function doesn't fully utilize `input_columns`. | closed | ### Describe the bug
I wanted to select only some columns of data.
And I thought that's why the argument `input_columns` exists.
What I expected is like this:
If there are ["a", "b", "c", "d"] columns, and if I set `input_columns=["a", "d"]`, the data will have only ["a", "d"] columns.
But it doesn't select co... | true | 2023-07-14T08:49:28Z | 2023-07-14T09:16:04Z | 2023-07-14T09:16:04Z | kwonmha | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6033 | false | [] |
1,804,358,679 | 6,032 | DownloadConfig.proxies not work when load_dataset_builder calling HfApi.dataset_info | open | ### Describe the bug
```python
download_config = DownloadConfig(proxies={'https': '<my proxy>'})
builder = load_dataset_builder(..., download_config=download_config)
```
But when getting the dataset_info from HfApi, the HTTP requests do not use the proxies.
### Steps to reproduce the bug
1. Setup proxies i... | true | 2023-07-14T07:22:55Z | 2023-09-11T13:50:41Z | null | codingl2k1 | NONE | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6032 | false | [
"`HfApi` comes from the `huggingface_hub` package. You can use [this](https://huggingface.co/docs/huggingface_hub/v0.16.3/en/package_reference/utilities#huggingface_hub.configure_http_backend) utility to change the `huggingface_hub`'s `Session` proxies (see the example).\r\n\r\nWe plan to implement https://github.c... |
1,804,183,858 | 6,031 | Argument type for map function changes when using `input_columns` for `IterableDataset` | closed | ### Describe the bug
I wrote `tokenize(examples)` function as an argument for `map` function for `IterableDataset`.
It process dictionary type `examples` as a parameter.
It is used in `train_dataset = train_dataset.map(tokenize, batched=True)`
No error is raised.
And then, I found some unnecessary keys and val... | true | 2023-07-14T05:11:14Z | 2023-07-14T14:44:15Z | 2023-07-14T14:44:15Z | kwonmha | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6031 | false | [
"Yes, this is intended."
] |
1,803,864,744 | 6,030 | fixed typo in comment | closed | This mistake was a bit confusing, so I thought it was worth sending a PR over. | true | 2023-07-13T22:49:57Z | 2023-07-14T14:21:58Z | 2023-07-14T14:13:38Z | NightMachinery | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6030 | 2023-07-14T14:13:38Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6030 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,803,460,046 | 6,029 | [docs] Fix link | closed | Fixes link to the builder classes :) | true | 2023-07-13T17:24:12Z | 2023-07-13T17:47:41Z | 2023-07-13T17:38:59Z | stevhliu | MEMBER | https://github.com/huggingface/datasets/pull/6029 | 2023-07-13T17:38:59Z | 3 | 1 | 0 | 1 | false | false | [] | https://github.com/huggingface/datasets/pull/6029 | true | [] |
1,803,294,981 | 6,028 | Use new hffs | closed | Thanks to @janineguo 's work in https://github.com/huggingface/datasets/pull/5919 which was needed to support HfFileSystem.
Switching to `HfFileSystem` will help implement optimizations in data files resolution.
## Implementation details
I replaced all the from_hf_repo and from_local_or_remote in data_files.p... | true | 2023-07-13T15:41:44Z | 2023-07-17T17:09:39Z | 2023-07-17T17:01:00Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6028 | 2023-07-17T17:01:00Z | 13 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6028 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,803,008,486 | 6,027 | Delete `task_templates` in `IterableDataset` when they are no longer valid | closed | Fix #6025 | true | 2023-07-13T13:16:17Z | 2023-07-13T14:06:20Z | 2023-07-13T13:57:35Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6027 | 2023-07-13T13:57:35Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6027 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,802,929,222 | 6,026 | Fix style with ruff 0.0.278 | closed | true | 2023-07-13T12:34:24Z | 2023-07-13T12:46:26Z | 2023-07-13T12:37:01Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6026 | 2023-07-13T12:37:01Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6026 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6026). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... | |
1,801,852,601 | 6,025 | Using a dataset for a use other than it was intended for. | closed | ### Describe the bug
Hi, I want to use the rotten tomatoes dataset for a task other than classification, but when I interleave the dataset, it throws ```'ValueError: Column label is not present in features.'```. It seems that the label_col must be there in the dataset for some reason?
Here is the full stacktra... | true | 2023-07-12T22:33:17Z | 2023-07-13T13:57:36Z | 2023-07-13T13:57:36Z | surya-narayanan | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6025 | false | [
"I've opened a PR with a fix. In the meantime, you can avoid the error by deleting `task_templates` with `dataset.info.task_templates = None` before the `interleave_datasets` call.\r\n` "
] |
1,801,708,808 | 6,024 | Don't reference self in Spark._validate_cache_dir | closed | Fix for https://github.com/huggingface/datasets/issues/5963 | true | 2023-07-12T20:31:16Z | 2023-07-13T16:58:32Z | 2023-07-13T12:37:09Z | maddiedawson | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/6024 | 2023-07-13T12:37:09Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6024 | true | [
"Ptal @lhoestq :) I tested this manually on a multi-node Databricks cluster",
"Hm looks like the check_code_quality failures are unrelated to my change... https://github.com/huggingface/datasets/actions/runs/5536162850/jobs/10103451883?pr=6024",
"_The documentation is not available anymore as the PR was closed ... |
1,801,272,420 | 6,023 | Fix `ClassLabel` min max check for `None` values | closed | Fix #6022 | true | 2023-07-12T15:46:12Z | 2023-07-12T16:29:26Z | 2023-07-12T16:18:04Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6023 | 2023-07-12T16:18:04Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6023 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,800,092,589 | 6,022 | Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 'int' | closed | ### Describe the bug
When mapping some datasets with `batched=True`, datasets may raise an exception:
```python
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
... | true | 2023-07-12T03:20:17Z | 2023-07-12T16:18:06Z | 2023-07-12T16:18:05Z | codingl2k1 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6022 | false | [
"Thanks for reporting! I've opened a PR with a fix."
] |
1,799,785,904 | 6,021 | [docs] Update return statement of index search | closed | Clarifies in the return statement of the docstring that the retrieval score is `IndexFlatL2` by default (see [PR](https://github.com/huggingface/transformers/issues/24739) and internal Slack [convo](https://huggingface.slack.com/archives/C01229B19EX/p1689105179711689)), and fixes the formatting because multiple return ... | true | 2023-07-11T21:33:32Z | 2023-07-12T17:13:02Z | 2023-07-12T17:03:00Z | stevhliu | MEMBER | https://github.com/huggingface/datasets/pull/6021 | 2023-07-12T17:03:00Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6021 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,799,720,536 | 6,020 | Inconsistent "The features can't be aligned" error when combining map, multiprocessing, and variable length outputs | open | ### Describe the bug
I'm using a dataset with map and multiprocessing to run a function that returns a variable-length list of outputs. This output list may be empty. Normally this is handled fine, but there is an edge case that crops up when using multiprocessing. In some cases, an empty list result ends up in a dat...
"This scenario currently requires explicitly passing the target features (to avoid the error): \r\n```python\r\nimport datasets\r\n\r\n...\r\n\r\nfeatures = dataset.features\r\nfeatures[\"output\"] = = [{\"test\": datasets.Value(\"int64\")}]\r\ntest2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=... |
1,799,532,822 | 6,019 | Improve logging | closed | Adds the StreamHandler (as `hfh` and `transformers` do) to the library's logger to log INFO messages and logs the messages about "loading a cached result" (and some other warnings) as INFO
(Also removes the `leave=False` arg in the progress bars to be consistent with `hfh` and `transformers` - progress bars serve as... | true | 2023-07-11T18:30:23Z | 2023-07-12T19:34:14Z | 2023-07-12T17:19:28Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6019 | 2023-07-12T17:19:28Z | 13 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6019 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,799,411,999 | 6,018 | test1 | closed | true | 2023-07-11T17:25:49Z | 2023-07-20T10:11:41Z | 2023-07-20T10:11:41Z | ognjenovicj | NONE | https://github.com/huggingface/datasets/pull/6018 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6018 | true | [
"We no longer host datasets in this repo. You should use the HF Hub instead."
] | |
1,799,309,132 | 6,017 | Switch to huggingface_hub's HfFileSystem | closed | instead of the current datasets.filesystems.hffilesystem.HfFileSystem which can be slow in some cases
related to https://github.com/huggingface/datasets/issues/5846 and https://github.com/huggingface/datasets/pull/5919 | true | 2023-07-11T16:24:40Z | 2023-07-17T17:01:01Z | 2023-07-17T17:01:01Z | lhoestq | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6017 | false | [] |
1,798,968,033 | 6,016 | Dataset string representation enhancement | open | my attempt at #6010
Not sure if this is the right way to go about it; I will wait for your feedback.
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6016). All of your documentation changes will be reflected on that endpoint.",
"It we could have something similar to Polars, that would be great.\r\n\r\nThis is what Polars outputs: \r\n* `__repr__`/`__str__` :\r\n```\r\nshape... |
1,798,807,893 | 6,015 | Add metadata ui screenshot in docs | closed | true | 2023-07-11T12:16:29Z | 2023-07-11T16:07:28Z | 2023-07-11T15:56:46Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/6015 | 2023-07-11T15:56:46Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6015 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | |
1,798,213,816 | 6,014 | Request to Share/Update Dataset Viewer Code | closed | Overview:
The repository (huggingface/datasets-viewer) was recently archived and when I tried to run the code, there was the error message "AttributeError: module 'datasets.load' has no attribute 'prepare_module'". I could not resolve the issue myself due to lack of documentation of that attribute.
Request:
I kin... | true | 2023-07-11T06:36:09Z | 2024-07-20T07:29:08Z | 2023-09-25T12:01:17Z | lilyorlilypad | NONE | null | null | 10 | 0 | 0 | 0 | null | false | [
"duplicate"
] | https://github.com/huggingface/datasets/issues/6014 | false | [
"Hi ! The huggingface/dataset-viewer code was not maintained anymore because we switched to a new dataset viewer that is deployed available for each dataset the Hugging Face website.\r\n\r\nWhat are you using this old repository for ?",
"I think these parts are outdated:\r\n\r\n* https://github.com/huggingface/da... |
1,796,083,437 | 6,013 | [FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage | open | ### Feature request
Currently adding a new column with `map` will cause all the data in the dataset to be duplicated and stored/cached on the disk again. It should reuse unchanged columns.
### Motivation
This allows having datasets with different columns but sharing some basic columns. Currently, these datasets wou... | true | 2023-07-10T06:42:20Z | 2025-06-19T06:30:38Z | null | NightMachinery | CONTRIBUTOR | null | null | 2 | 0 | 0 | 0 | null | false | [
"enhancement",
"good second issue"
] | https://github.com/huggingface/datasets/issues/6013 | false | [
"You can use the `remove_columns` parameter in `map` to avoid duplicating the columns (and save disk space) and then concatenate the original dataset with the map result:\r\n```python\r\nfrom datasets import concatenate_datasets\r\n# dummy example\r\nds_new = ds.map(lambda x: {\"new_col\": x[\"col\"] + 2}, remove_c... |
1,795,575,432 | 6,012 | [FR] Transform Chaining, Lazy Mapping | open | ### Feature request
Currently using a `map` call processes and duplicates the whole dataset, which takes both time and disk space.
The solution is to allow lazy mapping, which is essentially a saved chain of transforms that are applied on the fly whenever a slice of the dataset is requested.
The API should look ... | true | 2023-07-09T21:40:21Z | 2025-01-20T14:06:28Z | null | NightMachinery | CONTRIBUTOR | null | null | 9 | 6 | 6 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6012 | false | [
"You can use `with_transform` to get a new dataset object.\r\n\r\nSupport for lazy `map` has already been discussed [here](https://github.com/huggingface/datasets/issues/3385) a little bit. Personally, I'm not a fan, as this would make `map` even more complex. ",
"> You can use `with_transform` to get a new datas... |
1,795,296,568 | 6,011 | Documentation: wiki_dpr Dataset has no metric_type for Faiss Index | closed | ### Describe the bug
After loading `wiki_dpr` using:
```py
ds = load_dataset(path='wiki_dpr', name='psgs_w100.multiset.compressed', split='train')
print(ds.get_index("embeddings").metric_type) # prints nothing because the value is None
```
the index does not have a defined `metric_type`. This is an issue because ... | true | 2023-07-09T08:30:19Z | 2023-07-11T03:02:36Z | 2023-07-11T03:02:36Z | YichiRockyZhang | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6011 | false | [
"Hi! You can do `ds.get_index(\"embeddings\").faiss_index.metric_type` to get the metric type and then match the result with the FAISS metric [enum](https://github.com/facebookresearch/faiss/blob/43d86e30736ede853c384b24667fc3ab897d6ba9/faiss/MetricType.h#L22-L36) (should be L2).",
"Ah! Thank you for pointing thi... |
1,793,838,152 | 6,010 | Improve `Dataset`'s string representation | open | Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows.
We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit. | true | 2023-07-07T16:38:03Z | 2023-09-01T03:45:07Z | null | mariosasko | COLLABORATOR | null | null | 3 | 1 | 1 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/6010 | false | [
"I want to take a shot at this if possible ",
"Yes, feel free to work on this.\r\n\r\nYou can check the PyArrow Table `__repr__` and Polars DataFrame `__repr__`/`_repr_html_` implementations for some pointers/ideas.",
"@mariosasko are there any other similar issues that I could work on? I see this has been alr... |
1,792,059,808 | 6,009 | Fix cast for dictionaries with no keys | closed | Fix #5677 | true | 2023-07-06T18:48:14Z | 2023-07-07T14:13:00Z | 2023-07-07T14:01:13Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6009 | 2023-07-07T14:01:13Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6009 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,789,869,344 | 6,008 | Dataset.from_generator consistently freezes at ~1000 rows | closed | ### Describe the bug
Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. I
Somehow it worked a few times but mostly this makes the datasets library much more ... | true | 2023-07-05T16:06:48Z | 2023-07-10T13:46:39Z | 2023-07-10T13:46:39Z | andreemic | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6008 | false | [
"By default, we write data to disk (so it can be memory-mapped) every 1000 rows/samples. You can control this with the `writer_batch_size` parameter. Also, when working with fixed-size arrays, the `ArrayXD` feature types yield better performance (e.g., in your case, `features=datasets.Features({\"i\": datasets.Arra... |
1,789,782,693 | 6,007 | Get an error "OverflowError: Python int too large to convert to C long" when loading a large dataset | open | ### Describe the bug
When load a large dataset with the following code
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'news_peoples_daily', split='train')
```
We encountered the error: "OverflowError: Python int too large to convert to C long"
The error look something like... | true | 2023-07-05T15:16:50Z | 2024-02-07T22:22:35Z | null | silverriver | CONTRIBUTOR | null | null | 8 | 0 | 0 | 0 | null | false | [
"arrow"
] | https://github.com/huggingface/datasets/issues/6007 | false | [
"This error means that one of the int32 (`Value(\"int32\")`) columns in the dataset has a value that is out of the valid (int32) range.\r\n\r\nI'll open a PR to print the name of a problematic column to make debugging such errors easier.",
"I am afraid int32 is not the reason for this error.\r\n\r\nI have submitt... |
1,788,855,582 | 6,006 | NotADirectoryError when loading gigawords | closed | ### Describe the bug
got `NotADirectoryError` whtn loading gigawords dataset
### Steps to reproduce the bug
When running
```
import datasets
datasets.load_dataset('gigaword')
```
Got the following exception:
```bash
Traceback (most recent call last): ... | true | 2023-07-05T06:23:41Z | 2023-07-05T06:31:02Z | 2023-07-05T06:31:01Z | xipq | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6006 | false | [
"issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvenience."
] |
1,788,103,576 | 6,005 | Drop Python 3.7 support | closed | `hfh` and `transformers` have dropped Python 3.7 support, so we should do the same :).
(Based on the stats, it seems less than 10% of the users use `datasets` with Python 3.7) | true | 2023-07-04T15:02:37Z | 2023-07-06T15:32:41Z | 2023-07-06T15:22:43Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6005 | 2023-07-06T15:22:43Z | 7 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6005 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,786,636,368 | 6,004 | Misc improvements | closed | Contains the following improvements:
* fixes a "share dataset" link in README and modifies the "hosting" part in the disclaimer section
* updates `Makefile` to also run the style checks on `utils` and `setup.py`
* deletes a test for GH-hosted datasets (no longer supported)
* deletes `convert_dataset.sh` (outdated... | true | 2023-07-03T18:29:14Z | 2023-07-06T17:04:11Z | 2023-07-06T16:55:25Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6004 | 2023-07-06T16:55:25Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6004 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,786,554,110 | 6,003 | interleave_datasets & DataCollatorForLanguageModeling having a conflict ? | open | ### Describe the bug
Hi everyone :)
I have two local & custom datasets (1 "sentence" per line) which I split along the 95/5 lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_dataset`:
- `tokenize()` runs fine
- `group_text()` runs fine
... | true | 2023-07-03T17:15:31Z | 2023-07-03T17:15:31Z | null | PonteIneptique | NONE | null | null | 0 | 1 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/6003 | false | [] |
1,786,053,060 | 6,002 | Add KLUE-MRC metrics | closed | ## Metrics for KLUE-MRC (Korean Language Understanding Evaluation — Machine Reading Comprehension)
Adding metrics for [KLUE-MRC](https://huggingface.co/datasets/klue).
KLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format which is why I added metrics for KLUE-MRC.
Specifically, in the case of... | true | 2023-07-03T12:11:10Z | 2023-07-09T11:57:20Z | 2023-07-09T11:57:20Z | ingyuseong | NONE | https://github.com/huggingface/datasets/pull/6002 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6002 | true | [
"The metrics API in `datasets` is deprecated as of version 2.0, and `evaulate` is our new library for metrics. You can add a new metric to it by following [these steps](https://huggingface.co/docs/evaluate/creating_and_sharing)."
] |
1,782,516,627 | 6,001 | Align `column_names` type check with type hint in `sort` | closed | Fix #5998 | true | 2023-06-30T13:15:50Z | 2023-06-30T14:18:32Z | 2023-06-30T14:11:24Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6001 | 2023-06-30T14:11:24Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6001 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,782,456,878 | 6,000 | Pin `joblib` to avoid `joblibspark` test failures | closed | `joblibspark` doesn't support the latest `joblib` release.
See https://github.com/huggingface/datasets/actions/runs/5401870932/jobs/9812337078 for the errors | true | 2023-06-30T12:36:54Z | 2023-06-30T13:17:05Z | 2023-06-30T13:08:27Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/6000 | 2023-06-30T13:08:27Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/6000 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,781,851,513 | 5,999 | Getting a 409 error while loading xglue dataset | closed | ### Describe the bug
Unable to load xglue dataset
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("xglue", "ntg")
```
> ConnectionError: Couldn't reach https://xglue.blob.core.windows.net/xglue/xglue_full_dataset.tar.gz (error 409)
### Expected behavior
Expected the... | true | 2023-06-30T04:13:54Z | 2023-06-30T05:57:23Z | 2023-06-30T05:57:22Z | Praful932 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/5999 | false | [
"Thanks for reporting, @Praful932.\r\n\r\nLet's continue the conversation on the Hub: https://huggingface.co/datasets/xglue/discussions/5"
] |
1,781,805,018 | 5,998 | The current implementation has a potential bug in the sort method | closed | ### Describe the bug
In the sort method,here's a piece of code
```python
# column_names: Union[str, Sequence_[str]]
# Check proper format of and for duplicates in column_names
if not isinstance(column_names, list):
column_names = [column_names]
```
I get an error when I pass in a tuple based on the ... | true | 2023-06-30T03:16:57Z | 2023-06-30T14:21:03Z | 2023-06-30T14:11:25Z | wangyuxinwhy | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/5998 | false | [
"Thanks for reporting, @wangyuxinwhy. "
] |
1,781,582,818 | 5,997 | extend the map function so it can wrap around long text that does not fit in the context window | open | ### Feature request
I understand `dataset` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function in turn takes in a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a model's con...
"enhancement"
] | https://github.com/huggingface/datasets/issues/5997 | false | [
"I just noticed the [docs](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2881C11-L2881C200) say:\r\n\r\n>If batched is `True` and `batch_size` is `n > 1`, then the function takes a batch of `n` examples as input and can return a batch with `n` examples, or with an arbitrary number... |
1,779,294,374 | 5,996 | Deprecate `use_auth_token` in favor of `token` | closed | ... to be consistent with `transformers` and `huggingface_hub`. | true | 2023-06-28T16:26:38Z | 2023-07-05T15:22:20Z | 2023-07-03T16:03:33Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/5996 | 2023-07-03T16:03:33Z | 9 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/5996 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,777,088,925 | 5,995 | Support returning dataframe in map transform | closed | Allow returning Pandas DataFrames in `map` transforms.
(Plus, raise an error in the non-batched mode if a returned PyArrow table/Pandas DataFrame has more than one row) | true | 2023-06-27T14:15:08Z | 2023-06-28T13:56:02Z | 2023-06-28T13:46:33Z | mariosasko | COLLABORATOR | https://github.com/huggingface/datasets/pull/5995 | 2023-06-28T13:46:33Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/5995 | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
1,776,829,004 | 5,994 | Fix select_columns columns order | closed | Fix the order of the columns in dataset.features when the order changes with `dataset.select_columns()`.
I also fixed the same issue for `dataset.flatten()`
Close https://github.com/huggingface/datasets/issues/5993 | true | 2023-06-27T12:32:46Z | 2023-06-27T15:40:47Z | 2023-06-27T15:32:43Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/5994 | 2023-06-27T15:32:43Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/5994 | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
1,776,643,555 | 5,993 | ValueError: Table schema does not match schema used to create file | closed | ### Describe the bug
Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset... | true | 2023-06-27T10:54:07Z | 2023-06-27T15:36:42Z | 2023-06-27T15:32:44Z | exs-avianello | NONE | null | null | 2 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/5993 | false | [
"We'll do a new release of `datasets` soon to make the fix available :)\r\n\r\nIn the meantime you can use `datasets` from source (main)",
"Thank you very much @lhoestq ! 🚀 "
] |
1,776,460,964 | 5,992 | speedup | closed | true | 2023-06-27T09:17:58Z | 2023-06-27T09:23:07Z | 2023-06-27T09:18:04Z | qgallouedec | MEMBER | https://github.com/huggingface/datasets/pull/5992 | null | 1 | 0 | 0 | 0 | true | false | [] | https://github.com/huggingface/datasets/pull/5992 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5992). All of your documentation changes will be reflected on that endpoint."
] | |
1,774,456,518 | 5,991 | `map` with any joblib backend | open | We recently enabled the (experimental) parallel backend switch for data download and extraction but not for `map` yet.
Right now we're using our `iflatmap_unordered` implementation for multiprocessing that uses a shared Queue to gather progress updates from the subprocesses and show a progress bar in the main proces... | true | 2023-06-26T10:33:42Z | 2023-06-26T10:33:42Z | null | lhoestq | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/5991 | false | [] |
1,774,134,091 | 5,989 | Set a rule on the config and split names | open | > should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols and directly in datasets and raise
https://github.com/huggingface/datasets-server/issues/853 | true | 2023-06-26T07:34:14Z | 2023-07-19T14:22:54Z | null | severo | COLLABORATOR | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/5989 | false | [
"in this case we need to decide what to do with the existing datasets with white space characters (there shouldn't be a lot of them I think)",
"I imagine that we should stop supporting them, and help the user fix them?",
"See a report where the datasets server fails: https://huggingface.co/datasets/poloclub/dif... |
1,773,257,828 | 5,988 | ConnectionError: Couldn't reach dataset_infos.json | closed | ### Describe the bug
I'm trying to load codeparrot/codeparrot-clean-train, but get the following error:
ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'C... | true | 2023-06-25T12:39:31Z | 2023-07-07T13:20:57Z | 2023-07-07T13:20:57Z | yulingao | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/5988 | false | [
"Unfortunately, I can't reproduce the error. What does the following code return for you?\r\n```python\r\nimport requests\r\nfrom huggingface_hub import hf_hub_url\r\nr = requests.get(hf_hub_url(\"codeparrot/codeparrot-clean-train\", \"dataset_infos.json\", repo_type=\"dataset\"))\r\n```\r\n\r\nAlso, can you provid... |
1,773,047,909 | 5,987 | Why max_shard_size is not supported in load_dataset and passed to download_and_prepare | closed | ### Describe the bug
https://github.com/huggingface/datasets/blob/a8a797cc92e860c8d0df71e0aa826f4d2690713e/src/datasets/load.py#L1809
What I can do is break the `load_dataset` and use `load_dataset_builder` + `download_and_prepare` instead.
### Steps to reproduce the bug
https://github.com/huggingface/datasets/blo... | true | 2023-06-25T04:19:13Z | 2023-06-29T16:06:08Z | 2023-06-29T16:06:08Z | npuichigo | CONTRIBUTOR | null | null | 5 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/5987 | false | [
"Can you explain your use case for `max_shard_size`? \r\n\r\nOn some systems, there is a limit to the size of a memory-mapped file, so we could consider exposing this parameter in `load_dataset`.",
"In my use case, users may choose a proper size to balance the cost and benefit of using large shard size. (On azure... |
1,772,233,111 | 5,986 | Make IterableDataset.from_spark more efficient | closed | Moved the code from using collect() to using toLocalIterator, which allows for prefetching partitions that will be selected next, thus allowing for better performance when iterating. | true | 2023-06-23T22:18:20Z | 2023-07-07T10:05:58Z | 2023-07-07T09:56:09Z | mathewjacob1002 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/5986 | 2023-07-07T09:56:09Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/5986 | true | [
"@lhoestq would you be able to review this please and also approve the workflow?",
"Sounds good to me :) feel free to run `make style` to apply code formatting",
"_The documentation is not available anymore as the PR was closed or merged._",
"cool ! I think we can merge once all comments have been addressed",... |