id | number | title | state | body | is_pull_request | created_at | updated_at | closed_at | user_login | author_association | pr_url | pr_merged_at | comments_count | reactions_total | reactions_plus1 | reactions_heart | draft | locked | labels | html_url | is_pr_url | comments
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,583,233,980 | 7,224 | fallback to default feature casting in case custom features not available during dataset loading | open | a fix for #7223 in case datasets is happy to support this kind of extensibility! seems cool / powerful for allowing sharing of datasets with potentially different feature types | true | 2024-10-12T16:13:56Z | 2024-10-12T16:13:56Z | null | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7224 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7224 | true | [] |
2,583,231,590 | 7,223 | Fallback to arrow defaults when loading dataset with custom features that aren't registered locally | open | ### Describe the bug
Datasets allows users to create and register custom features.
However if datasets are then pushed to the hub, this means that anyone calling load_dataset without registering the custom Features in the same way as the dataset creator will get an error message.
It would be nice to offer a fall... | true | 2024-10-12T16:08:20Z | 2024-10-12T16:08:20Z | null | alex-hh | CONTRIBUTOR | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7223 | false | [] |
2,582,678,033 | 7,222 | TypeError: Couldn't cast array of type string to null in long json | open | ### Describe the bug
In general, changing the type from string to null is allowed within a dataset — there are even examples of this in the documentation.
However, if the dataset is large and unevenly distributed, this allowance stops working. The schema gets locked in after reading a chunk.
Consequently, if al... | true | 2024-10-12T08:14:59Z | 2025-02-23T13:01:47Z | null | nokados | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7222 | false | [
"I am encountering this same issue. It seems that the library manages to recognise an optional column (but not **exclusively** null) if there is at least one non-null instance within the same file. For example, given a `test_0.jsonl` file:\r\n```json\r\n{\"a\": \"a1\", \"b\": \"b1\", \"c\": null, \"d\": null}\r\n{\... |
2,582,114,631 | 7,221 | add CustomFeature base class to support user-defined features with encoding/decoding logic | closed | intended as fix for #7220 if this kind of extensibility is something that datasets is willing to support!
```python
from datasets.features.features import CustomFeature
class ListOfStrs(CustomFeature):
requires_encoding = True
def _encode_example(self, value):
if isinstance(value, str):
... | true | 2024-10-11T20:10:27Z | 2025-01-28T09:40:29Z | 2025-01-28T09:40:29Z | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7221 | null | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7221 | true | [
"@lhoestq would you be open to supporting this kind of extensibility?",
"I suggested a fix in https://github.com/huggingface/datasets/issues/7220 that would not necessarily require a parent class for custom features, lmk what you think"
] |
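The code in the PR body above is truncated; the idea can be restated as a hypothetical, self-contained sketch (everything beyond the names `CustomFeature`, `requires_encoding`, and `_encode_example` quoted in the record is illustrative, not the actual PR code):

```python
class CustomFeature:
    # Hypothetical minimal base class mirroring the PR's proposal:
    # subclasses opt in to encoding by setting requires_encoding.
    requires_encoding = False

    def encode_example(self, value):
        # In the proposed design this would be called by Features.encode_example.
        return self._encode_example(value) if self.requires_encoding else value

    def _encode_example(self, value):
        raise NotImplementedError


class ListOfStrs(CustomFeature):
    requires_encoding = True

    def _encode_example(self, value):
        # Wrap a bare string into a singleton list so storage is uniform.
        return [value] if isinstance(value, str) else value


print(ListOfStrs().encode_example("hello"))  # → ['hello']
```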
2,582,036,110 | 7,220 | Custom features not compatible with special encoding/decoding logic | open | ### Describe the bug
It is possible to register custom features using datasets.features.features.register_feature (https://github.com/huggingface/datasets/pull/6727)
However such features are not compatible with Features.encode_example/decode_example if they require special encoding / decoding logic because encod... | true | 2024-10-11T19:20:11Z | 2024-11-08T15:10:58Z | null | alex-hh | CONTRIBUTOR | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7220 | false | [
"I think you can fix this simply by replacing the line with hardcoded features with `hasattr(schema, \"encode_example\")` actually",
"#7284 "
] |
2,581,708,084 | 7,219 | bump fsspec | closed | true | 2024-10-11T15:56:36Z | 2024-10-14T08:21:56Z | 2024-10-14T08:21:55Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7219 | 2024-10-14T08:21:55Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7219 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7219). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,581,095,098 | 7,217 | ds.map(f, num_proc=10) is slower than df.apply | open | ### Describe the bug
pandas columns: song_id, song_name
ds = Dataset.from_pandas(df)
def has_cover(song_name):
if song_name is None or pd.isna(song_name):
return False
return 'cover' in song_name.lower()
df['has_cover'] = df.song_name.progress_apply(has_cover)
ds = ds.map(lambda x: {'has_cov... | true | 2024-10-11T11:04:05Z | 2025-02-28T21:21:01Z | null | lanlanlanlanlanlan365 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7217 | false | [
"Hi ! `map()` reads all the columns and writes the resulting dataset with all the columns as well, while df.column_name.apply only reads and writes one column and does it in memory. So this is speed difference is actually expected.\r\n\r\nMoreover using multiprocessing on a dataset that lives in memory (from_pandas... |
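The truncated reproduction in this record can be restated as a runnable pandas-only sketch (the sample rows are invented for illustration):

```python
import pandas as pd

# Toy stand-in for the reporter's dataframe with song_id / song_name columns.
df = pd.DataFrame({
    "song_id": [1, 2, 3],
    "song_name": ["Hallelujah (Cover)", None, "Original Song"],
})

def has_cover(song_name):
    # Guard against missing values before lowercasing.
    if song_name is None or pd.isna(song_name):
        return False
    return "cover" in song_name.lower()

df["has_cover"] = df.song_name.apply(has_cover)
print(df.has_cover.tolist())  # → [True, False, False]
```

As the reply above notes, `df.column.apply` touches a single in-memory column, while `Dataset.map` reads and rewrites every column, so a speed gap is expected.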
2,579,942,939 | 7,215 | Iterable dataset map with explicit features causes slowdown for Sequence features | open | ### Describe the bug
When performing map, it's nice to be able to pass the new feature type, and indeed required by interleave and concatenate datasets.
However, this can cause a major slowdown for certain types of array features due to the features being re-encoded.
This is separate to the slowdown reported i... | true | 2024-10-10T22:08:20Z | 2024-10-10T22:10:32Z | null | alex-hh | CONTRIBUTOR | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7215 | false | [] |
2,578,743,713 | 7,214 | Formatted map + with_format(None) changes array dtype for iterable datasets | open | ### Describe the bug
When applying with_format -> map -> with_format(None), array dtypes seem to change, even if features are passed
### Steps to reproduce the bug
```python
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32")})
dataset = Dataset.from_dict({f"array0": [np.zeros((100,10,10... | true | 2024-10-10T12:45:16Z | 2024-10-12T16:55:57Z | null | alex-hh | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7214 | false | [
"possibly due to this logic:\r\n\r\n```python\r\n def _arrow_array_to_numpy(self, pa_array: pa.Array) -> np.ndarray:\r\n if isinstance(pa_array, pa.ChunkedArray):\r\n if isinstance(pa_array.type, _ArrayXDExtensionType):\r\n # don't call to_pylist() to preserve dtype of the fixed-... |
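The dtype change described in this issue is consistent with round-tripping fixed-dtype arrays through Python lists; a minimal numpy-only illustration, independent of `datasets`:

```python
import numpy as np

# Same shape/dtype as the Array3D feature in the reproduction above.
arr = np.zeros((2, 10, 10), dtype="float32")

# Converting to Python lists and back loses the original dtype:
# np.array infers float64 from plain Python floats.
roundtrip = np.array(arr.tolist())

print(arr.dtype, roundtrip.dtype)  # → float32 float64
```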
2,578,675,565 | 7,213 | Add with_rank to Dataset.from_generator | open | ### Feature request
Add `with_rank` to `Dataset.from_generator` similar to `Dataset.map` and `Dataset.filter`.
### Motivation
As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files using multi-GPU, where the rank can be used to select GPU IDs. For now, rank can be added in the `ge... | true | 2024-10-10T12:15:29Z | 2024-10-10T12:17:11Z | null | muthissar | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7213 | false | [] |
2,578,641,259 | 7,212 | Windows does not support signal.alarm and signal.signal | open | ### Describe the bug
signal.alarm and signal.signal are used in the load.py module, but these are not supported by Windows.
### Steps to reproduce the bug
lighteval accelerate --model_args "pretrained=gpt2,trust_remote_code=True" --tasks "community|kinit_sts" --custom_tasks "community_tasks/kinit_evals.py" --output... | true | 2024-10-10T12:00:19Z | 2024-10-10T12:00:19Z | null | TomasJavurek | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7212 | false | [] |
2,576,400,502 | 7,211 | Describe only selected fields in README | open | ### Feature request
Hi Datasets team!
Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some f... | true | 2024-10-09T16:25:47Z | 2024-10-09T16:25:47Z | null | alozowski | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7211 | false | [] |
2,575,883,939 | 7,210 | Convert Array features to numpy arrays rather than lists by default | open | ### Feature request
It is currently quite easy to cause massive slowdowns when using datasets and not familiar with the underlying data conversions by e.g. making bad choices of formatting.
Would it be more user-friendly to set defaults that avoid this as much as possible? e.g. format Array features as numpy arrays... | true | 2024-10-09T13:05:21Z | 2024-10-09T13:05:21Z | null | alex-hh | CONTRIBUTOR | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7210 | false | [] |
2,575,526,651 | 7,209 | Preserve features in iterable dataset.filter | closed | Fixes example in #7208 - I'm not sure what other checks I should do? @lhoestq
I also haven't thought hard about the concatenate / interleaving example iterables but think this might work assuming that features are either all identical or None? | true | 2024-10-09T10:42:05Z | 2024-10-16T11:27:22Z | 2024-10-09T16:04:07Z | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7209 | 2024-10-09T16:04:07Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7209 | true | [
"Yes your assumption on concatenate/interleave is ok imo.\r\n\r\nIt seems the TypedExamplesIterable can slow down things, it should take formatting into account to not convert numpy arrays to python lists\r\n\r\nright now it's slow (unrelatedly to your PR):\r\n\r\n```python\r\n>>> ds = Dataset.from_dict({\"a\": np.... |
2,575,484,256 | 7,208 | Iterable dataset.filter should not override features | closed | ### Describe the bug
When calling filter on an iterable dataset, the features get set to None
### Steps to reproduce the bug
import numpy as np
import time
from datasets import Dataset, Features, Array3D
```python
features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,... | true | 2024-10-09T10:23:45Z | 2024-10-09T16:08:46Z | 2024-10-09T16:08:45Z | alex-hh | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7208 | false | [
"closed by https://github.com/huggingface/datasets/pull/7209, thanks @alex-hh !"
] |
2,573,582,335 | 7,207 | apply formatting after iter_arrow to speed up format -> map, filter for iterable datasets | closed | I got to this by hacking around a bit but it seems to solve #7206
I have no idea if this approach makes sense or would break something else?
Could maybe work on a full pr if this looks reasonable @lhoestq ? I imagine the same issue might affect other iterable dataset methods? | true | 2024-10-08T15:44:53Z | 2025-01-14T18:36:03Z | 2025-01-14T16:59:30Z | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7207 | 2025-01-14T16:59:30Z | 17 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7207 | true | [
"I think the problem is that the underlying ex_iterable will not use iter_arrow unless the formatting type is arrow, which leads to conversion from arrow -> python -> numpy in this case rather than arrow -> numpy.\r\n\r\nIdea of updated fix is to use the ex_iterable's iter_arrow in any case where it's available and... |
2,573,567,467 | 7,206 | Slow iteration for iterable dataset with numpy formatting for array data | open | ### Describe the bug
When working with large arrays, setting with_format to e.g. numpy then applying map causes a significant slowdown for iterable datasets.
### Steps to reproduce the bug
```python
import numpy as np
import time
from datasets import Dataset, Features, Array3D
features=Features(**{"array... | true | 2024-10-08T15:38:11Z | 2024-10-17T17:14:52Z | null | alex-hh | CONTRIBUTOR | null | null | 1 | 1 | 1 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7206 | false | [
"The below easily eats up 32G of RAM. Leaving it for a while bricked the laptop with 16GB.\r\n\r\n```\r\ndataset = load_dataset(\"Voxel51/OxfordFlowers102\", data_dir=\"data\").with_format(\"numpy\")\r\nprocessed_dataset = dataset.map(lambda x: x)\r\n```"
] |
2,573,289,063 | 7,204 | fix unbatched arrow map for iterable datasets | closed | Fixes the bug when applying map to an arrow-formatted iterable dataset described here:
https://github.com/huggingface/datasets/issues/6833#issuecomment-2399903885
```python
from datasets import load_dataset
ds = load_dataset("rotten_tomatoes", split="train", streaming=True)
ds = ds.with_format("arrow").map(l... | true | 2024-10-08T13:54:09Z | 2024-10-08T14:19:47Z | 2024-10-08T14:19:47Z | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7204 | 2024-10-08T14:19:46Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7204 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7204). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,573,154,222 | 7,203 | with_format docstring | closed | reported at https://github.com/huggingface/datasets/issues/3444 | true | 2024-10-08T13:05:19Z | 2024-10-08T13:13:12Z | 2024-10-08T13:13:05Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7203 | 2024-10-08T13:13:05Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7203 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7203). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,572,583,798 | 7,202 | `from_parquet` return type annotation | open | ### Describe the bug
As already posted in https://github.com/microsoft/pylance-release/issues/6534, the correct type hinting fails when building a dataset using the `from_parquet` constructor.
Their suggestion is to comprehensively annotate the method's return type to better align with the docstring information.
###... | true | 2024-10-08T09:08:10Z | 2024-10-08T09:08:10Z | null | saiden89 | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7202 | false | [] |
2,569,837,015 | 7,201 | `load_dataset()` of images from a single directory where `train.png` image exists | open | ### Describe the bug
Hey!
Firstly, thanks for maintaining such framework!
I had a small issue, where I wanted to load a custom dataset of image+text captioning. I had all of my images in a single directory, and one of the images had the name `train.png`. Then, the loaded dataset had only this image.
I guess it'... | true | 2024-10-07T09:14:17Z | 2024-10-07T09:14:17Z | null | SagiPolaczek | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7201 | false | [] |
2,567,921,694 | 7,200 | Fix the environment variable for huggingface cache | closed | Resolve #6256. As far as I tested, `HF_DATASETS_CACHE` was ignored and I could not specify the cache directory at all except for the default one by this environment variable. `HF_HOME` has worked. Perhaps the recent change on file downloading by `huggingface_hub` could affect this bug.
In my testing, I could not sp... | true | 2024-10-05T11:54:35Z | 2024-10-30T23:10:27Z | 2024-10-08T15:45:18Z | torotoki | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7200 | 2024-10-08T15:45:17Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7200 | true | [
"Hi ! yes now `datasets` uses `huggingface_hub` to download and cache files from the HF Hub so you need to use `HF_HOME` (or manually `HF_HUB_CACHE` and `HF_DATASETS_CACHE` if you want to separate HF Hub cached files and cached datasets Arrow files)\r\n\r\nSo in your change I guess it needs to be `HF_HOME` instead ... |
2,566,788,225 | 7,199 | Add with_rank to Dataset.from_generator | open | Adds `with_rank` to `Dataset.from_generator`. As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files using multi-GPU. | true | 2024-10-04T16:51:53Z | 2024-10-04T16:51:53Z | null | muthissar | NONE | https://github.com/huggingface/datasets/pull/7199 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7199 | true | [] |
2,566,064,849 | 7,198 | Add repeat method to datasets | closed | Following up on discussion in #6623 and #7198 I thought this would be pretty useful for my case so had a go at implementing.
My main motivation is to be able to call iterable_dataset.repeat(None).take(samples_per_epoch) to safely avoid timeout issues in a distributed training setting. This would provide a straightfo... | true | 2024-10-04T10:45:16Z | 2025-02-05T16:32:31Z | 2025-02-05T16:32:31Z | alex-hh | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7198 | 2025-02-05T16:32:31Z | 4 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7198 | true | [
"@lhoestq does this look reasonable?",
"updated and added test cases!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7198). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"thanks for ... |
2,565,924,788 | 7,197 | ConnectionError: Couldn't reach 'allenai/c4' on the Hub (ConnectionError): the dataset cannot be downloaded, what is going on? | open | ### Describe the bug
from datasets import load_dataset
print("11")
traindata = load_dataset('ptb_text_only', 'penn_treebank', split='train')
print("22")
valdata = load_dataset('ptb_text_only',
'penn_treebank',
split='validation')
### Steps to reproduce the b... | true | 2024-10-04T09:33:25Z | 2025-02-26T02:26:16Z | null | Mrgengli | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7197 | false | [
"Also cant download \"allenai/c4\", but with different error reported:\r\n```\r\nTraceback (most recent call last): ... |
2,564,218,566 | 7,196 | concatenate_datasets does not preserve shuffling state | open | ### Describe the bug
After concatenate datasets on an iterable dataset, the shuffling state is destroyed, similar to #7156
This means concatenation cant be used for resolving uneven numbers of samples across devices when using iterable datasets in a distributed setting as discussed in #6623
I also noticed th... | true | 2024-10-03T14:30:38Z | 2025-03-18T10:56:47Z | null | alex-hh | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7196 | false | [
"It also does not preserve `split_by_node`, so in the meantime you should call `shuffle` or `split_by_node` AFTER `interleave_datasets` or `concatenate_datasets`"
] |
2,564,070,809 | 7,195 | Add support for 3D datasets | open | See https://huggingface.co/datasets/allenai/objaverse for example | true | 2024-10-03T13:27:44Z | 2024-10-04T09:23:36Z | null | severo | COLLABORATOR | null | null | 3 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7195 | false | [
"maybe related: https://github.com/huggingface/datasets/issues/6388",
"Also look at https://github.com/huggingface/dataset-viewer/blob/f5fd117ceded990a7766e705bba1203fa907d6ad/services/worker/src/worker/job_runners/dataset/modalities.py#L241 which lists the 3D file formats that will assign the 3D modality to a da... |
2,563,364,199 | 7,194 | datasets.exceptions.DatasetNotFoundError for private dataset | closed | ### Describe the bug
The following Python code tries to download a private dataset and fails with the error `datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed.`. Downloading a public dataset doesn't work.
``` py
fro... | true | 2024-10-03T07:49:36Z | 2024-10-03T10:09:28Z | 2024-10-03T10:09:28Z | kdutia | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7194 | false | [
"Actually there is no such dataset available, that is why you are getting that error.",
"Fixed with @kdutia in Slack chat. Generating a new token fixed this issue. "
] |
2,562,392,887 | 7,193 | Support of num_workers (multiprocessing) in map for IterableDataset | open | ### Feature request
Currently, IterableDataset doesn't support setting num_worker in .map(), which results in slow processing here. Could we add support for it? As .map() can be run in the batch fashion (e.g., batch_size is default to 1000 in datasets), it seems to be doable for IterableDataset as the regular Dataset.... | true | 2024-10-02T18:34:04Z | 2024-10-03T09:54:15Z | null | getao | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7193 | false | [
"I was curious about the same - since map is applied on the fly I was assuming that setting num_workers>1 in DataLoader would effectively do the map in parallel, have you tried that?"
] |
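While `IterableDataset.map` does not expose `num_proc`, the batch-wise parallelism this request describes can be sketched with the standard library over any Python iterable; this is a generic sketch, not the `datasets` API:

```python
from concurrent.futures import ThreadPoolExecutor

def batched(iterable, batch_size):
    # Group a stream into lists of up to batch_size items.
    batch = []
    for item in iterable:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

def parallel_map(fn, iterable, batch_size=1000, num_workers=4):
    # Apply fn to each batch in worker threads; Executor.map preserves order.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        for out_batch in pool.map(fn, batched(iterable, batch_size)):
            yield from out_batch

result = list(parallel_map(lambda b: [x * x for x in b],
                           range(10), batch_size=4, num_workers=2))
print(result)  # → [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```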
2,562,289,642 | 7,192 | Add repeat() for iterable datasets | closed | ### Feature request
It would be useful to be able to straightforwardly repeat iterable datasets indefinitely, to provide complete control over starting and ending of iteration to the user.
An IterableDataset.repeat(n) function could do this automatically
### Motivation
This feature was discussed in this iss... | true | 2024-10-02T17:48:13Z | 2025-03-18T10:48:33Z | 2025-03-18T10:48:32Z | alex-hh | CONTRIBUTOR | null | null | 3 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7192 | false | [
"perhaps concatenate_datasets can already be used to achieve almost the same effect? ",
"`concatenate_datasets` does the job when there is a finite number of repetitions, but in case of `.repeat()` forever we need a new logic in `iterable_dataset.py`",
"done in https://github.com/huggingface/datasets/pull/7198"... |
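The requested behaviour can be sketched for a plain Python iterable with `itertools` (illustrative only, not the implementation that landed in the PR mentioned in the comment above):

```python
from itertools import chain, islice

def repeat_iterable(make_iter, n=None):
    # n=None repeats forever, mirroring the proposed IterableDataset.repeat(None);
    # make_iter must return a fresh iterator on each call.
    if n is None:
        def forever():
            while True:
                yield from make_iter()
        return forever()
    return chain.from_iterable(make_iter() for _ in range(n))

# e.g. repeat(None).take(samples_per_epoch) from the motivating use case:
epoch = list(islice(repeat_iterable(lambda: iter([1, 2, 3])), 7))
print(epoch)  # → [1, 2, 3, 1, 2, 3, 1]
```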
2,562,206,949 | 7,191 | Solution to issue: #7080 Modified load_dataset function, so that it prompts the user to select a dataset when subdatasets or splits (train, test) are available | closed | # Feel free to give suggestions please..
### This PR is raised because of issue: https://github.com/huggingface/datasets/issues/7080

### This PR gives solution to https://github.com/huggingface/datasets/issues/7080
1. ... | true | 2024-10-02T17:02:45Z | 2024-11-10T08:48:21Z | 2024-11-10T08:48:21Z | negativenagesh | NONE | https://github.com/huggingface/datasets/pull/7191 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7191 | true | [
"I think the approach presented in https://github.com/huggingface/datasets/pull/6832 is the one we'll be taking.\r\n\r\nAsking user input is not a good idea since `load_dataset` is used a lot in server that don't have someone in front of them to select a split"
] |
2,562,162,725 | 7,190 | Datasets conflicts with fsspec 2024.9 | open | ### Describe the bug
Installing the latest versions of both is not possible:
`pip install "datasets==3.0.1" "fsspec==2024.9.0"`
But using an older version of datasets works:
`pip install "datasets==1.24.4" "fsspec==2024.9.0"`
### Steps to reproduce the bug
`pip install "datasets==3.0.1" "fsspec==2024.9.0"`
#... | true | 2024-10-02T16:43:46Z | 2024-10-10T07:33:18Z | null | cw-igormorgado | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7190 | false | [
"Yes, I need to use the latest version of fsspec and datasets for my usecase. \r\nhttps://github.com/fsspec/s3fs/pull/888#issuecomment-2404204606\r\nhttps://github.com/apache/arrow/issues/34363#issuecomment-2403553473\r\n\r\nlast version where things install without conflict is: 2.14.4\r\n\r\nSo this issue starts f... |
2,562,152,845 | 7,189 | Audio preview in dataset viewer for audio array data without a path/filename | open | ### Feature request
Huggingface has quite a comprehensive set of guides for [audio datasets](https://huggingface.co/docs/datasets/en/audio_dataset). It seems, however, all these guides assume the audio array data to be decoded/inserted into a HF dataset always originates from individual files. The [Audio-dataclass](... | true | 2024-10-02T16:38:38Z | 2024-10-02T17:01:40Z | null | Lauler | NONE | null | null | 0 | 1 | 1 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7189 | false | [] |
2,560,712,689 | 7,188 | Pin multiprocess<0.70.1 to align with dill<0.3.9 | closed | Pin multiprocess<0.70.1 to align with dill<0.3.9.
Note that multiprocess-0.70.1 requires dill-0.3.9: https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17
Fix #7186. | true | 2024-10-02T05:40:18Z | 2024-10-02T06:08:25Z | 2024-10-02T06:08:23Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7188 | 2024-10-02T06:08:23Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7188 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7188). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,560,501,308 | 7,187 | shard_data_sources() got an unexpected keyword argument 'worker_id' | open | ### Describe the bug
```
[rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 238, in __iter__
[rank0]: for key_example in islice(self.generate_examples_fn(**gen_kwags), shard_example_idx_start, None):
[rank0]: File "/home/qinghao/miniconda3/en... | true | 2024-10-02T01:26:35Z | 2024-10-02T01:26:35Z | null | Qinghao-Hu | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7187 | false | [] |
2,560,323,917 | 7,186 | pinning `dill<0.3.9` without pinning `multiprocess` | closed | ### Describe the bug
The [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9` which causes issues when installing `datasets` without backtracking during package version resolution. Is it possible to add a pin for multiprocess so something like `multi... | true | 2024-10-01T22:29:32Z | 2024-10-02T06:08:24Z | 2024-10-02T06:08:24Z | shubhbapna | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7186 | false | [] |
2,558,508,748 | 7,185 | CI benchmarks are broken | closed | Since Aug 30, 2024, CI benchmarks are broken: https://github.com/huggingface/datasets/actions/runs/11108421214/job/30861323975
```
{"level":"error","message":"Resource not accessible by integration","name":"HttpError","request":{"body":"{\"body\":\"<details>\\n<summary>Show benchmarks</summary>\\n\\nPyArrow==8.0.0\\n... | true | 2024-10-01T08:16:08Z | 2024-10-09T16:07:48Z | 2024-10-09T16:07:48Z | albertvillanova | MEMBER | null | null | 1 | 0 | 0 | 0 | null | false | [
"maintenance"
] | https://github.com/huggingface/datasets/issues/7185 | false | [
"Fixed by #7205"
] |
2,556,855,150 | 7,184 | Pin dill<0.3.9 to fix CI | closed | Pin dill<0.3.9 to fix CI for deps-latest.
Note that dill-0.3.9 was released yesterday Sep 29, 2024:
- https://pypi.org/project/dill/0.3.9/
- https://github.com/uqfoundation/dill/releases/tag/0.3.9
Fix #7183. | true | 2024-09-30T14:26:25Z | 2024-09-30T14:38:59Z | 2024-09-30T14:38:57Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7184 | 2024-09-30T14:38:57Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7184 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7184). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,556,789,055 | 7,183 | CI is broken for deps-latest | closed | See: https://github.com/huggingface/datasets/actions/runs/11106149906/job/30853879890
```
=========================== short test summary info ============================
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_caching_on_disk - AssertionError: Lists differ: [{'fi[44 chars] {'filename': '/... | true | 2024-09-30T14:02:07Z | 2024-09-30T14:38:58Z | 2024-09-30T14:38:58Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7183 | false | [] |
2,556,333,671 | 7,182 | Support features in metadata configs | closed | Support features in metadata configs, like:
```
configs:
- config_name: default
features:
- name: id
dtype: int64
- name: name
dtype: string
- name: score
dtype: float64
```
This will allow to avoid inference of data types.
Currently, we allow passing th... | true | 2024-09-30T11:14:53Z | 2024-10-09T16:03:57Z | 2024-10-09T16:03:54Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7182 | 2024-10-09T16:03:54Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7182 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7182). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The CI issue is unrelated:\r\n- #7183"
] |
2,554,917,019 | 7,181 | Fix datasets export to JSON | closed | true | 2024-09-29T12:45:20Z | 2024-11-01T11:55:36Z | 2024-11-01T11:55:36Z | varadhbhatnagar | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7181 | null | 8 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7181 | true | [
"Linked Issue: #7037\r\nIdeas: #7039 ",
"@albertvillanova / @lhoestq any early feedback?\r\n\r\nAFAIK there is no param `orient` in `load_dataset()`. So for orientations other than \"records\", the loading isn't very accurate. Any thoughts?",
"`orient = \"split\"` can also be handled. I will add the changes soo... | |
2,554,244,750 | 7,180 | Memory leak when wrapping datasets into PyTorch Dataset without explicit deletion | closed | ### Describe the bug
I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.
### Steps to reproduce the bug
Steps to reproduce:
Create a PyTorch Dataset wrapper f... | true | 2024-09-28T14:00:47Z | 2024-09-30T12:07:56Z | 2024-09-30T12:07:56Z | iamwangyabin | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7180 | false | [
"> I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.\r\n\r\nDatasets are memory mapped so they work like SWAP memory. In particular as long as you have RAM available the data... |
2,552,387,980 | 7,179 | Support Python 3.11 | closed | Support Python 3.11.
Fix #7178. | true | 2024-09-27T08:55:44Z | 2024-10-08T16:21:06Z | 2024-10-08T16:21:03Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7179 | 2024-10-08T16:21:03Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7179 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7179). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,552,378,330 | 7,178 | Support Python 3.11 | closed | Support Python 3.11: https://peps.python.org/pep-0664/ | true | 2024-09-27T08:50:47Z | 2024-10-08T16:21:04Z | 2024-10-08T16:21:04Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7178 | false | [] |
2,552,371,082 | 7,177 | Fix release instructions | closed | Fix release instructions.
During the last release, I had to make this additional update.
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7177). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,551,025,564 | 7,176 | fix grammar in fingerprint.py | open | I see this error all the time and it was starting to get to me. | true | 2024-09-26T16:13:42Z | 2024-09-26T16:13:42Z | null | jxmorris12 | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7176 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7176 | true | [] |
2,550,957,337 | 7,175 | [FSTimeoutError] load_dataset | closed | ### Describe the bug
When using `load_dataset`to load [HuggingFaceM4/VQAv2](https://huggingface.co/datasets/HuggingFaceM4/VQAv2), I am getting `FSTimeoutError`.
### Error
```
TimeoutError:
The above exception was the direct cause of the following exception:
FSTimeoutError Trac... | true | 2024-09-26T15:42:29Z | 2025-02-01T09:09:35Z | 2024-09-30T17:28:35Z | cosmo3769 | NONE | null | null | 7 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7175 | false | [
"Is this `FSTimeoutError` due to download network issue from remote resource (from where it is being accessed)?",
"It seems to happen for all datasets, not just a specific one, and especially for versions after 3.0. (3.0.0, 3.0.1 have this problem)\r\n\r\nI had the same error on a different dataset, but after dow... |
2,549,892,315 | 7,174 | Set dev version | closed | true | 2024-09-26T08:30:11Z | 2024-09-26T08:32:39Z | 2024-09-26T08:30:21Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7174 | 2024-09-26T08:30:21Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7174 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7174). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,549,882,529 | 7,173 | Release: 3.0.1 | closed | true | 2024-09-26T08:25:54Z | 2024-09-26T08:28:29Z | 2024-09-26T08:26:03Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7173 | 2024-09-26T08:26:03Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7173 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7173). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,549,781,691 | 7,172 | Add torchdata as a regular test dependency | closed | Add `torchdata` as a regular test dependency.
Note that previously, `torchdata` was installed from their repo, and its current main branch (0.10.0.dev) requires Python>=3.9.
Also note they made a recent release: 0.8.0 on Jul 31, 2024.
Fix #7171. | true | 2024-09-26T07:45:55Z | 2024-09-26T08:12:12Z | 2024-09-26T08:05:40Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7172 | 2024-09-26T08:05:40Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7172 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7172). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,549,738,919 | 7,171 | CI is broken: No solution found when resolving dependencies | closed | See: https://github.com/huggingface/datasets/actions/runs/11046967444/job/30687294297
```
Run uv pip install --system -r additional-tests-requirements.txt --no-deps
× No solution found when resolving dependencies:
╰─▶ Because the current Python version (3.8.18) does not satisfy Python>=3.9
and torchdata=... | true | 2024-09-26T07:24:58Z | 2024-09-26T08:05:41Z | 2024-09-26T08:05:41Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/7171 | false | [] |
2,546,944,016 | 7,170 | Support JSON lines with missing columns | closed | Support JSON lines with missing columns.
Fix #7169.
The implemented test raised:
```
datasets.table.CastError: Couldn't cast
age: int64
to
{'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)}
because column names don't match
```
Related to:
- #7160
- #7162 | true | 2024-09-25T05:08:15Z | 2024-09-26T06:42:09Z | 2024-09-26T06:42:07Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7170 | 2024-09-26T06:42:07Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7170 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7170). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,546,894,076 | 7,169 | JSON lines with missing columns raise CastError | closed | JSON lines with missing columns raise CastError:
> CastError: Couldn't cast ... to ... because column names don't match
Related to:
- #7159
- #7161 | true | 2024-09-25T04:43:28Z | 2024-09-26T06:42:08Z | 2024-09-26T06:42:08Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/7169 | false | [] |
2,546,710,631 | 7,168 | sd1.5 diffusers controlnet training script gives new error | closed | ### Describe the bug
This will randomly pop up during training now
```
Traceback (most recent call last):
File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1192, in <module>
main(args)
File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1041, in main
... | true | 2024-09-25T01:42:49Z | 2024-09-30T05:24:03Z | 2024-09-30T05:24:02Z | Night1099 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7168 | false | [
"not sure why the issue is formatting oddly",
"I guess this is a dupe of\r\n\r\nhttps://github.com/huggingface/datasets/issues/7071",
"this turned out to be because of a bad image in dataset"
] |
2,546,708,014 | 7,167 | Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers | closed | ### Describe the bug
```
Map: 6%|██████ | 8000/138120 [19:27<5:16:36, 6.85 examples/s]
Traceback (most recent call last):
File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1416, in <mod... | true | 2024-09-25T01:39:51Z | 2024-09-30T05:28:15Z | 2024-09-30T05:28:04Z | Night1099 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7167 | false | [
"this is happening on large datasets, if anyone happens upon this i was able to fix by changing\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)\r\n```\r\n\r\nto\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, ... |
2,545,608,736 | 7,166 | fix docstring code example for distributed shuffle | closed | close https://github.com/huggingface/datasets/issues/7163 | true | 2024-09-24T14:39:54Z | 2024-09-24T14:42:41Z | 2024-09-24T14:40:14Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7166 | 2024-09-24T14:40:14Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7166 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7166). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,544,972,541 | 7,165 | fix increase_load_count | closed | It had been failing since 3.0 and was therefore not updating download counts on HF or in our dashboard | true | 2024-09-24T10:14:40Z | 2024-09-24T17:31:07Z | 2024-09-24T13:48:00Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7165 | 2024-09-24T13:48:00Z | 3 | 1 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7165 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7165). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I tested a few load_dataset and they do show up in download stats now",
"Thanks for h... |
2,544,757,297 | 7,164 | fsspec.exceptions.FSTimeoutError when downloading dataset | open | ### Describe the bug
I am trying to download the `librispeech_asr` `clean` dataset, which results in a `FSTimeoutError` exception after downloading around 61% of the data.
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("librispeech_asr", "clean")
```
The output is as follows:
> Dow... | true | 2024-09-24T08:45:05Z | 2025-04-09T22:25:56Z | null | timonmerk | NONE | null | null | 7 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7164 | false | [
"Hi ! If you check the dataset loading script [here](https://huggingface.co/datasets/openslr/librispeech_asr/blob/main/librispeech_asr.py) you'll see that it downloads the data from OpenSLR, and apparently their storage has timeout issues. It would be great to ultimately host the dataset on Hugging Face instead.\r\... |
2,542,361,234 | 7,163 | Set explicit seed in iterable dataset ddp shuffling example | closed | ### Describe the bug
In the examples section of the iterable dataset docs https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.IterableDataset
the ddp example shuffles without seeding
```python
from datasets.distributed import split_dataset_by_node
ids = ds.to_iterable_dataset(num_sh... | true | 2024-09-23T11:34:06Z | 2024-09-24T14:40:15Z | 2024-09-24T14:40:15Z | alex-hh | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7163 | false | [
"thanks for reporting !"
] |
2,542,323,382 | 7,162 | Support JSON lines with empty struct | closed | Support JSON lines with empty struct.
Fix #7161.
Related to:
- #7160 | true | 2024-09-23T11:16:12Z | 2024-09-23T11:30:08Z | 2024-09-23T11:30:06Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7162 | 2024-09-23T11:30:06Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7162 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7162). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,541,971,931 | 7,161 | JSON lines with empty struct raise ArrowTypeError | closed | JSON lines with empty struct raise ArrowTypeError: struct fields don't match or are in the wrong order
See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5
> ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<> output fields: struct<pov_c... | true | 2024-09-23T08:48:56Z | 2024-09-25T04:43:44Z | 2024-09-23T11:30:07Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/7161 | false | [] |
2,541,877,813 | 7,160 | Support JSON lines with missing struct fields | closed | Support JSON lines with missing struct fields.
Fix #7159.
The implemented test raised:
```
TypeError: Couldn't cast array of type
struct<age: int64>
to
{'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)}
``` | true | 2024-09-23T08:04:09Z | 2024-09-23T11:09:19Z | 2024-09-23T11:09:17Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7160 | 2024-09-23T11:09:17Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7160 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7160). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,541,865,613 | 7,159 | JSON lines with missing struct fields raise TypeError: Couldn't cast array | closed | JSON lines with missing struct fields raise TypeError: Couldn't cast array of type.
See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5
One would expect that the struct missing fields are added with null values. | true | 2024-09-23T07:57:58Z | 2024-10-21T08:07:07Z | 2024-09-23T11:09:18Z | albertvillanova | MEMBER | null | null | 1 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/7159 | false | [
"Hello,\r\n\r\nI have still the same issue when loading the dataset with the new version:\r\n[https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5](https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5)\r\n\r\nI have downloaded and unzipped the wikimedia/structured-wik... |
2,541,494,765 | 7,158 | google colab ex | closed | true | 2024-09-23T03:29:50Z | 2024-12-20T16:41:07Z | 2024-12-20T16:41:07Z | docfhsp | NONE | https://github.com/huggingface/datasets/pull/7158 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7158 | true | [] | |
2,540,354,890 | 7,157 | Fix zero proba interleave datasets | closed | fix https://github.com/huggingface/datasets/issues/7147 | true | 2024-09-21T15:19:14Z | 2024-09-24T14:33:54Z | 2024-09-24T14:33:54Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7157 | null | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7157 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7157). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,539,360,617 | 7,156 | interleave_datasets resets shuffle state | open | ### Describe the bug
```
import datasets
import torch.utils.data
def gen(shards):
yield {"shards": shards}
def main():
dataset = datasets.IterableDataset.from_generator(
gen,
gen_kwargs={'shards': list(range(25))}
)
dataset = dataset.shuffle(buffer_size=1)
dataset... | true | 2024-09-20T17:57:54Z | 2025-03-18T10:56:25Z | null | jonathanasdf | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7156 | false | [
"It also does preserve `split_by_node`, so in the meantime you should call `shuffle` or `split_by_node` AFTER `interleave_datasets` or `concatenate_datasets`"
] |
2,533,641,870 | 7,155 | Dataset viewer not working! Failure due to more than 32 splits. | closed | Hello guys,
I have a dataset and I didn't know I couldn't upload more than 32 splits. Now, my dataset viewer is not working. I don't have the dataset locally on my node anymore and recreating would take a week. And I have to publish the dataset coming Monday. I read about the practice, how I can resolve it and avoi... | true | 2024-09-18T12:43:21Z | 2024-09-18T13:20:03Z | 2024-09-18T13:20:03Z | sleepingcat4 | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7155 | false | [
"I have fixed it! But I would appreciate a new feature wheere I could iterate over and see what each file looks like. "
] |
2,532,812,323 | 7,154 | Support ndjson data files | closed | Support `ndjson` (Newline Delimited JSON) data files.
Fix #7153. | true | 2024-09-18T06:10:10Z | 2024-09-19T11:25:17Z | 2024-09-19T11:25:14Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7154 | 2024-09-19T11:25:14Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7154 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7154). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for your review, @severo.\r\n\r\nYes, I was aware of this. From internal convers... |
2,532,788,555 | 7,153 | Support data files with .ndjson extension | closed | ### Feature request
Support data files with `.ndjson` extension.
### Motivation
We already support data files with `.jsonl` extension.
### Your contribution
I am opening a PR. | true | 2024-09-18T05:54:45Z | 2024-09-19T11:25:15Z | 2024-09-19T11:25:15Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7153 | false | [] |
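For reference, `.ndjson` is the same wire format as `.jsonl`: one complete JSON object per line. A minimal stdlib sketch of parsing it (sample data is made up):

```python
import json
from io import StringIO

# newline-delimited JSON: one JSON object per line
ndjson = '{"id": 1, "text": "a"}\n{"id": 2, "text": "b"}\n'

records = [json.loads(line) for line in StringIO(ndjson) if line.strip()]
assert records == [{"id": 1, "text": "a"}, {"id": 2, "text": "b"}]
```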
2,527,577,048 | 7,151 | Align filename prefix splitting with WebDataset library | closed | Align filename prefix splitting with WebDataset library.
This PR uses the same `base_plus_ext` function as the one used by the `webdataset` library.
Fix #7150.
Related to #7144. | true | 2024-09-16T06:07:39Z | 2024-09-16T15:26:36Z | 2024-09-16T15:26:34Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7151 | 2024-09-16T15:26:34Z | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7151 | true | [] |
2,527,571,175 | 7,150 | WebDataset loader splits keys differently than WebDataset library | closed | As reported by @ragavsachdeva (see discussion here: https://github.com/huggingface/datasets/pull/7144#issuecomment-2348307792), our webdataset loader is not aligned with the `webdataset` library when splitting keys from filenames.
For example, we get a different key splitting for filename `/some/path/22.0/1.1.png`:
... | true | 2024-09-16T06:02:47Z | 2024-09-16T15:26:35Z | 2024-09-16T15:26:35Z | albertvillanova | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [
"bug"
] | https://github.com/huggingface/datasets/issues/7150 | false | [] |
2,524,497,448 | 7,149 | Datasets Unknown Keyword Argument Error - task_templates | closed | ### Describe the bug
Issue
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
Gives error
```
TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates'
```
A simple downgrade to lower `data... | true | 2024-09-13T10:30:57Z | 2025-03-06T07:11:55Z | 2024-09-13T14:10:48Z | varungupta31 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7149 | false | [
"Thanks, for reporting.\r\n\r\nWe have been fixing most Hub datasets to remove the deprecated (and now non-supported) task templates, but we missed the \"facebook/winoground\".\r\n\r\nIt is fixed now: https://huggingface.co/datasets/facebook/winoground/discussions/8\r\n\r\n",
"Hello @albertvillanova \r\n\r\nI got... |
2,523,833,413 | 7,148 | Bug: Error when downloading mteb/mtop_domain | closed | ### Describe the bug
When downloading the dataset "mteb/mtop_domain", I ran into the following error:
```
Traceback (most recent call last):
File "/share/project/xzy/test/test_download.py", line 3, in <module>
data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True)
File "/opt/conda/lib/pytho... | true | 2024-09-13T04:09:39Z | 2024-09-14T15:11:35Z | 2024-09-14T15:11:35Z | ZiyiXia | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7148 | false | [
"Could you please try with `force_redownload` instead?\r\nEDIT:\r\n```python\r\ndata = load_dataset(\"mteb/mtop_domain\", \"en\", download_mode=\"force_redownload\")\r\n```",
"Seems the error is still there",
"I am not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\... |
2,523,129,465 | 7,147 | IterableDataset strange deadlock | closed | ### Describe the bug
```
import datasets
import torch.utils.data
num_shards = 1024
def gen(shards):
for shard in shards:
if shard < 25:
yield {"shard": shard}
def main():
dataset = datasets.IterableDataset.from_generator(
gen,
gen_kwargs={"shards": lis... | true | 2024-09-12T18:59:33Z | 2024-09-23T09:32:27Z | 2024-09-21T17:37:34Z | jonathanasdf | NONE | null | null | 6 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7147 | false | [
"Yes `interleave_datasets` seems to have an issue with shuffling, could you open a new issue on this ?\r\n\r\nThen regarding the deadlock, it has to do with interleave_dataset with probabilities=[1, 0] with workers that may contain an empty dataset in first position (it can be empty since you distribute 1024 shard ... |
2,519,820,162 | 7,146 | Set dev version | closed | true | 2024-09-11T13:53:27Z | 2024-09-12T04:34:08Z | 2024-09-12T04:34:06Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7146 | 2024-09-12T04:34:06Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7146 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7146). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,519,789,724 | 7,145 | Release: 3.0.0 | closed | true | 2024-09-11T13:41:47Z | 2024-09-11T13:48:42Z | 2024-09-11T13:48:41Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7145 | 2024-09-11T13:48:41Z | 1 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7145 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7145). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | |
2,519,393,560 | 7,144 | Fix key error in webdataset | closed | I was running into
```
example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
KeyError: 'png'
```
The issue is that a filename may contain multiple "." characters, e.g. `22.05.png`. Changing `split` to `rsplit` fixes it.
Related https://github.com/huggingface/datasets/issues/68... | true | 2024-09-11T10:50:17Z | 2025-01-15T10:32:43Z | 2024-09-13T04:31:37Z | ragavsachdeva | NONE | https://github.com/huggingface/datasets/pull/7144 | null | 8 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7144 | true | [
"hi ! What version of `datasets` are you using ? Is this issue also happening with `datasets==3.0.0` ?\r\nAsking because we made sure to replicate the official webdataset logic, which is to use the latest dot as separator between the sample base name and the key",
"Hi, yes this is still a problem on `datasets==3.... |
2,512,327,211 | 7,143 | Modify add_column() to optionally accept a FeatureType as param | closed | Fix #7142.
**Before (Add + Cast)**:
```
from datasets import load_dataset, Value
ds = load_dataset("rotten_tomatoes", split="test")
lst = [i for i in range(len(ds))]
ds = ds.add_column("new_col", lst)
# Assigns int64 to new_col by default
print(ds.features)
ds = ds.cast_column("new_col", Value(dtype="u... | true | 2024-09-08T10:56:57Z | 2024-09-17T06:01:23Z | 2024-09-16T15:11:01Z | varadhbhatnagar | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7143 | 2024-09-16T15:11:01Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7143 | true | [
"Requesting review @lhoestq \r\nI will also update the docs if this looks good.",
"Cool ! maybe you can rename the argument `feature` and with type `FeatureType` ? This way it would work the same way as `.cast_column()` ?",
"@lhoestq Since there is no way to get a `pyarrow.Schema` from a `FeatureType`, I had to... |
2,512,244,938 | 7,142 | Specifying datatype when adding a column to a dataset. | closed | ### Feature request
There should be a way to specify the datatype of a column in `datasets.add_column()`.
### Motivation
To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desi... | true | 2024-09-08T07:34:24Z | 2024-09-17T03:46:32Z | 2024-09-17T03:46:32Z | varadhbhatnagar | CONTRIBUTOR | null | null | 1 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7142 | false | [
"#self-assign"
] |
2,510,797,653 | 7,141 | Older datasets throwing safety errors with 2.21.0 | closed | ### Describe the bug
The dataset loading was throwing some safety errors for this popular dataset `wmt14`.
[in]:
```
import datasets
# train_data = datasets.load_dataset("wmt14", "de-en", split="train")
train_data = datasets.load_dataset("wmt14", "de-en", split="train")
val_data = datasets.load_dataset(... | true | 2024-09-06T16:26:30Z | 2024-09-06T21:14:14Z | 2024-09-06T19:09:29Z | alvations | NONE | null | null | 17 | 29 | 26 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7141 | false | [
"I am also getting this error with this dataset: https://huggingface.co/datasets/google/IFEval",
"Me too, didn't have this issue few hours ago.",
"same observation. I even downgraded `datasets==2.20.0` and `huggingface_hub==0.23.5` leading me to believe it's an issue on the server.\r\n\r\nany known workarounds?... |
2,508,078,858 | 7,139 | Use load_dataset to load imagenet-1K But find a empty dataset | open | ### Describe the bug
```python
def get_dataset(data_path, train_folder="train", val_folder="val"):
traindir = os.path.join(data_path, train_folder)
valdir = os.path.join(data_path, val_folder)
def transform_val_examples(examples):
transform = Compose([
Resize(256),
... | true | 2024-09-05T15:12:22Z | 2024-10-09T04:02:41Z | null | fscdc | NONE | null | null | 2 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7139 | false | [
"Imagenet-1k is a gated dataset which means you’ll have to agree to share your contact info to access it. Have you tried this yet? Once you have, you can sign in with your user token (you can find this in your Hugging Face account settings) when prompted by running.\r\n\r\n```\r\nhuggingface-cli login\r\ntrain_set... |
2,507,738,308 | 7,138 | Cache only changed columns? | open | ### Feature request
Cache only the actual changes to the dataset, i.e. the changed columns.
### Motivation
I realized that caching actually saves the complete dataset again.
This is especially problematic for image datasets: if one only wants to change another column, e.g. some metadata, one still has to save 5 TB again.
#... | true | 2024-09-05T12:56:47Z | 2024-09-20T13:27:20Z | null | Modexus | CONTRIBUTOR | null | null | 2 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7138 | false | [
"so I guess a workaround to this is to simply remove all columns except the ones to cache and then add them back with `concatenate_datasets(..., axis=1)`.",
"yes this is the right workaround. We're keeping the cache like this to make it easier for people to delete intermediate cache files"
] |
2,506,851,048 | 7,137 | [BUG] dataset_info sequence unexpected behavior in README.md YAML | open | ### Describe the bug
When working on the `dataset_info` YAML, I find that my data column with format `list[dict[str, str]]` cannot be encoded correctly.
My data looks like
```
{"answers":[{"text": "ADDRESS", "label": "abc"}]}
```
My `dataset_info` in README.md is:
```
dataset_info:
- config_name: default
feature... | true | 2024-09-05T06:06:06Z | 2024-09-09T15:55:50Z | null | ain-soph | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7137 | false | [
"The non-sequence case works well (`dict[str, str]` instead of `list[dict[str, str]]`), which makes me believe it shall be a bug for `sequence` and my proposed behavior shall be expected.\r\n```\r\ndataset_info:\r\n- config_name: default\r\n features:\r\n - name: answers\r\n dtype:\r\n - name: text\r\n ... |
2,506,115,857 | 7,136 | Do not consume unnecessary memory during sharding | open | When sharding `IterableDataset`s, a temporary list is created that is then indexed. With standard `islice` functionality, there is no need to create a temporary list of a potentially very large step/world size, so we avoid it.
```shell
pytest tests/test_distributed.py -k iterable
```
Runs successfully. | true | 2024-09-04T19:26:06Z | 2024-09-04T19:28:23Z | null | janEbert | NONE | https://github.com/huggingface/datasets/pull/7136 | null | 0 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7136 | true | [] |
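The replacement the PR describes is plain `itertools.islice` striding — each rank lazily keeps every `world_size`-th shard without materializing a list (the rank and world-size values below are arbitrary):

```python
from itertools import islice

shards = range(10)       # e.g. shard indices of an IterableDataset
rank, world_size = 1, 4  # this worker keeps every world_size-th shard

# islice yields lazily: no temporary list of size ~len(shards)
mine = list(islice(shards, rank, None, world_size))
assert mine == [1, 5, 9]
```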
2,503,318,328 | 7,135 | Bug: Type Mismatch in Dataset Mapping | open | # Issue: Type Mismatch in Dataset Mapping
## Description
There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of ... | true | 2024-09-03T16:37:01Z | 2024-09-05T14:09:05Z | null | marko1616 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7135 | false | [
"By the way, following code is working. This show the inconsistentcy.\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# Original data\r\ndata = {\r\n 'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],\r\n 'label': [0, 1, 0, 1, 1, 0]\r\n}\r\n\r\n# Creating a Dataset object\r\ndataset = Dataset.from_dic... |
2,499,484,041 | 7,134 | Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown | open | ### Describe the bug
Background: Digital images are often represented as a (Height, Width, Channel) tensor. This is the same for huggingface datasets that contain images. These images are loaded in Pillow containers which offer, for example, the `.convert` method.
I can convert an image from a (H,W,3) shape to a... | true | 2024-09-01T13:55:41Z | 2024-09-02T10:34:53Z | null | navidmafi | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7134 | false | [] |
2,496,474,495 | 7,133 | remove filecheck to enable symlinks | closed | Enables streaming from local symlinks #7083
@lhoestq | true | 2024-08-30T07:36:56Z | 2024-12-24T14:25:22Z | 2024-12-24T14:25:22Z | fschlatt | CONTRIBUTOR | https://github.com/huggingface/datasets/pull/7133 | 2024-12-24T14:25:22Z | 6 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7133 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7133). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The CI is failing, looks like it breaks imagefolder loading.\r\n\r\nI just checked fssp... |
2,494,510,464 | 7,132 | Fix data file module inference | open | I saved a dataset with two splits to disk with `DatasetDict.save_to_disk`. The train split is bigger and ended up in 10 shards, whereas the test split resulted in only 1 shard.
Now when trying to load the dataset, an error is raised that not all splits have the same data format:
> ValueError: Couldn't infer the same da... | true | 2024-08-29T13:48:16Z | 2024-09-02T19:52:13Z | null | HennerM | NONE | https://github.com/huggingface/datasets/pull/7132 | null | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7132 | true | [
"Hi ! datasets saved using `save_to_disk` should be loaded with `load_from_disk` ;)",
"It is convenient to just pass in a path to a local dataset or one from the hub and use the same function to load it. Is it not possible to get this fix merged in to allow this? ",
"We can modify `save_to_disk` to write the d... |
2,491,942,650 | 7,129 | Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output | closed | In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:
````
from datasets import ClassLabel, Features
features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
... | true | 2024-08-28T12:27:48Z | 2024-12-06T11:32:02Z | 2024-12-06T11:32:02Z | sergiopaniego | MEMBER | null | null | 0 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7129 | false | [] |
2,490,274,775 | 7,128 | Filter Large Dataset Entry by Entry | open | ### Feature request
I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process.
Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset.... | true | 2024-08-27T20:31:09Z | 2024-10-07T23:37:44Z | null | QiyaoWei | NONE | null | null | 4 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7128 | false | [
"Hi ! you can do\r\n\r\n```python\r\nfiltered_dataset = dataset.filter(filter_function)\r\n```\r\n\r\non a subset:\r\n\r\n```python\r\nfiltered_subset = dataset.select(range(10_000)).filter(filter_function)\r\n```\r\n",
"Jumping on this as it seems relevant - when I use the `filter` method, it often results in an... |
2,486,524,966 | 7,127 | Caching shuffles by np.random.Generator results in unintuitive behavior | open | ### Describe the bug
Create a dataset. Save it to disk. Load from disk. Shuffle, using a `np.random.Generator`. Iterate. Shuffle again. Iterate. The iteration results differ since the supplied np.random.Generator has progressed between the shuffles.
Load dataset from disk again. Shuffle and Iterate. See same result ... | true | 2024-08-26T10:29:48Z | 2025-03-10T17:12:57Z | null | el-hult | NONE | null | null | 1 | 0 | 0 | 0 | null | false | [] | https://github.com/huggingface/datasets/issues/7127 | false | [
"I first thought this was a mistake of mine, and also posted on stack overflow. https://stackoverflow.com/questions/78913797/iterating-a-huggingface-dataset-from-disk-using-generator-seems-broken-how-to-d \r\n\r\nIt seems to me the issue is the caching step in \r\n\r\nhttps://github.com/huggingface/datasets/blob/be... |
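The underlying NumPy behavior is easy to reproduce without `datasets`: a `np.random.Generator` advances its internal state on every call, so two successive shuffles come from different states, while re-creating the generator from the same seed replays the first result. A small sketch (seed value is arbitrary):

```python
import numpy as np

gen = np.random.default_rng(42)
first = gen.permutation(10)
second = gen.permutation(10)  # state has advanced: almost surely a new order

# Rebuilding the generator from the same seed replays the first shuffle
replay = np.random.default_rng(42).permutation(10)
```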
2,485,939,495 | 7,126 | Disable implicit token in CI | closed | Disable implicit token in CI.
This PR allows running CI tests locally without implicitly using the local user HF token. For example, run locally the tests in:
- #7124 | true | 2024-08-26T05:29:46Z | 2024-08-26T06:05:01Z | 2024-08-26T05:59:15Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7126 | 2024-08-26T05:59:15Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7126 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7126). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,485,912,246 | 7,125 | Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport | closed | Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport. | true | 2024-08-26T05:09:35Z | 2024-08-26T05:33:15Z | 2024-08-26T05:27:09Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7125 | 2024-08-26T05:27:09Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7125 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7125). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,485,890,442 | 7,124 | Test get_dataset_config_info with non-existing/gated/private dataset | closed | Test get_dataset_config_info with non-existing/gated/private dataset.
Related to:
- #7109
See also:
- https://github.com/huggingface/dataset-viewer/pull/3037: https://github.com/huggingface/dataset-viewer/pull/3037/commits/bb1a7e00c53c242088597cab6572e4fd57797ecb | true | 2024-08-26T04:53:59Z | 2024-08-26T06:15:33Z | 2024-08-26T06:09:42Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7124 | 2024-08-26T06:09:42Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7124 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7124). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,484,003,937 | 7,123 | Make dataset viewer more flexible in displaying metadata alongside images | open | ### Feature request
To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is th... | true | 2024-08-23T22:56:01Z | 2024-10-17T09:13:47Z | null | egrace479 | NONE | null | null | 3 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7123 | false | [
"Note that you can already have one directory per subset just for the metadata, e.g.\r\n\r\n```\r\nconfigs:\r\n - config_name: subset0\r\n data_files:\r\n - subset0/metadata.csv\r\n - images/*.jpg\r\n - config_name: subset1\r\n data_files:\r\n - subset1/metadata.csv\r\n - images/*.jpg\r\... |
2,482,491,258 | 7,122 | [interleave_dataset] sample batches from a single source at a time | open | ### Feature request
interleave_dataset and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar man... | true | 2024-08-23T07:21:15Z | 2024-08-23T07:21:15Z | null | memray | NONE | null | null | 0 | 0 | 0 | 0 | null | false | [
"enhancement"
] | https://github.com/huggingface/datasets/issues/7122 | false | [] |
2,480,978,483 | 7,121 | Fix typed examples iterable state dict | closed | fix https://github.com/huggingface/datasets/issues/7085 as noted by @VeryLazyBoy and reported by @AjayP13 | true | 2024-08-22T14:45:03Z | 2024-08-22T14:54:56Z | 2024-08-22T14:49:06Z | lhoestq | MEMBER | https://github.com/huggingface/datasets/pull/7121 | 2024-08-22T14:49:06Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7121 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7121). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |
2,480,674,237 | 7,120 | don't mention the script if trust_remote_code=False | closed | See https://huggingface.co/datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes for example. The error is:
```
FileNotFoundError: Couldn't find a dataset script at /src/services/worker/Omega02gdfdd/bioclip-demo-zero-shot-mistakes/bioclip-demo-zero-shot-mistakes.py or any data file in the same directory. Couldn't f... | true | 2024-08-22T12:32:32Z | 2024-08-22T14:39:52Z | 2024-08-22T14:33:52Z | severo | COLLABORATOR | https://github.com/huggingface/datasets/pull/7120 | 2024-08-22T14:33:52Z | 3 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7120 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7120). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Note that in this case, we could even expect this kind of message:\r\n\r\n```\r\nDataFi... |
2,477,766,493 | 7,119 | Install transformers with numpy-2 CI | closed | Install transformers with numpy-2 CI.
Note that transformers no longer pins numpy < 2 since transformers-4.43.0:
- https://github.com/huggingface/transformers/pull/32018
- https://github.com/huggingface/transformers/releases/tag/v4.43.0 | true | 2024-08-21T11:14:59Z | 2024-08-21T11:42:35Z | 2024-08-21T11:36:50Z | albertvillanova | MEMBER | https://github.com/huggingface/datasets/pull/7119 | 2024-08-21T11:36:50Z | 2 | 0 | 0 | 0 | false | false | [] | https://github.com/huggingface/datasets/pull/7119 | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7119). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>... |