html_url stringlengths 48 51 | title stringlengths 5 268 | comments stringlengths 63 51.8k | body stringlengths 18 36.2k | comment_length int64 16 1.52k | text stringlengths 164 54.1k |
|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/5597 | in-place dataset update | We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.
In your case the old dataset is garbage collected pretty quickly so you won't have memory issues.
Note that datasets loaded from disk (memory mapped) are not loaded in memory,... | ### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | 63 | in-place dataset update
### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets im... |
https://github.com/huggingface/datasets/issues/5597 | in-place dataset update | Thank you for your detailed reply.
> In your case the old dataset is garbage collected pretty quickly so you won't have memory issues.
I understand this, but it still copies the old dataset to create the new one, is this correct? So maybe it is not memory-consuming, but time-consuming? | ### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | 50 | in-place dataset update
### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets im... |
https://github.com/huggingface/datasets/issues/5597 | in-place dataset update | Indeed, and because of that it is more efficient to add multiple rows at once instead of one by one, using `concatenate_datasets` for example. | ### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | 24 | in-place dataset update
### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets im... |
https://github.com/huggingface/datasets/issues/5596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | Apparently some JSON objects have a `"labels"` field. Since this field is not present in every object, you must specify all the fields types in the README.md
EDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data | ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr... | 48 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
... |
https://github.com/huggingface/datasets/issues/5596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | We've updated the dataset to remove the extra `labels` field from some files, closing this issue. Thanks! | ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr... | 17 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
... |
https://github.com/huggingface/datasets/issues/5594 | Error while downloading the xtreme udpos dataset | Hi! I cannot reproduce this error on my machine.
The raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:
```python
train_dataset = load_dataset('xtreme', 'udpos.English', split="train", cache_dir=args.cache_dir, download_mode... | ### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1... | 45 | Error while downloading the xtreme udpos dataset
### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtre... |
https://github.com/huggingface/datasets/issues/5586 | .sort() is broken when used after .filter(), only in 2.10.0 | Thanks for reporting and thanks @mariosasko for fixing ! We just did a patch release `2.10.1` with the fix | ### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError.
... | 19 | .sort() is broken when used after .filter(), only in 2.10.0
### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of t... |
https://github.com/huggingface/datasets/issues/5585 | Cache is not transportable | Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.
In particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because some hash... | ### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move to cache to the host Windows machine, thereby sharing the downloads.
I... | 85 | Cache is not transportable
### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move to cache to the host Windows machine, thereb... |
https://github.com/huggingface/datasets/issues/5584 | Unable to load coyo700M dataset | Hi @manuaero
Thank you for your interest in the COYO dataset.
Our dataset provides the img-url and alt-text in the form of a parquet, so to utilize the coyo dataset you will need to download it directly.
We provide a [guide](https://github.com/kakaobrain/coyo-dataset/blob/main/download/README.md) to download,... | ### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and preparing dataset parquet/kakaobrain--coy... | 49 | Unable to load coyo700M dataset
### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and prepari... |
https://github.com/huggingface/datasets/issues/5577 | Cannot load `the_pile_openwebtext2` | Hi! I've merged a PR to use `int32` instead of `int8` for `reddit_scores`, so it should work now.
| ### Describe the bug
I met the same bug mentioned in #3053 which is never fixed. Because several `reddit_scores` are larger than `int8` even `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
```python3
from datasets import load... | 18 | Cannot load `the_pile_openwebtext2`
### Describe the bug
I met the same bug mentioned in #3053 which is never fixed. Because several `reddit_scores` are larger than `int8` even `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
... |
https://github.com/huggingface/datasets/issues/5575 | Metadata for each column | Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level, so implementing this should be straightforward. The API I have in mind would work as follows:
```python
col_feature = Value("string", metadata="Some column-level metadata")
features = Features({"col": col_featur... | ### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will bring the motivation by an example, lets say we are experimenting with embedding produced by some image encoder network, and we want to iterate through a couple of preprocessing and see which on... | 47 | Metadata for each column
### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will bring the motivation by an example, lets say we are experimenting with embedding produced by some image encoder network, and we want to iterate through a couple of pre... |
https://github.com/huggingface/datasets/issues/5574 | c4 dataset streaming fails with `FileNotFoundError` | Also encountering this issue for every dataset I try to stream! Installed datasets from main:
```
- `datasets` version: 2.10.1.dev0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
```
Repro:
```python
from datasets import load_dataset
spig... | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | 655 | c4 dataset streaming fails with `FileNotFoundError`
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset... |
https://github.com/huggingface/datasets/issues/5574 | c4 dataset streaming fails with `FileNotFoundError` | This problem now appears again, this time with an underlying HTTP 502 status code:
```
aiohttp.client_exceptions.ClientResponseError: 502, message='Bad Gateway', url=URL('https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-validation.00002-of-00008.json.gz')
``` | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | 21 | c4 dataset streaming fails with `FileNotFoundError`
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset... |
https://github.com/huggingface/datasets/issues/5574 | c4 dataset streaming fails with `FileNotFoundError` | Re-executing a minute later, the underlying cause is an HTTP 403 status code, as reported yesterday:
```
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/4bf6b248b0f910dcde2cdf2118d6369d8208c8f9515ec29ab73e531f380b18e2?response-cont... | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | 22 | c4 dataset streaming fails with `FileNotFoundError`
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset... |
https://github.com/huggingface/datasets/issues/5571 | load_dataset fails for JSON in windows | Hi!
You need to pass an input json file explicitly as `data_files` to `load_dataset` to avoid this error:
```python
ds = load_dataset("json", data_files=args.input_json)
```
| ### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of python file is di... | 24 | load_dataset fails for JSON in windows
### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local Py... |
https://github.com/huggingface/datasets/issues/5570 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub | Hi, thanks for the feedback! Would it help to add a tip or note saying the dataset is gated and you need to accept the license before downloading it? | ### Describe the bug
When calling ```load_dataset('imagenet-1k')``` FileNotFoundError is raised, if not logged in and if logged in with huggingface-cli but not having accepted the licence on the hub. There is no error once accepting.
### Steps to reproduce the bug
```
from datasets import load_dataset
imagenet =... | 29 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub
### Describe the bug
When calling ```load_dataset('imagenet-1k')``` FileNotFoundError is raised, if not logged in and if logged in with huggingface-cli but not having accepted the licence on the hub. There is no error once acce... |
https://github.com/huggingface/datasets/issues/5568 | dataset.to_iterable_dataset() loses useful info like dataset features | Hi ! Oh good catch. I think the features should be passed to `IterableDataset.from_generator()` in `to_iterable_dataset()` indeed.
Setting this as a good first issue if someone would like to contribute, otherwise we can take care of it :) | ### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata like the features.
These metadata are useful if you want to interleav... | 38 | dataset.to_iterable_dataset() loses useful info like dataset features
### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata l... |
https://github.com/huggingface/datasets/issues/5568 | dataset.to_iterable_dataset() loses useful info like dataset features | seems like the feature parameter is missing from `return IterableDataset.from_generator(Dataset._iter_shards, gen_kwargs={"shards": shards})` hence it defaults to None. | ### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata like the features.
These metadata are useful if you want to interleav... | 17 | dataset.to_iterable_dataset() loses useful info like dataset features
### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata l... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | Hi ! The indices mapping is written in the same cachedirectory as your dataset.
Can you run this to show your current cache directory ?
```python
print(train_dataset.cache_files)
``` | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 28 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | ```
[{'filename': '.../train/dataset.arrow'}, {'filename': '.../train/dataset.arrow'}]
```
These are the actual paths where `.hf` files are stored. | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 16 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | I'm not aware of any `.hf` file ? What are you referring to ?
Also the error says "Protocol unknown: parent". Is there a chance you may have ended up with a path that contains this string `parent://` ? | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 39 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | I figured out why the issue was occuring but don't know the long-term fix.
The dataset I was trying to shuffle was loaded from a saved file which had `::` delimiter in filename. When I try with the exact same file without `::` in filename, it works as expected.
Quick fix is to not use colons in filename. But if this ... | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 76 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... |
https://github.com/huggingface/datasets/issues/5546 | Downloaded datasets do not cache at $HF_HOME | Hi ! Can you make sure you set `HF_HOME` before importing `datasets` ?
Then you can print
```python
print(datasets.config.HF_CACHE_HOME)
print(datasets.config.HF_DATASETS_CACHE)
``` | ### Describe the bug
In the huggingface course (https://huggingface.co/course/chapter3/2?fw=pt) it said that if we set HF_HOME, downloaded datasets would be cached at specified address but it does not. downloaded models from checkpoint names are downloaded and cached at HF_HOME but this is not the case for datasets, t... | 21 | Downloaded datasets do not cache at $HF_HOME
### Describe the bug
In the huggingface course (https://huggingface.co/course/chapter3/2?fw=pt) it said that if we set HF_HOME, downloaded datasets would be cached at specified address but it does not. downloaded models from checkpoint names are downloaded and cached at H... |
https://github.com/huggingface/datasets/issues/5543 | the pile datasets url seems to change back | Thanks for reporting, @wjfwzzc.
I am transferring this issue to the corresponding dataset on the Hub: https://huggingface.co/datasets/bookcorpusopen/discussions/1 | ### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
... | 17 | the pile datasets url seems to change back
### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_datase... |
https://github.com/huggingface/datasets/issues/5543 | the pile datasets url seems to change back | Thank you. All fixes are done:
- [x] https://huggingface.co/datasets/bookcorpusopen/discussions/2
- [x] https://huggingface.co/datasets/the_pile/discussions/1
- [x] https://huggingface.co/datasets/the_pile_books3/discussions/1
- [x] https://huggingface.co/datasets/the_pile_openwebtext2/discussions/2
- [x] https://... | ### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
... | 21 | the pile datasets url seems to change back
### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_datase... |
https://github.com/huggingface/datasets/issues/5541 | Flattening indices in selected datasets is extremely inefficient | Running the script above on the branch https://github.com/huggingface/datasets/pull/5542 results in the expected behaviour:
```
Num chunks for original ds: 1
Original ds save/load
save_to_disk -- RAM memory used: 0.671875 MB -- Total time: 0.255265 s
load_from_disk -- RAM memory used: 42.796875 MB -- Total time: 0... | ### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. Thi... | 117 | Flattening indices in selected datasets is extremely inefficient
### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat datase... |
https://github.com/huggingface/datasets/issues/5539 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number | Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:
```python
from datasets import load_dataset
import torch
dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='train')
def t(bat... | ### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/py... | 78 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib... |
https://github.com/huggingface/datasets/issues/5539 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number | > Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:
>
> ```python
> from datasets import load_dataset
> import torch
>
> dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='tr... | ### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/py... | 104 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib... |
https://github.com/huggingface/datasets/issues/5538 | load_dataset in seaborn is not working for me. getting this error. | Hi! `seaborn`'s `load_dataset` pulls datasets from [here](https://github.com/mwaskom/seaborn-data) and not from our Hub, so this issue is not related to our library in any way and should be reported in their repo instead. | TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selector, req.data, headers,
1347 encode_chu... | 32 | load_dataset in seaborn is not working for me. getting this error.
TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selec... |
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | Hi ! `enc` is not hashable:
```python
import tiktoken
from datasets.fingerprint import Hasher
enc = tiktoken.get_encoding("gpt2")
Hasher.hash(enc)
# raises TypeError: cannot pickle 'builtins.CoreBPE' object
```
It happens because it's not picklable, and because of that it's not possible to cache the result of... | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 83 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... |
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | @lhoestq Thank you for the explanation and advice. Will relay all of this to the repo where this (non)issue arose.
Great job with huggingface! | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 24 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... |
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | Just a heads up that when I'm trying to use TikToken along with the a given Dataset `.map()` method, I am still met with the following error :
```
File "/opt/conda/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/opt/conda/lib/python3.8/... | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 60 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... |
https://github.com/huggingface/datasets/issues/5534 | map() breaks at certain dataset size when using Array3D | Hi! This code works for me locally or in Colab. What's the output of `python -c "import pyarrow as pa; print(pa.__version__)"` when you run it inside your environment? | ### Describe the bug
`map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with the following exception:
```
Traceback (most recent cal... | 28 | map() breaks at certain dataset size when using Array3D
### Describe the bug
`map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with... |
https://github.com/huggingface/datasets/issues/5532 | train_test_split in arrow_dataset does not ensure to keep single classes in test set | Hi! You can get this behavior by specifying `stratify_by_column="label"` in `train_test_split`.
This is the full example:
```python
import numpy as np
from datasets import Dataset, ClassLabel
data = [
{'label': 0, 'text': "example1"},
{'label': 1, 'text': "example2"},
{'label': 1, 'text': "examp... | ### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will be in the test set. thus will never be considered for training.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
... | 88 | train_test_split in arrow_dataset does not ensure to keep single classes in test set
### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will be in the test set. thus will never be considered for training.
##... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | Thanks for reporting, @TJ-Solergibert.
We cannot access your Colab notebook: `There was an error loading this notebook. Ensure that the file is accessible and try again.`
Could you please make it publicly accessible?
| ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 33 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | I swear it's public, I've checked the settings and I've been able to open it in incognito mode.
Notebook: https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?usp=sharing
Anyway, this is the code to reproduce the error:
```python3
from datasets import ClassLabel
from datasets import load... | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 226 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | Thanks, @TJ-Solergibert. I can access your notebook now. Maybe it was just a temporary issue.
At first sight, it seems something related to your data: maybe some of the examples do not have all the transcriptions for all the languages. Then, some of them are null when unrolled. And when trying to concatenate with th... | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 80 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | See, in this example, "nl" and "ro" transcripts are null:
```python
>>> europarl_ds["test"][:1]
{'original_speech': ['− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta'],
'original_lang... | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 458 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | You can fix this issue by forcing the cast of None to str by hand:
- If you replace this line:
```python
source_t += batch[src_lang]
```
- With this line (because the batch size is 1):
```python
source_t += [str(batch[src_lang][0])]
```
- Or with this line (if the batch size were larger than 1):
```python
so... | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 63 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | Problem solved! Thanks @albertvillanova, now I have even increased the batch size and it's crazy fast :rocket: ! | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 18 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | Hi! This behavior stems from these lines:
https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L45-L46
I agree we should preserve the original type whenever possible and downcast explicitly with a warning.
@lhoestq Do you remember why we ... | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 38 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | I was also wondering why the default type logic is needed. Me just deleting it is probably too naive of a solution. | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 22 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | Hmm I think the idea was to end up with the usual default precision for deep learning models - no matter how the data was stored or where it comes from.
For example in NLP we store tokens using an optimized low precision to save disk space, but when we set the format to `torch` we actually need to get `int64`. Altho... | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 123 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | Unfortunately removing it for integers is a breaking change for most `transformers` + `datasets` users for NLP (which is a common case). Removing it for floats is a breaking change for `transformers` + `datasets` for ASR as well. And it also is a breaking change for the other users relying on this behavior.
Therefor... | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 102 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | @lhoestq It should be fine to remove this conversion in Datasets 3.0, no? For now, we can warn the user (with a log message) about the future change when the default type is changed. | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 34 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | Let's see with the transformers team if it sounds reasonable ? We'd have to fix multiple example scripts though.
If it's not ok we can also explore keeping this behavior only for tokens and audio data. | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 36 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | IMO being coupled with Transformers can lead to unexpected behavior when one tries to use our lib without pairing it with Transformers, so I think it's still important to "fix" this, even if it means we will need to update Transformers' example scripts afterward.
| ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 44 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | For others that run into the same issue: A temporary workaround for me is this:
```python
def numpy_transform(batch):
return {key: np.asarray(val) for key, val in batch.items()}
dataset = dataset.with_transform(numpy_transform)
``` | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 30 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... |
https://github.com/huggingface/datasets/issues/5514 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file` | Hi, thanks for noticing this! We can't just remove the cache control as this allows us to control where the arrow files generated by the ops are written (cached on disk if enabled or a temporary directory if disabled). The right way to address this inconsistency would be by having `load_from_cache_file=None` by default... | ### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`:
```
load_... | 54 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file`
### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documenta... |
https://github.com/huggingface/datasets/issues/5514 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file` | Hi! Yes, this seems more plausible. I can implement that. One last thing is the type annotation `load_from_cache_file: bool = None`. Which I then would change to `load_from_cache_file: Optional[bool] = None`. | ### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`:
```
load_... | 31 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file`
### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documenta... |
https://github.com/huggingface/datasets/issues/5513 | Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name? | Hi! Let's not do this - renaming it would be a breaking change, and going through the deprecation cycle is only worth it if it improves user experience. | Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type` which is a Python reserved name as you may already know, shouldn't that be renamed to `format_type` before the 3.0.0 is released?
Just wanted to get your inp... | 28 | Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name?
Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type` which is a Python reserved name as you may already know, ... |
https://github.com/huggingface/datasets/issues/5513 | Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name? | Hi @mariosasko, ok it makes sense. Anyway, don't you think it's worth it at some point to start a deprecation cycle e.g. `fs` in `load_from_disk`? It doesn't affect user experience but it's for sure a bad practice IMO, but's up to you 😄 Feel free to close this issue otherwise! | Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type` which is a Python reserved name as you may already know, shouldn't that be renamed to `format_type` before the 3.0.0 is released?
Just wanted to get your inp... | 50 | Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name?
Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type` which is a Python reserved name as you may already know, ... |
https://github.com/huggingface/datasets/issues/5511 | Creating a dummy dataset from a bigger one | Update `datasets` or downgrade `huggingface-hub` ;)
The `huggingface-hub` lib did a breaking change a few months ago, and you're using an old version of `datasets` that does't support it | ### Describe the bug
I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this especially when trying to upload the dataset to the Hub.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset... | 29 | Creating a dummy dataset from a bigger one
### Describe the bug
I often want to create a dummy dataset from a bigger dataset for fast iteration when training. However, I'm having a hard time doing this especially when trying to upload the dataset to the Hub.
### Steps to reproduce the bug
```python
from datasets ... |
https://github.com/huggingface/datasets/issues/5508 | Saving a dataset after setting format to torch doesn't work, but only if filtering | Hey, I'm a research engineer working on language modelling wanting to contribute to open source. I was wondering if I could give it a shot? | ### Describe the bug
Saving a dataset after setting format to torch doesn't work, but only if filtering
### Steps to reproduce the bug
```
a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save") # saves successfully
a.filter(None).save_to_disk("test_save_filter") # does not
>> [..... | 25 | Saving a dataset after setting format to torch doesn't work, but only if filtering
### Describe the bug
Saving a dataset after setting format to torch doesn't work, but only if filtering
### Steps to reproduce the bug
```
a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save") # s... |
https://github.com/huggingface/datasets/issues/5506 | IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs | Hi ! `datasets` doesn't do batching - the PyTorch DataLoader does and is created by the `Trainer`. Do you pass other arguments to training_args with respect to data loading ?
Also we recently released `.to_iterable_dataset` that does pretty much what you implemented, but using contiguous shards to get a better speed... | ### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous... | 61 | IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs
### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](ht... |
https://github.com/huggingface/datasets/issues/5506 | IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs | This is the full set of training args passed. No training args were changed when switching dataset types.
```python
training_args = TrainingArguments(
output_dir="./checkpoints",
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=256,
save_steps=2000,
save_total... | ### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous... | 43 | IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs
### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](ht... |
https://github.com/huggingface/datasets/issues/5506 | IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs | Makes sense. Given that it's a `transformers` issue and already being tracked, I'll close this out. | ### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous... | 16 | IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs
### Describe the bug
I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256.
Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](ht... |
https://github.com/huggingface/datasets/issues/5505 | PyTorch BatchSampler still loads from Dataset one-by-one | This change seems to come from a few months ago in the PyTorch side. That's good news and it means we may not need to pass a batch_sampler as soon as we add `Dataset.__getitems__` to get the optimal speed :)
Thanks for reporting ! Would you like to open a PR to add `__getitems__` and remove this outdated documentati... | ### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is a mistake in the docs or the code, but it seems that the on... | 60 | PyTorch BatchSampler still loads from Dataset one-by-one
### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is ... |
https://github.com/huggingface/datasets/issues/5505 | PyTorch BatchSampler still loads from Dataset one-by-one | Yeah I figured this was the sort of thing that probably once worked. I can confirm that you no longer need the batch sampler, just `batch_size=n` in the `DataLoader`.
I'll pass on the PR, I'm flat out right now, sorry. | ### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is a mistake in the docs or the code, but it seems that the on... | 40 | PyTorch BatchSampler still loads from Dataset one-by-one
### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is ... |
https://github.com/huggingface/datasets/issues/5499 | `load_dataset` has ~4 seconds of overhead for cached data | Hi ! To skip the verification step that checks if newer data exist, you can enable offline mode with `HF_DATASETS_OFFLINE=1`.
Although I agree this step should be much faster for datasets hosted on the HF Hub - we could just compare the commit hash from the local data and the remote git repository. We're not been le... | ### Feature request
When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should take to fetch the dataset from disk (or memory).
This is particularly noticeable for smaller datasets. For example, wikitext-2, comparing `load_data` (once cached) and `load_from_disk... | 77 | `load_dataset` has ~4 seconds of overhead for cached data
### Feature request
When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should take to fetch the dataset from disk (or memory).
This is particularly noticeable for smaller datasets. For example, wikitex... |
https://github.com/huggingface/datasets/issues/5597 | in-place dataset update | We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.
In your case the old dataset is garbage collected pretty quickly so you won't have memory issues.
Note that datasets loaded from disk (memory mapped) are not loaded in memory,... | ### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | 63 | in-place dataset update
### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets im... |
https://github.com/huggingface/datasets/issues/5597 | in-place dataset update | Thank you for your detailed reply.
> In your case the old dataset is garbage collected pretty quickly so you won't have memory issues.
I understand this, but it still copies the old dataset to create the new one, is this correct? So maybe it is not memory-consuming, but time-consuming? | ### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | 50 | in-place dataset update
### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets im... |
https://github.com/huggingface/datasets/issues/5597 | in-place dataset update | Indeed, and because of that it is more efficient to add multiple rows at once instead of one by one, using `concatenate_datasets` for example. | ### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | 24 | in-place dataset update
### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets im... |
https://github.com/huggingface/datasets/issues/5596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | Apparently some JSON objects have a `"labels"` field. Since this field is not present in every object, you must specify all the fields types in the README.md
EDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data | ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr... | 48 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
... |
https://github.com/huggingface/datasets/issues/5596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | We've updated the dataset to remove the extra `labels` field from some files, closing this issue. Thanks! | ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr... | 17 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
... |
https://github.com/huggingface/datasets/issues/5594 | Error while downloading the xtreme udpos dataset | Hi! I cannot reproduce this error on my machine.
The raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:
```python
train_dataset = load_dataset('xtreme', 'udpos.English', split="train", cache_dir=args.cache_dir, download_mode... | ### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1... | 45 | Error while downloading the xtreme udpos dataset
### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtre... |
https://github.com/huggingface/datasets/issues/5586 | .sort() is broken when used after .filter(), only in 2.10.0 | Thanks for reporting and thanks @mariosasko for fixing ! We just did a patch release `2.10.1` with the fix | ### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError.
... | 19 | .sort() is broken when used after .filter(), only in 2.10.0
### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of t... |
https://github.com/huggingface/datasets/issues/5585 | Cache is not transportable | Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.
In particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because some hash... | ### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move to cache to the host Windows machine, thereby sharing the downloads.
I... | 85 | Cache is not transportable
### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move to cache to the host Windows machine, thereb... |
https://github.com/huggingface/datasets/issues/5584 | Unable to load coyo700M dataset | Hi @manuaero
Thank you for your interest in the COYO dataset.
Our dataset provides the img-url and alt-text in the form of a parquet, so to utilize the coyo dataset you will need to download it directly.
We provide a [guide](https://github.com/kakaobrain/coyo-dataset/blob/main/download/README.md) to download,... | ### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and preparing dataset parquet/kakaobrain--coy... | 49 | Unable to load coyo700M dataset
### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and prepari... |
https://github.com/huggingface/datasets/issues/5577 | Cannot load `the_pile_openwebtext2` | Hi! I've merged a PR to use `int32` instead of `int8` for `reddit_scores`, so it should work now.
| ### Describe the bug
I met the same bug mentioned in #3053 which is never fixed. Because several `reddit_scores` are larger than `int8` even `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
```python3
from datasets import load... | 18 | Cannot load `the_pile_openwebtext2`
### Describe the bug
I met the same bug mentioned in #3053 which is never fixed. Because several `reddit_scores` are larger than `int8` even `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
... |
https://github.com/huggingface/datasets/issues/5575 | Metadata for each column | Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level, so implementing this should be straightforward. The API I have in mind would work as follows:
```python
col_feature = Value("string", metadata="Some column-level metadata")
features = Features({"col": col_featur... | ### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will bring the motivation by an example, lets say we are experimenting with embedding produced by some image encoder network, and we want to iterate through a couple of preprocessing and see which on... | 47 | Metadata for each column
### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will bring the motivation by an example, lets say we are experimenting with embedding produced by some image encoder network, and we want to iterate through a couple of pre... |
https://github.com/huggingface/datasets/issues/5574 | c4 dataset streaming fails with `FileNotFoundError` | Also encountering this issue for every dataset I try to stream! Installed datasets from main:
```
- `datasets` version: 2.10.1.dev0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
```
Repro:
```python
from datasets import load_dataset
spig... | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | 655 | c4 dataset streaming fails with `FileNotFoundError`
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset... |
https://github.com/huggingface/datasets/issues/5574 | c4 dataset streaming fails with `FileNotFoundError` | This problem now appears again, this time with an underlying HTTP 502 status code:
```
aiohttp.client_exceptions.ClientResponseError: 502, message='Bad Gateway', url=URL('https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-validation.00002-of-00008.json.gz')
``` | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | 21 | c4 dataset streaming fails with `FileNotFoundError`
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset... |
https://github.com/huggingface/datasets/issues/5574 | c4 dataset streaming fails with `FileNotFoundError` | Re-executing a minute later, the underlying cause is an HTTP 403 status code, as reported yesterday:
```
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/4bf6b248b0f910dcde2cdf2118d6369d8208c8f9515ec29ab73e531f380b18e2?response-cont... | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | 22 | c4 dataset streaming fails with `FileNotFoundError`
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset... |
https://github.com/huggingface/datasets/issues/5571 | load_dataset fails for JSON in windows | Hi!
You need to pass an input json file explicitly as `data_files` to `load_dataset` to avoid this error:
```python
ds = load_dataset("json", data_files=args.input_json)
```
| ### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of python file is di... | 24 | load_dataset fails for JSON in windows
### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local Py... |
https://github.com/huggingface/datasets/issues/5570 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub | Hi, thanks for the feedback! Would it help to add a tip or note saying the dataset is gated and you need to accept the license before downloading it? | ### Describe the bug
When calling ```load_dataset('imagenet-1k')``` FileNotFoundError is raised, if not logged in and if logged in with huggingface-cli but not having accepted the licence on the hub. There is no error once accepting.
### Steps to reproduce the bug
```
from datasets import load_dataset
imagenet =... | 29 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub
### Describe the bug
When calling ```load_dataset('imagenet-1k')``` FileNotFoundError is raised, if not logged in and if logged in with huggingface-cli but not having accepted the licence on the hub. There is no error once acce... |
https://github.com/huggingface/datasets/issues/5568 | dataset.to_iterable_dataset() loses useful info like dataset features | Hi ! Oh good catch. I think the features should be passed to `IterableDataset.from_generator()` in `to_iterable_dataset()` indeed.
Setting this as a good first issue if someone would like to contribute, otherwise we can take care of it :) | ### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata like the features.
These metadata are useful if you want to interleav... | 38 | dataset.to_iterable_dataset() loses useful info like dataset features
### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata l... |
https://github.com/huggingface/datasets/issues/5568 | dataset.to_iterable_dataset() loses useful info like dataset features | seems like the feature parameter is missing from `return IterableDataset.from_generator(Dataset._iter_shards, gen_kwargs={"shards": shards})` hence it defaults to None. | ### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata like the features.
These metadata are useful if you want to interleav... | 17 | dataset.to_iterable_dataset() loses useful info like dataset features
### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata l... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | Hi ! The indices mapping is written in the same cachedirectory as your dataset.
Can you run this to show your current cache directory ?
```python
print(train_dataset.cache_files)
``` | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 28 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | ```
[{'filename': '.../train/dataset.arrow'}, {'filename': '.../train/dataset.arrow'}]
```
These are the actual paths where `.hf` files are stored. | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 16 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | I'm not aware of any `.hf` file ? What are you referring to ?
Also the error says "Protocol unknown: parent". Is there a chance you may have ended up with a path that contains this string `parent://` ? | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 39 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | I figured out why the issue was occuring but don't know the long-term fix.
The dataset I was trying to shuffle was loaded from a saved file which had `::` delimiter in filename. When I try with the exact same file without `::` in filename, it works as expected.
Quick fix is to not use colons in filename. But if this ... | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 76 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... |
https://github.com/huggingface/datasets/issues/5546 | Downloaded datasets do not cache at $HF_HOME | Hi ! Can you make sure you set `HF_HOME` before importing `datasets` ?
Then you can print
```python
print(datasets.config.HF_CACHE_HOME)
print(datasets.config.HF_DATASETS_CACHE)
``` | ### Describe the bug
In the huggingface course (https://huggingface.co/course/chapter3/2?fw=pt) it said that if we set HF_HOME, downloaded datasets would be cached at specified address but it does not. downloaded models from checkpoint names are downloaded and cached at HF_HOME but this is not the case for datasets, t... | 21 | Downloaded datasets do not cache at $HF_HOME
### Describe the bug
In the huggingface course (https://huggingface.co/course/chapter3/2?fw=pt) it said that if we set HF_HOME, downloaded datasets would be cached at specified address but it does not. downloaded models from checkpoint names are downloaded and cached at H... |
https://github.com/huggingface/datasets/issues/5543 | the pile datasets url seems to change back | Thanks for reporting, @wjfwzzc.
I am transferring this issue to the corresponding dataset on the Hub: https://huggingface.co/datasets/bookcorpusopen/discussions/1 | ### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
... | 17 | the pile datasets url seems to change back
### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_datase... |
https://github.com/huggingface/datasets/issues/5543 | the pile datasets url seems to change back | Thank you. All fixes are done:
- [x] https://huggingface.co/datasets/bookcorpusopen/discussions/2
- [x] https://huggingface.co/datasets/the_pile/discussions/1
- [x] https://huggingface.co/datasets/the_pile_books3/discussions/1
- [x] https://huggingface.co/datasets/the_pile_openwebtext2/discussions/2
- [x] https://... | ### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
... | 21 | the pile datasets url seems to change back
### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_datase... |
https://github.com/huggingface/datasets/issues/5541 | Flattening indices in selected datasets is extremely inefficient | Running the script above on the branch https://github.com/huggingface/datasets/pull/5542 results in the expected behaviour:
```
Num chunks for original ds: 1
Original ds save/load
save_to_disk -- RAM memory used: 0.671875 MB -- Total time: 0.255265 s
load_from_disk -- RAM memory used: 42.796875 MB -- Total time: 0... | ### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. Thi... | 117 | Flattening indices in selected datasets is extremely inefficient
### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat datase... |
https://github.com/huggingface/datasets/issues/5539 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number | Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:
```python
from datasets import load_dataset
import torch
dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='train')
def t(bat... | ### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/py... | 78 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib... |
https://github.com/huggingface/datasets/issues/5539 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number | > Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:
>
> ```python
> from datasets import load_dataset
> import torch
>
> dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='tr... | ### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/py... | 104 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib... |
https://github.com/huggingface/datasets/issues/5538 | load_dataset in seaborn is not working for me. getting this error. | Hi! `seaborn`'s `load_dataset` pulls datasets from [here](https://github.com/mwaskom/seaborn-data) and not from our Hub, so this issue is not related to our library in any way and should be reported in their repo instead. | TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selector, req.data, headers,
1347 encode_chu... | 32 | load_dataset in seaborn is not working for me. getting this error.
TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selec... |
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | Hi ! `enc` is not hashable:
```python
import tiktoken
from datasets.fingerprint import Hasher
enc = tiktoken.get_encoding("gpt2")
Hasher.hash(enc)
# raises TypeError: cannot pickle 'builtins.CoreBPE' object
```
It happens because it's not picklable, and because of that it's not possible to cache the result of... | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 83 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... |
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | @lhoestq Thank you for the explanation and advice. Will relay all of this to the repo where this (non)issue arose.
Great job with huggingface! | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 24 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... |
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | Just a heads up that when I'm trying to use TikToken along with the a given Dataset `.map()` method, I am still met with the following error :
```
File "/opt/conda/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/opt/conda/lib/python3.8/... | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 60 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... |
https://github.com/huggingface/datasets/issues/5534 | map() breaks at certain dataset size when using Array3D | Hi! This code works for me locally or in Colab. What's the output of `python -c "import pyarrow as pa; print(pa.__version__)"` when you run it inside your environment? | ### Describe the bug
`map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with the following exception:
```
Traceback (most recent cal... | 28 | map() breaks at certain dataset size when using Array3D
### Describe the bug
`map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with... |
https://github.com/huggingface/datasets/issues/5532 | train_test_split in arrow_dataset does not ensure to keep single classes in test set | Hi! You can get this behavior by specifying `stratify_by_column="label"` in `train_test_split`.
This is the full example:
```python
import numpy as np
from datasets import Dataset, ClassLabel
data = [
{'label': 0, 'text': "example1"},
{'label': 1, 'text': "example2"},
{'label': 1, 'text': "examp... | ### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will be in the test set. thus will never be considered for training.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
... | 88 | train_test_split in arrow_dataset does not ensure to keep single classes in test set
### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will be in the test set. thus will never be considered for training.
##... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | Thanks for reporting, @TJ-Solergibert.
We cannot access your Colab notebook: `There was an error loading this notebook. Ensure that the file is accessible and try again.`
Could you please make it publicly accessible?
| ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 33 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | I swear it's public, I've checked the settings and I've been able to open it in incognito mode.
Notebook: https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?usp=sharing
Anyway, this is the code to reproduce the error:
```python3
from datasets import ClassLabel
from datasets import load... | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 226 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | Thanks, @TJ-Solergibert. I can access your notebook now. Maybe it was just a temporary issue.
At first sight, it seems something related to your data: maybe some of the examples do not have all the transcriptions for all the languages. Then, some of them are null when unrolled. And when trying to concatenate with th... | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 80 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | See, in this example, "nl" and "ro" transcripts are null:
```python
>>> europarl_ds["test"][:1]
{'original_speech': ['− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta'],
'original_lang... | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 458 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | You can fix this issue by forcing the cast of None to str by hand:
- If you replace this line:
```python
source_t += batch[src_lang]
```
- With this line (because the batch size is 1):
```python
source_t += [str(batch[src_lang][0])]
```
- Or with this line (if the batch size were larger than 1):
```python
so... | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 63 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | Problem solved! Thanks @albertvillanova, now I have even increased the batch size and it's crazy fast :rocket: ! | ### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I alredy tried reseting the shorter strings... | 18 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.