| html_url (string, length 51–51) | title (string, length 6–280) | comments (string, length 67–24.7k) | body (string, length 51–36.2k) | __index_level_0__ (int64, 1–1.17k) | comment_length (int64, 16–1.45k) | text (string, length 190–38.3k) | embeddings (list) |
|---|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/5638 | xPath to implement all operations for Path | `xPath` is an internal component (it doesn't have a leading underscore in the name, but it should) not meant to be used outside of `datasets`, and it's only tested on HTTP URLs, not S3.
| ### Feature request
The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly. It should instead rely on `fsspec` methods rather than defaulting to `Path` methods, which only work locally.
### Motivation
I'm using... | 256 | 34 | xPath to implement all operations for Path
### Feature request
Current xPath implementation is a great extension of Path in order to work with remote objects. However some methods such as `mkdir` are not implemented correctly. It should instead rely on `fsspec` methods, instead of defaulting do `Path` methods which ... | [
-1.0096476078033447,
-0.856399416923523,
-0.9224860668182373,
1.4335403442382812,
-0.21319352090358734,
-1.237447738647461,
0.24868834018707275,
-1.1852760314941406,
1.7969523668289185,
-0.8976537585258484,
0.4041021764278412,
-1.5997945070266724,
0.11226208508014679,
-0.5927440524101257,
... |
https://github.com/huggingface/datasets/issues/5638 | xPath to implement all operations for Path | Okay I understand that xPath won't support my usecase. What I was perhaps getting to is why not use UPath in `datasets` instead of `xPath` if UPath seems to have strictly more robust implementations. | ### Feature request
The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly. It should instead rely on `fsspec` methods rather than defaulting to `Path` methods, which only work locally.
### Motivation
I'm using... | 256 | 34 | xPath to implement all operations for Path
### Feature request
Current xPath implementation is a great extension of Path in order to work with remote objects. However some methods such as `mkdir` are not implemented correctly. It should instead rely on `fsspec` methods, instead of defaulting do `Path` methods which ... | [
-0.9839048385620117,
-0.7708210945129395,
-0.9500556588172913,
1.4291080236434937,
-0.1828235238790512,
-1.2047094106674194,
0.25649920105934143,
-1.209984540939331,
1.7841482162475586,
-0.8390918970108032,
0.42940062284469604,
-1.5716805458068848,
0.1252724677324295,
-0.5470800399780273,
... |
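For context, a rough sketch (not tested against `datasets` itself) of what the `universal_pathlib` approach asked about above could look like; the S3 bucket is hypothetical and `s3fs` would need to be installed:
```
from upath import UPath  # provided by the universal_pathlib package

# UPath dispatches to the matching fsspec filesystem based on the URL scheme,
# so directory operations go through fsspec instead of the local Path methods.
remote_dir = UPath("s3://my-bucket/processed")  # hypothetical bucket
remote_dir.mkdir(parents=True, exist_ok=True)
for p in remote_dir.glob("*.parquet"):
    print(p)
```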
https://github.com/huggingface/datasets/issues/5638 | xPath to implement all operations for Path | It seems like `universal_pathlib` does not support `fsspec` URL chaining (`::` is the chaining symbol) and "compression" filesystems (e.g., `zip`), but this is what we need to access and stream files from within an archive (e.g., we want to stream URLs such as this one: `zip://data.parquet::https://www.dummyurl.com/arc... | ### Feature request
The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly. It should instead rely on `fsspec` methods rather than defaulting to `Path` methods, which only work locally.
### Motivation
I'm using... | 256 | 46 | xPath to implement all operations for Path
### Feature request
Current xPath implementation is a great extension of Path in order to work with remote objects. However some methods such as `mkdir` are not implemented correctly. It should instead rely on `fsspec` methods, instead of defaulting do `Path` methods which ... | [
-1.021508812904358,
-0.8691565990447998,
-0.8554427623748779,
1.4810752868652344,
-0.1754007488489151,
-1.2783641815185547,
0.23315665125846863,
-1.144027590751648,
1.7115973234176636,
-0.8197272419929504,
0.4234815835952759,
-1.6065181493759155,
0.13080665469169617,
-0.6197822690010071,
... |
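As a reference for the chaining syntax mentioned above, a minimal `fsspec` sketch (the archive URL is a dummy placeholder, like in the comment):
```
import fsspec

# "::" chains filesystems: here, read data.parquet from inside a remote zip archive
url = "zip://data.parquet::https://www.dummyurl.com/archive.zip"  # placeholder URL
with fsspec.open(url, "rb") as f:
    header = f.read(4)  # first bytes of the parquet file inside the archive
```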
https://github.com/huggingface/datasets/issues/5637 | IterableDataset with_format does not support 'device' keyword for jax | Hi! Yes, only `torch` is currently supported. Unlike `Dataset`, `IterableDataset` is not PyArrow-backed, so we cannot simply call `to_numpy` on the underlying subtables to format them numerically. Instead, we must manually convert examples to (numeric) arrays while preserving consistency with `Dataset`, which is not tr... | ### Describe the bug
As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'devi... | 257 | 51 | IterableDataset with_format does not support 'device' keyword for jax
### Describe the bug
As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the ... | [
-1.2325011491775513,
-0.940279483795166,
-0.6965106129646301,
1.4070303440093994,
-0.1233501210808754,
-1.2415097951889038,
0.17469142377376556,
-1.0437079668045044,
1.6669445037841797,
-0.774493932723999,
0.34157007932662964,
-1.689767837524414,
0.0415092408657074,
-0.5208538770675659,
... |
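A short sketch contrasting the two cases discussed above; the dataset name is only an example, and the `IterableDataset` behaviour reflects the state of the library at the time of this issue:
```
import jax
from datasets import load_dataset

# Map-style Dataset: Arrow-backed, so "jax" formatting with a target device works
ds = load_dataset("rotten_tomatoes", split="train")
ds = ds.with_format("jax", device=jax.devices()[0])

# IterableDataset: not Arrow-backed; only "torch" formatting is supported here,
# and passing device= raises the TypeError from the report above
ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
ids = ids.with_format("torch")
```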
https://github.com/huggingface/datasets/issues/5637 | IterableDataset with_format does not support 'device' keyword for jax | Any plans to support it in the future? Or would streaming dataset be left without support for jax and tensorflow? | ### Describe the bug
As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'devi... | 257 | 20 | IterableDataset with_format does not support 'device' keyword for jax
### Describe the bug
As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the ... | [
-1.244858741760254,
-0.967814028263092,
-0.7181598544120789,
1.4467706680297852,
-0.1347845196723938,
-1.207514762878418,
0.1415264904499054,
-1.0039390325546265,
1.6438817977905273,
-0.7366711497306824,
0.31190264225006104,
-1.675411343574524,
0.03625664487481117,
-0.49465417861938477,
... |
https://github.com/huggingface/datasets/issues/5634 | Not all progress bars are showing up when they should for downloading dataset | Hi!
By default, tqdm has `leave=True` to "keep all traces of the progress bar upon the termination of iteration". However, we use `leave=False` in some places (as of recently), which removes the bar once the iteration is over.
I feel like our TQDM bars are noisy, so I think we should always set `leave=False` and... | ### Describe the bug
While downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117), as it raised the same concern, but it's not clear if the fix solves this issue too.
ipywidgets
<img width=... | 258 | 92 | Not all progress bars are showing up when they should for downloading dataset
### Describe the bug
During downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117) as it raised the same concern bu... | [
-1.3264236450195312,
-0.9384112358093262,
-0.6486805081367493,
1.4555368423461914,
-0.2238524854183197,
-1.1209741830825806,
0.16965985298156738,
-1.0841684341430664,
1.5862623453140259,
-0.8087064623832703,
0.2877905070781708,
-1.5388022661209106,
-0.00938345491886139,
-0.5007497668266296... |
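To illustrate the `leave` flag being discussed, a tiny standalone `tqdm` example (independent of the `datasets` internals):
```
import time
from tqdm.auto import tqdm

# leave=True (the default) keeps the finished bar on screen;
# leave=False clears it once the iteration is over.
for _ in tqdm(range(5), desc="kept", leave=True):
    time.sleep(0.1)
for _ in tqdm(range(5), desc="cleared", leave=False):
    time.sleep(0.1)
```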
https://github.com/huggingface/datasets/issues/5634 | Not all progress bars are showing up when they should for downloading dataset | Hi sorry for the late update. I think the problem still exists despite the `leave` flag
<img width="1105" alt="image" src="https://user-images.githubusercontent.com/110427462/226501615-5b02fb02-fd5f-4eda-b1f7-a7ed6570892d.png">
```
Package Version
------------------------ ---------
aiofiles ... | ### Describe the bug
While downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117), as it raised the same concern, but it's not clear if the fix solves this issue too.
ipywidgets
<img width=... | 258 | 373 | Not all progress bars are showing up when they should for downloading dataset
### Describe the bug
During downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117) as it raised the same concern bu... | [
-1.3264236450195312,
-0.9384112358093262,
-0.6486805081367493,
1.4555368423461914,
-0.2238524854183197,
-1.1209741830825806,
0.16965985298156738,
-1.0841684341430664,
1.5862623453140259,
-0.8087064623832703,
0.2877905070781708,
-1.5388022661209106,
-0.00938345491886139,
-0.5007497668266296... |
https://github.com/huggingface/datasets/issues/5633 | Cannot import datasets | Okay, the issue was likely caused by mixing `conda` and `pip` usage - I forgot that I have already used `pip` in this environment previously and that it was 'spoiled' because of it. Creating another environment and installing `datasets` by pip with other packages from the `requirements.txt` file solved the problem. | ### Describe the bug
Hi,
I cannot even import the library :( I installed it by running:
```
$ conda install datasets
```
Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran:
```
$ conda remove datasets
$ conda install -c huggingface datasets
```
Pl... | 259 | 51 | Cannot import datasets
### Describe the bug
Hi,
I cannot even import the library :( I installed it by running:
```
$ conda install datasets
```
Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran:
```
$ conda remove datasets
$ conda install -c hugg... | [
-1.1801728010177612,
-0.880652904510498,
-0.7368806600570679,
1.455385446548462,
-0.10973728448152542,
-1.303460955619812,
0.10291359573602676,
-1.125065803527832,
1.5669465065002441,
-0.6529191136360168,
0.2321343570947647,
-1.6736702919006348,
-0.13928870856761932,
-0.49055325984954834,
... |
https://github.com/huggingface/datasets/issues/5632 | Dataset cannot convert too large dictionnary | Answered on the forum:
> To fix the overflow error, we need to merge [support LargeListArray in pyarrow by xwwwwww · Pull Request #4800 · huggingface/datasets · GitHub](https://github.com/huggingface/datasets/pull/4800), which adds support for the large lists. However, before merging it, we need to come up with a cl... | ### Describe the bug
Hello everyone!
I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})".
However, I have a very large dataset (~400 GB), and it seems that `datasets` cannot handle this.
Indeed, I can create the dataset until a certain size of m... | 260 | 63 | Dataset cannot convert too large dictionnary
### Describe the bug
Hello everyone!
I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})".
However, I have a very large dataset (~400Go) and it seems that dataset cannot handle this.
Indeed, I c... | [
-1.3318010568618774,
-0.9440325498580933,
-0.7260366678237915,
1.464353084564209,
-0.1459263116121292,
-1.2061792612075806,
0.1252291053533554,
-1.0885341167449951,
1.6726020574569702,
-0.7644535899162292,
0.24400624632835388,
-1.6093380451202393,
0.10537301003932953,
-0.5523220300674438,
... |
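A possible workaround while large-list support is pending, assuming the raw arrays can be produced incrementally: building the dataset with `Dataset.from_generator` writes the examples to Arrow files in chunks instead of materializing one giant Python dict (the `iter_value_arrays` helper below is a hypothetical stand-in for however the ~400 GB of values is produced):
```
import numpy as np
from datasets import Dataset

def iter_value_arrays():
    # hypothetical stand-in for the real source of input_values
    for _ in range(10):
        yield np.random.randn(16_000).astype(np.float32)

def gen():
    for values in iter_value_arrays():
        yield {"input_values": values}

dict_valid = Dataset.from_generator(gen)
```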
https://github.com/huggingface/datasets/issues/5631 | Custom split names | Hi!
You can also use names other than "train", "validation" and "test". As an example, check the [script](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/blob/e095840f23f3dffc1056c078c2f9320dad9ca74d/common_voice_11_0.py#L139) of the Common Voice 11 dataset. | ### Feature request
Hi,
I have participated in multiple NLP tasks where there are more than just train, test, and validation splits; there could be multiple validation or test sets. But it seems that currently only those three splits are supported. It would be nice to have support for more splits on the hub. (curren... | 261 | 24 | Custom split names
### Feature request
Hi,
I participated in multiple NLP tasks where there are more than just train, test, validation splits, there could be multiple validation sets or test sets. But it seems currently only those mentioned three splits supported. It would be nice to have the support for more split... | [
-1.1770635843276978,
-0.9241767525672913,
-0.8355674147605896,
1.4496593475341797,
-0.1094215139746666,
-1.1831300258636475,
0.10229720920324326,
-1.085262417793274,
1.5207239389419556,
-0.775841236114502,
0.259285032749176,
-1.6633424758911133,
0.06859844923019409,
-0.5358826518058777,
... |
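A minimal loading-script sketch with non-standard split names, along the lines of the Common Voice script linked above (features and file names here are made up):
```
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Split names are plain strings, not limited to train/validation/test.
        return [
            datasets.SplitGenerator(name="train", gen_kwargs={"path": "train.txt"}),
            datasets.SplitGenerator(name="validated", gen_kwargs={"path": "validated.txt"}),
            datasets.SplitGenerator(name="other", gen_kwargs={"path": "other.txt"}),
        ]

    def _generate_examples(self, path):
        with open(path, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```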
https://github.com/huggingface/datasets/issues/5629 | load_dataset gives "403" error when using Financial phrasebank | Hi! You seem to be using an outdated version of `datasets` that downloads the older script version. To avoid the error, you can either pass `revision="main"` to `load_dataset` (this can fail if a script uses newer features of the lib) or update your installation with `pip install -U datasets` (better solution). | When I try to load this dataset, I receive the following error:
ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)
Has this been seen before? Thanks. The website loads ... | 262 | 51 | load_dataset gives "403" error when using Financial phrasebank
When I try to load this dataset, I receive the following error:
ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (er... | [
-1.1847862005233765,
-0.8617866039276123,
-0.7911846041679382,
1.467525601387024,
-0.2067996710538864,
-1.3115179538726807,
0.13070455193519592,
-1.1052074432373047,
1.5948033332824707,
-0.7261543273925781,
0.33966758847236633,
-1.6774890422821045,
0.057503167539834976,
-0.5273615121841431... |
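The two options from the comment above, spelled out; the config name is just one of the dataset's standard configurations:
```
from datasets import load_dataset

# Option 1 (preferred): update first with `pip install -U datasets`, then load normally
ds = load_dataset("financial_phrasebank", "sentences_allagree")

# Option 2: keep the current install but pin the dataset script revision
ds = load_dataset("financial_phrasebank", "sentences_allagree", revision="main")
```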
https://github.com/huggingface/datasets/issues/5627 | Unable to load AutoTrain-generated dataset from the hub | The AutoTrain format is not supported right now. I think it would require a dedicated dataset builder | ### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
... | 263 | 17 | Unable to load AutoTrain-generated dataset from the hub
### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: ... | [
-1.2668479681015015,
-1.1041215658187866,
-0.7530314922332764,
1.728272795677185,
-0.28150415420532227,
-0.9776759147644043,
0.0697939470410347,
-0.991932213306427,
1.5877076387405396,
-0.5884959697723389,
0.22780397534370422,
-1.5597028732299805,
-0.036122385412454605,
-0.7347760200500488... |
https://github.com/huggingface/datasets/issues/5627 | Unable to load AutoTrain-generated dataset from the hub | Okay, good to know. Thanks for the reply. For now I will just have to
manage the split manually before training, because I can’t find any way of
pulling out file indices or file names from the autogenerated split. The
file names field of the image dataset (loaded directly from arrow file) is
missing, just fyi (for anyo... | ### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
... | 263 | 131 | Unable to load AutoTrain-generated dataset from the hub
### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: ... | [
-1.2668479681015015,
-1.1041215658187866,
-0.7530314922332764,
1.728272795677185,
-0.28150415420532227,
-0.9776759147644043,
0.0697939470410347,
-0.991932213306427,
1.5877076387405396,
-0.5884959697723389,
0.22780397534370422,
-1.5597028732299805,
-0.036122385412454605,
-0.7347760200500488... |
https://github.com/huggingface/datasets/issues/5625 | Allow "jsonl" data type signifier | You can use "json" instead. It doesn't work by extension names, but rather by dataset builder names, e.g. "text", "imagefolder", etc. I don't think the example in `transformers` is correct because of that | ### Feature request
`load_dataset` currently does not accept `jsonl` as type but only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoundError: Couldn't find a dataset scri... | 264 | 33 | Allow "jsonl" data type signifier
### Feature request
`load_dataset` currently does not accept `jsonl` as type but only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoun... | [
-1.1171170473098755,
-0.9781569242477417,
-0.8593897819519043,
1.5335286855697632,
-0.12153265625238419,
-1.1575781106948853,
0.13303165137767792,
-1.0777931213378906,
1.7873183488845825,
-0.7732706665992737,
0.30229246616363525,
-1.6623814105987549,
-0.007482464425265789,
-0.6368702054023... |
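What the suggested workaround looks like in practice (file names are placeholders):
```
from datasets import load_dataset

# the builder is named "json"; it reads .jsonl (JSON Lines) files as well
dataset = load_dataset(
    "json",
    data_files={"train": "train.jsonl", "validation": "valid.jsonl"},
)
```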
https://github.com/huggingface/datasets/issues/5625 | Allow "jsonl" data type signifier | Yes, I understand the reasoning but this issue is to propose that the example in transformers (while incorrect) "makes sense" in terms of user expectation. So the question is whether it would be possible to add "aliases" for common types (like "json" and "text") based on common extensions (like jsonl and txt)? | ### Feature request
`load_dataset` currently does not accept `jsonl` as type but only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoundError: Couldn't find a dataset scri... | 264 | 52 | Allow "jsonl" data type signifier
### Feature request
`load_dataset` currently does not accept `jsonl` as type but only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoun... | [
-1.1289620399475098,
-0.965147078037262,
-0.8540281653404236,
1.5681935548782349,
-0.11368619650602341,
-1.1700540781021118,
0.13679121434688568,
-1.0804232358932495,
1.777850866317749,
-0.7727180123329163,
0.3149605393409729,
-1.648427963256836,
-0.0031464705243706703,
-0.6520736217498779... |
https://github.com/huggingface/datasets/issues/5624 | glue datasets returning -1 for test split | Hi @lithafnium, thanks for reporting.
Please note that you can use the "Community" tab in the corresponding dataset page to start any discussion: https://huggingface.co/datasets/glue/discussions
Indeed this issue was already raised there (https://huggingface.co/datasets/glue/discussions/5) and answered: https://h... | ### Describe the bug
Downloading any dataset from GLUE has -1 as class labels for test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online.
### Steps to reproduce the bug
```
dataset = load_dataset("glue", "sst2")
for d in dataset:
# prints out -1
... | 265 | 71 | glue datasets returning -1 for test split
### Describe the bug
Downloading any dataset from GLUE has -1 as class labels for test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online.
### Steps to reproduce the bug
```
dataset = load_dataset("glue", "sst2")
... | [
-1.155807614326477,
-0.882905125617981,
-0.7539591789245605,
1.4164730310440063,
-0.1072247251868248,
-1.2842718362808228,
0.1016727089881897,
-1.0471786260604858,
1.6512656211853027,
-0.7368374466896057,
0.2724594175815582,
-1.728262186050415,
-0.04369534179568291,
-0.5601991415023804,
... |
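A quick way to see the behaviour explained in the linked discussion, assuming the standard `glue`/`sst2` configuration:
```
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
print(set(dataset["train"]["label"]))       # {0, 1}
print(set(dataset["validation"]["label"]))  # {0, 1}
print(set(dataset["test"]["label"]))        # {-1}: GLUE withholds test labels
```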
https://github.com/huggingface/datasets/issues/5613 | Version mismatch with multiprocess and dill on Python 3.10 | Reopening, since I think the docs should inform the user of this problem. For example, [this page](https://huggingface.co/docs/datasets/installation) says
> Datasets is tested on Python 3.7+.
but it should probably say that Beam Datasets do not work with Python 3.10 (or link to a known issues page). | ### Describe the bug
Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is
```
File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module>
import datasets
File "/Users/adpauls/Library/Caches/... | 267 | 46 | Version mismatch with multiprocess and dill on Python 3.10
### Describe the bug
Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is
```
File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module>... | [
-1.2203803062438965,
-0.8654314875602722,
-0.62025386095047,
1.36741042137146,
-0.11054454743862152,
-1.3439956903457642,
0.06198142096400261,
-1.014704942703247,
1.522252082824707,
-0.6576555967330933,
0.19666792452335358,
-1.6717911958694458,
-0.2024473398923874,
-0.36462289094924927,
... |
https://github.com/huggingface/datasets/issues/5613 | Version mismatch with multiprocess and dill on Python 3.10 | Same problem on Colab using a vanilla setup running :
Python 3.10.11
apache-beam 2.47.0
datasets 2.12.0 | ### Describe the bug
Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is
```
File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module>
import datasets
File "/Users/adpauls/Library/Caches/... | 267 | 16 | Version mismatch with multiprocess and dill on Python 3.10
### Describe the bug
Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is
```
File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module>... | [
-1.2203803062438965,
-0.8654314875602722,
-0.62025386095047,
1.36741042137146,
-0.11054454743862152,
-1.3439956903457642,
0.06198142096400261,
-1.014704942703247,
1.522252082824707,
-0.6576555967330933,
0.19666792452335358,
-1.6717911958694458,
-0.2024473398923874,
-0.36462289094924927,
... |
https://github.com/huggingface/datasets/issues/5612 | Arrow map type in parquet files unsupported | I'm attaching a minimal reproducible example:
```python
from datasets import load_dataset
import pyarrow as pa
import pyarrow.parquet as pq
table_with_map = pa.Table.from_pydict(
{"a": [1, 2], "b": [[("a", 2)], [("b", 4)]]},
schema=pa.schema({"a": pa.int32(), "b": pa.map_(pa.string(), pa.int32())})
)
... | ### Describe the bug
When I try to load parquet files that were processed with Spark, I get the following issue:
`ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.`
Strangely, loading the dataset with `streaming=True` solves the issue.
### Steps to reproduce ... | 268 | 94 | Arrow map type in parquet files unsupported
### Describe the bug
When I try to load parquet files that were processed with Spark, I get the following issue:
`ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.`
Strangely, loading the dataset with `streaming=Tr... | [
-1.2015632390975952,
-0.8567219972610474,
-0.6773289442062378,
1.4378821849822998,
-0.18681499361991882,
-1.2755796909332275,
0.20724272727966309,
-1.0972925424575806,
1.644755244255066,
-0.8304726481437683,
0.3374823331832886,
-1.66536283493042,
0.08009473234415054,
-0.5665971636772156,
... |
https://github.com/huggingface/datasets/issues/5610 | use datasets streaming mode in trainer ddp mode cause memory leak | Same problem,
transformers 4.28.1
datasets 2.12.0
leak of around 100 MB per 10 seconds when using dataloader_num_workers > 0 in the training arguments for the Transformers trainer; possibly a bug in the transformers repo, but still no solution found :(
| ### Describe the bug
use datasets streaming mode in trainer ddp mode cause memory leak
### Steps to reproduce the bug
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, Sequenti... | 269 | 34 | use datasets streaming mode in trainer ddp mode cause memory leak
### Describe the bug
use datasets streaming mode in trainer ddp mode cause memory leak
### Steps to reproduce the bug
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.da... | [
-1.3668327331542969,
-1.0391124486923218,
-0.6425808668136597,
1.5796233415603638,
-0.1771867275238037,
-1.1103981733322144,
0.11056772619485855,
-1.0720235109329224,
1.5092308521270752,
-0.8274769186973572,
0.2660912275314331,
-1.649958610534668,
-0.04564975947141647,
-0.5367396473884583,... |
https://github.com/huggingface/datasets/issues/5610 | use datasets streaming mode in trainer ddp mode cause memory leak | I found an article describing the problem; it may be helpful for somebody:
https://ppwwyyxx.com/blog/2022/Demystify-RAM-Usage-in-Multiprocess-DataLoader/
I can confirm it's not a memory leak; after some time the memory growth stops. | ### Describe the bug
use datasets streaming mode in trainer ddp mode cause memory leak
### Steps to reproduce the bug
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, Sequenti... | 269 | 25 | use datasets streaming mode in trainer ddp mode cause memory leak
### Describe the bug
use datasets streaming mode in trainer ddp mode cause memory leak
### Steps to reproduce the bug
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.da... | [
-1.3668327331542969,
-1.0391124486923218,
-0.6425808668136597,
1.5796233415603638,
-0.1771867275238037,
-1.1103981733322144,
0.11056772619485855,
-1.0720235109329224,
1.5092308521270752,
-0.8274769186973572,
0.2660912275314331,
-1.649958610534668,
-0.04564975947141647,
-0.5367396473884583,... |
https://github.com/huggingface/datasets/issues/5609 | `load_from_disk` vs `load_dataset` performance. | Hi! We've recently made some improvements to `save_to_disk`/`load_from_disk` (100x faster in some scenarios), so it would help if you could install `datasets` directly from `main` (`pip install git+https://github.com/huggingface/datasets.git`) and re-run the "benchmark". | ### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_di... | 270 | 32 | `load_from_disk` vs `load_dataset` performance.
### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache m... | [
-1.1461586952209473,
-0.9332696795463562,
-0.7413451075553894,
1.4626514911651611,
-0.14524132013320923,
-1.210001826286316,
0.1552080363035202,
-1.0253177881240845,
1.6445262432098389,
-0.8235805630683899,
0.35147538781166077,
-1.6538468599319458,
0.05077756941318512,
-0.6181231141090393,... |
https://github.com/huggingface/datasets/issues/5609 | `load_from_disk` vs `load_dataset` performance. | @mariosasko has that fix been released to pip in the meantime? Asking because I'm still facing the same issue (regarding loading images from local paths):
```
dataset = load_dataset("csv", cache_dir="cache", data_files=["/STORAGE/DATA/mijam/vit/code/list_filtered.csv"], num_proc=16, split="train").cast_column("image", Image()... | ### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_di... | 270 | 71 | `load_from_disk` vs `load_dataset` performance.
### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache m... | [
-1.1504758596420288,
-0.9288638830184937,
-0.7494857311248779,
1.460647702217102,
-0.14338603615760803,
-1.2090320587158203,
0.14476484060287476,
-1.0342321395874023,
1.668833613395691,
-0.8153769373893738,
0.33879122138023376,
-1.6526466608047485,
0.06922627985477448,
-0.6100777983665466,... |
https://github.com/huggingface/datasets/issues/5609 | `load_from_disk` vs `load_dataset` performance. | @mjamroz I assume your CSV file stores image file paths. This means `save_to_disk` needs to embed the image bytes resulting in a much bigger Arrow file (than the initial one). Maybe specifying `num_shards` to make the Arrow files smaller can help (large Arrow files on some systems take a long time to load). | ### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_di... | 270 | 53 | `load_from_disk` vs `load_dataset` performance.
### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache m... | [
-1.1552221775054932,
-0.9428613781929016,
-0.7493132948875427,
1.4506059885025024,
-0.15418574213981628,
-1.2171801328659058,
0.1420518457889557,
-1.0230222940444946,
1.663267731666565,
-0.8269370198249817,
0.33807772397994995,
-1.652201771736145,
0.06728407740592957,
-0.6163331270217896,
... |
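A sketch of the `num_shards` suggestion, assuming a CSV of image paths similar to the one above (paths and the shard count are placeholders):
```
from datasets import Image, load_dataset, load_from_disk

ds = load_dataset("csv", data_files="list_filtered.csv", split="train")
ds = ds.cast_column("image", Image())

# more, smaller Arrow shards can load noticeably faster on some systems
ds.save_to_disk("list_filtered_dataset", num_shards=128)
ds = load_from_disk("list_filtered_dataset")
```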
https://github.com/huggingface/datasets/issues/5608 | audiofolder only creates dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files. | Hi!
> naming convention of mp3 files
Yes, this could be the problem. MP3 files should end with `.mp3`/`.MP3` to be recognized as audio files.
If the file names are not the culprit, can you paste the audio folder's directory structure to help us reproduce the error (e.g., by running the `tree "x"` command)? | ### Describe the bug
x = load_dataset("audiofolder", data_dir="x")
When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files) as the data_dir "x" has 20,000 mp3 files. Does anyone know what could possibly cause this (naming convention of mp3 files, etc.)
### Steps to reproduce the b... | 271 | 54 | audiofolder only creates dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files.
### Describe the bug
x = load_dataset("audiofolder", data_dir="x")
When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files) as the data_dir "x" has 20,000 mp3 files. Do... | [
-1.1904871463775635,
-0.9062251448631287,
-0.7016420364379883,
1.4522279500961304,
-0.2260921150445938,
-1.0890953540802002,
0.19326183199882507,
-0.9768805503845215,
1.6433956623077393,
-0.8800054788589478,
0.2999725043773651,
-1.7013241052627563,
0.052340276539325714,
-0.4880883395671844... |
https://github.com/huggingface/datasets/issues/5608 | audiofolder only creates dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files. | Hi! I'm sorry, I don't want to reveal my entire dataset, but here's a snippet (all of the mp3 files below are some of the ones not being recognized by audiofolder. Also, for another dataset, audiofolder loaded zero mp3 files because "train" was in the name of one of the mp3 files.
my_dataset
├── data
│ ├── VHA_In... | ### Describe the bug
x = load_dataset("audiofolder", data_dir="x")
When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files) as the data_dir "x" has 20,000 mp3 files. Does anyone know what could possibly cause this (naming convention of mp3 files, etc.)
### Steps to reproduce the b... | 271 | 94 | audiofolder only creates dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files.
### Describe the bug
x = load_dataset("audiofolder", data_dir="x")
When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files) as the data_dir "x" has 20,000 mp3 files. Do... | [
-1.209779143333435,
-0.905281662940979,
-0.6140527129173279,
1.4679198265075684,
-0.18320265412330627,
-1.2238426208496094,
0.2682182490825653,
-0.9476800560951233,
1.6219302415847778,
-0.915643572807312,
0.2681034803390503,
-1.6018048524856567,
0.08765868842601776,
-0.5242804288864136,
... |
https://github.com/huggingface/datasets/issues/5606 | Add `Dataset.to_list` to the API | Hello, I'm interested in this issue.
Is the code linked here the `Dataset.to_dict` implementation you are describing?
https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/arrow_dataset.py#L4633-L4667 | Since there is `Dataset.from_list` in the API, we should also add `Dataset.to_list` to be consistent.
Regarding the implementation, we can re-use `Dataset.to_dict`'s code and replace the `to_pydict` calls with `to_pylist`. | 272 | 20 | Add `Dataset.to_list` to the API
Since there is `Dataset.from_list` in the API, we should also add `Dataset.to_list` to be consistent.
Regarding the implementation, we can re-use `Dataset.to_dict`'s code and replace the `to_pydict` calls with `to_pylist`.
Hello, I have an interest in this issue.
Is the `Datase... | [
-1.1085078716278076,
-0.7848642468452454,
-0.7754935622215271,
1.4184117317199707,
-0.15377268195152283,
-1.3009488582611084,
0.2207460254430771,
-1.2125113010406494,
1.70610511302948,
-0.8192591071128845,
0.3086867928504944,
-1.6503373384475708,
-0.025359585881233215,
-0.5635589361190796,... |
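Until `Dataset.to_list` lands, a user-level equivalent of what it would return (the implementation itself could mirror `to_dict` as discussed above, with `to_pydict` swapped for `to_pylist`):
```
from datasets import Dataset

ds = Dataset.from_list([{"a": 1, "b": "x"}, {"a": 2, "b": "y"}])

# iterating a Dataset already yields one dict per row
rows = list(ds)
assert rows == [{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]
```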
https://github.com/huggingface/datasets/issues/5604 | Problems with downloading The Pile | Hi!
You can specify `download_config=DownloadConfig(resume_download=True))` in `load_dataset` to resume the download when re-running the code after the timeout error:
```python
from datasets import load_dataset, DownloadConfig
dataset = load_dataset('the_pile', split='train', cache_dir='F:\datasets', download_... | ### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:

Here are the down... | [
-1.2368625402450562,
-0.8774968981742859,
-0.7400590777397156,
1.444351077079773,
-0.1186927780508995,
-1.2102718353271484,
0.055873047560453415,
-0.9798559546470642,
1.5522856712341309,
-0.7297624349594116,
0.2441541701555252,
-1.6731832027435303,
-0.014697623439133167,
-0.562451303005218... |
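Based on the comment's own description, the (truncated) call above would look roughly like this; the cache directory is machine-specific:
```
from datasets import DownloadConfig, load_dataset

dataset = load_dataset(
    "the_pile",
    split="train",
    cache_dir="F:/datasets",  # machine-specific path from the comment
    download_config=DownloadConfig(resume_download=True),
)
```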
https://github.com/huggingface/datasets/issues/5604 | Problems with downloading The Pile | @mariosasko, I used your suggestion but it's not saving anything; it just stops and runs from the same point.
Below is the script to download the dataset and save it on disk.
```
from datasets import load_dataset, DownloadConfig
#load the Pile dataset from Hugging Face Datasets
#dataset = load_dataset('the_pile')
dataset ... | ### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:

Here are the down... | [
-1.241477370262146,
-0.9204444885253906,
-0.7658193707466125,
1.4308555126190186,
-0.17130808532238007,
-1.1928671598434448,
0.06701197475194931,
-0.9976588487625122,
1.5688680410385132,
-0.7707803845405579,
0.1920558363199234,
-1.6609686613082886,
-0.01000867411494255,
-0.5565441846847534... |
https://github.com/huggingface/datasets/issues/5604 | Problems with downloading The Pile | @mariosasko , it shows nothing in dataset folder
```
du -sh /mnt/nlp/hugging_face/*
20K /mnt/nlp/hugging_face/datasets
4.0K /mnt/nlp/hugging_face/download_pile.py
```
| ### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:

Here are the down... | [
-1.197818636894226,
-0.8574449419975281,
-0.8052145838737488,
1.445061206817627,
-0.07909907400608063,
-1.2725484371185303,
0.01774512231349945,
-0.933093786239624,
1.5372090339660645,
-0.7141402959823608,
0.27452340722084045,
-1.6517730951309204,
0.019322596490383148,
-0.5421893000602722,... |
https://github.com/huggingface/datasets/issues/5604 | Problems with downloading The Pile | @mariosasko
```
root@d20f0ab8f4f8:/mnt/hugging_face# python3 download_pile.py
No config specified, defaulting to: the_pile/all
Downloading and preparing dataset the_pile/all to /mnt/hugging_face/datasets/the_pile/all/0.0.0/6fadc480ecb32470826cbf5900a9558b791ce55d5e9a0fdc8ad653e7b64bb349...
Downloading data file... | ### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:

Here are the down... | [
-1.203966736793518,
-0.8519333004951477,
-0.7993110418319702,
1.381014108657837,
-0.12639036774635315,
-1.2334026098251343,
0.05667369067668915,
-1.0094321966171265,
1.5582407712936401,
-0.709533154964447,
0.2136765867471695,
-1.6297366619110107,
0.017676159739494324,
-0.5309671759605408,
... |
https://github.com/huggingface/datasets/issues/5604 | Problems with downloading The Pile | Users with slow internet speed are doomed (4MB/s). The dataset downloads fine at minimum speed 10MB/s.
Also, when the train splits were generated and then I removed the downloads folder to save up disk space, it started redownloading the whole dataset. Is there any way to use the already generated splits instead? | ### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:

Here are the down... | [
-1.2314949035644531,
-0.8574203848838806,
-0.8153257966041565,
1.4107085466384888,
-0.10294856131076813,
-1.2306573390960693,
0.03640240430831909,
-0.9805265069007874,
1.5492911338806152,
-0.7054902911186218,
0.23757299780845642,
-1.63870370388031,
0.027599677443504333,
-0.5107318758964539... |
https://github.com/huggingface/datasets/issues/5604 | Problems with downloading The Pile | @sentialx @mariosasko, whenever you have time, could you check my script above: am I downloading and saving the dataset correctly? Please advise :) | ### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.

Here are the downloaded files:

Here are the down... | [
-1.2404416799545288,
-0.8615272045135498,
-0.7928768396377563,
1.4353107213974,
-0.13145187497138977,
-1.2064193487167358,
0.03153708204627037,
-0.9666385650634766,
1.5105797052383423,
-0.7303228378295898,
0.2456180900335312,
-1.6523255109786987,
0.008035275153815746,
-0.5417928099632263,
... |
https://github.com/huggingface/datasets/issues/5601 | Authorization error | Hi!
It's better to report this kind of issue in the `huggingface_hub` repo, so if you still haven't resolved it, I suggest you open an issue there. | ### Describe the bug
I get an `Authorization error` when trying to push data to the Hugging Face datasets hub.
### Steps to reproduce the bug
I did all steps in the [tutorial](https://huggingface.co/docs/datasets/share),
1. `huggingface-cli login` with WRITE token
2. `git lfs install`
3. `git clone https://huggingfa... | 274 | 27 | Authorization error
### Describe the bug
I get an `Authorization error` when trying to push data to the Hugging Face datasets hub.
### Steps to reproduce the bug
I did all steps in the [tutorial](https://huggingface.co/docs/datasets/share),
1. `huggingface-cli login` with WRITE token
2. `git lfs install`
3. `git c... | [
-1.1315078735351562,
-0.8858757615089417,
-0.7515595555305481,
1.4895168542861938,
-0.06980527192354202,
-1.3011276721954346,
0.1188325583934784,
-0.9685966968536377,
1.7310949563980103,
-0.6481048464775085,
0.2915382981300354,
-1.7327780723571777,
-0.05241468548774719,
-0.6085132360458374... |
https://github.com/huggingface/datasets/issues/5601 | Authorization error | Yeah, I solved it. The problem was with the osxkeychain credential helper. When I ran `huggingface-cli login`, it added a token for the default account (username) `hg_user`, but my repo belongs to a different username. After I changed the username in the keychain, it worked. | ### Describe the bug
I get an `Authorization error` when trying to push data to the Hugging Face datasets hub.
### Steps to reproduce the bug
I did all steps in the [tutorial](https://huggingface.co/docs/datasets/share),
1. `huggingface-cli login` with WRITE token
2. `git lfs install`
3. `git clone https://huggingfa... | 274 | 36 | Authorization error
### Describe the bug
I get an `Authorization error` when trying to push data to the Hugging Face datasets hub.
### Steps to reproduce the bug
I did all steps in the [tutorial](https://huggingface.co/docs/datasets/share),
1. `huggingface-cli login` with WRITE token
2. `git lfs install`
3. `git c... | [
-1.1269416809082031,
-0.9064958095550537,
-0.7256932258605957,
1.4935580492019653,
-0.07282377034425735,
-1.3066577911376953,
0.13756318390369415,
-0.9786835312843323,
1.73699951171875,
-0.6406669616699219,
0.29947489500045776,
-1.7377969026565552,
-0.02259102649986744,
-0.5937290787696838... |
https://github.com/huggingface/datasets/issues/5600 | Dataloader getitem not working for DreamboothDatasets | Hi!
> (see example of DreamboothDatasets)
Could you please provide a link to it? If you are referring to the example in the `diffusers` repo, your issue is unrelated to `datasets` as that example uses `Dataset` from PyTorch to load data. | ### Describe the bug
Dataloader getitem is not working as before (see example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529))
moving Datasets to 2.8.0 solved the issue.
### Steps to reproduce the bug
1- using DreamBoothDataset ... | 275 | 41 | Dataloader getitem not working for DreamboothDatasets
### Describe the bug
Dataloader getitem is not working as before (see example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529))
moving Datasets to 2.8.0 solved the issue.
### S... | [
-1.2297148704528809,
-0.8961120843887329,
-0.8456498980522156,
1.4584673643112183,
-0.1974523812532425,
-1.2620317935943604,
0.039972636848688126,
-1.0766230821609497,
1.6056658029556274,
-0.81531822681427,
0.31427645683288574,
-1.7112020254135132,
0.03862027823925018,
-0.5696146488189697,... |
https://github.com/huggingface/datasets/issues/5597 | in-place dataset update | We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.
In your case the old dataset is garbage collected pretty quickly so you won't have memory issues.
Note that datasets loaded from disk (memory mapped) are not loaded in memory,... | ### Motivation
For the circumstance where I create an empty `Dataset` and keep appending new rows to it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is a more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | 276 | 63 | in-place dataset update
### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets im... | [
-1.1607862710952759,
-0.9026415944099426,
-0.7501826286315918,
1.5338371992111206,
-0.19454406201839447,
-1.325056791305542,
0.15530814230442047,
-1.088658094406128,
1.8104037046432495,
-0.7365054488182068,
0.24363689124584198,
-1.7707486152648926,
-0.06863647699356079,
-0.657507061958313,... |
https://github.com/huggingface/datasets/issues/5597 | in-place dataset update | Thank you for your detailed reply.
> In your case the old dataset is garbage collected pretty quickly so you won't have memory issues.
I understand this, but it still copies the old dataset to create the new one, is this correct? So maybe it is not memory-consuming, but time-consuming? | ### Motivation
For the circumstance where I create an empty `Dataset` and keep appending new rows to it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is a more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | 276 | 50 | in-place dataset update
### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets im... | [
-1.1633410453796387,
-0.9036456942558289,
-0.7431443333625793,
1.5265822410583496,
-0.2008284479379654,
-1.3133312463760376,
0.15216724574565887,
-1.095810055732727,
1.8013856410980225,
-0.7358940839767456,
0.2409515380859375,
-1.7724872827529907,
-0.0718618705868721,
-0.6564115285873413,
... |
https://github.com/huggingface/datasets/issues/5597 | in-place dataset update | Indeed, and because of that it is more efficient to add multiple rows at once instead of one by one, using `concatenate_datasets` for example. | ### Motivation
For the circumstance where I create an empty `Dataset` and keep appending new rows to it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is a more efficient way to do this.
```python
from datasets import Dataset
ds = Datas... | 276 | 24 | in-place dataset update
### Motivation
For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this.
```python
from datasets im... | [
-1.1636685132980347,
-0.9097646474838257,
-0.754844069480896,
1.5443434715270996,
-0.1957906037569046,
-1.3152470588684082,
0.1540081799030304,
-1.0803899765014648,
1.8026745319366455,
-0.7383818030357361,
0.2480280101299286,
-1.7666202783584595,
-0.06874053180217743,
-0.6501613259315491,
... |
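A small sketch of the batched-append pattern suggested above:
```
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({"a": [1, 2]})
new_rows = Dataset.from_dict({"a": [3, 4, 5]})

# adding many rows at once amortizes the copy, unlike appending row by row
ds = concatenate_datasets([ds, new_rows])
print(ds["a"])  # [1, 2, 3, 4, 5]
```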
https://github.com/huggingface/datasets/issues/5596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | Apparently some JSON objects have a `"labels"` field. Since this field is not present in every object, you must specify all the fields types in the README.md
EDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data | ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr... | 277 | 48 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
... | [
-1.2122060060501099,
-1.0282363891601562,
-0.7827973365783691,
1.5735336542129517,
-0.23098750412464142,
-1.0753028392791748,
0.11346933990716934,
-1.002651333808899,
1.5741065740585327,
-0.636767566204071,
0.25753581523895264,
-1.6488094329833984,
-0.03603915497660637,
-0.7343500256538391... |
https://github.com/huggingface/datasets/issues/5596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | We've updated the dataset to remove the extra `labels` field from some files, closing this issue. Thanks! | ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr... | 277 | 17 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
... | [
-1.2122060060501099,
-1.0282363891601562,
-0.7827973365783691,
1.5735336542129517,
-0.23098750412464142,
-1.0753028392791748,
0.11346933990716934,
-1.002651333808899,
1.5741065740585327,
-0.636767566204071,
0.25753581523895264,
-1.6488094329833984,
-0.03603915497660637,
-0.7343500256538391... |
https://github.com/huggingface/datasets/issues/5596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | A similar error occurs in the Pile dataset (EleutherAI/the_pile)
Loading the dataset produces the following error.
```
TypeError: Couldn't cast array of type
struct<file: string, id: string>
to
{'id': Value(dtype='string', id=None)}
```
| ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr... | 277 | 32 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset
### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
... | [
-1.2122060060501099,
-1.0282363891601562,
-0.7827973365783691,
1.5735336542129517,
-0.23098750412464142,
-1.0753028392791748,
0.11346933990716934,
-1.002651333808899,
1.5741065740585327,
-0.636767566204071,
0.25753581523895264,
-1.6488094329833984,
-0.03603915497660637,
-0.7343500256538391... |
https://github.com/huggingface/datasets/issues/5594 | Error while downloading the xtreme udpos dataset | Hi! I cannot reproduce this error on my machine.
The raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:
```python
train_dataset = load_dataset('xtreme', 'udpos.English', split="train", cache_dir=args.cache_dir, download_mode... | ### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1... | 278 | 45 | Error while downloading the xtreme udpos dataset
### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtre... | [
-1.144119143486023,
-0.8995024561882019,
-0.7754905819892883,
1.2917661666870117,
-0.10625427961349487,
-1.2262579202651978,
0.11179803311824799,
-1.0710818767547607,
1.5105105638504028,
-0.6548566818237305,
0.1799335777759552,
-1.678903341293335,
-0.07483324408531189,
-0.5819866061210632,... |
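Spelled out, the verification step suggested in this thread (config name taken from the report; `download_mode` can also be given as a `DownloadMode` enum value):
```
from datasets import load_dataset

# re-download the data files from scratch in case a cached archive is corrupted
train_dataset = load_dataset(
    "xtreme",
    "udpos.English",
    split="train",
    download_mode="force_redownload",
)
```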
https://github.com/huggingface/datasets/issues/5594 | Error while downloading the xtreme udpos dataset | Hi! Apologies for the delayed response! I tried the above and it doesn't solve the issue. Actually, the dataset gets downloaded most times, but sometimes this error occurs (at random afaik). Is it possible that there is a server issue for this particular dataset? I am able to download other datasets using the same code... | ### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1... | 278 | 158 | Error while downloading the xtreme udpos dataset
### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtre... | [
-1.144119143486023,
-0.8995024561882019,
-0.7754905819892883,
1.2917661666870117,
-0.10625427961349487,
-1.2262579202651978,
0.11179803311824799,
-1.0710818767547607,
1.5105105638504028,
-0.6548566818237305,
0.1799335777759552,
-1.678903341293335,
-0.07483324408531189,
-0.5819866061210632,... |
https://github.com/huggingface/datasets/issues/5594 | Error while downloading the xtreme udpos dataset | If this happens randomly, then this means the data file from the error message is not always downloaded correctly.
The only solution in this scenario is to download the dataset again by passing `download_mode="force_redownload"` to the `load_dataset` call. | ### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1... | 278 | 38 | Error while downloading the xtreme udpos dataset
### Describe the bug
Hi,
I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed
```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtre... | [
-1.144119143486023,
-0.8995024561882019,
-0.7754905819892883,
1.2917661666870117,
-0.10625427961349487,
-1.2262579202651978,
0.11179803311824799,
-1.0710818767547607,
1.5105105638504028,
-0.6548566818237305,
0.1799335777759552,
-1.678903341293335,
-0.07483324408531189,
-0.5819866061210632,... |
https://github.com/huggingface/datasets/issues/5586 | .sort() is broken when used after .filter(), only in 2.10.0 | Thanks for reporting and thanks @mariosasko for fixing ! We just did a patch release `2.10.1` with the fix | ### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError.
... | 279 | 19 | .sort() is broken when used after .filter(), only in 2.10.0
### Describe the bug
Hi, thank you for your support!
It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method.
After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of t... | [
-1.2049190998077393,
-0.9679717421531677,
-0.6971979737281799,
1.3593785762786865,
-0.16691051423549652,
-1.2836297750473022,
0.1095959022641182,
-1.0390392541885376,
1.6087050437927246,
-0.7305065989494324,
0.16934888064861298,
-1.7022000551223755,
-0.15895113348960876,
-0.510460615158081... |
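For reference, a self-contained sketch of the filter-then-sort pattern described in this report; the column names are made up. This pattern should reproduce the IndexError on 2.10.0, and installing the 2.10.1 patch release fixes it.

```python
from datasets import Dataset

ds = Dataset.from_dict({"label": [2, 0, 1], "text": ["c", "a", "b"]})
filtered = ds.filter(lambda example: example["label"] != 2)  # creates an indices mapping
print(filtered.sort("label")["label"])  # IndexError on 2.10.0, [0, 1] once 2.10.1 is installed
```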
https://github.com/huggingface/datasets/issues/5585 | Cache is not transportable | Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.
In particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because some hash... | ### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most of my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move the cache to the host Windows machine, thereby sharing the downloads.
I... | 280 | 85 | Cache is not transportable
### Describe the bug
I would like to share cache between two machines (a Windows host machine and a WSL instance).
I run most my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move to cache to the host Windows machine, thereb... | [
-1.1485437154769897,
-0.8953870534896851,
-0.7388613820075989,
1.4046502113342285,
-0.17850692570209503,
-1.3018282651901245,
0.14802056550979614,
-1.050010323524475,
1.6973915100097656,
-0.7715612649917603,
0.29215511679649353,
-1.6193428039550781,
0.05268409848213196,
-0.5444180965423584... |
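Not from the thread, but one common way to act on the shared-filesystem point above is to point both environments at the same cache directory before importing `datasets`; the mount path below is hypothetical.

```python
import os

# Must be set before `datasets` is imported, e.g. a Windows drive mounted inside WSL.
os.environ["HF_DATASETS_CACHE"] = "/mnt/c/Users/name/hf_datasets_cache"  # hypothetical path

import datasets
print(datasets.config.HF_DATASETS_CACHE)
```

As the comment notes, reloading raw downloads this way is far more reliable than expecting cached `map` results to be reused across different environments.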
https://github.com/huggingface/datasets/issues/5584 | Unable to load coyo700M dataset | Hi @manuaero
Thank you for your interest in the COYO dataset.
Our dataset provides the img-url and alt-text in the form of a parquet, so to utilize the coyo dataset you will need to download it directly.
We provide a [guide](https://github.com/kakaobrain/coyo-dataset/blob/main/download/README.md) to download,... | ### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and preparing dataset parquet/kakaobrain--coy... | 281 | 49 | Unable to load coyo700M dataset
### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and prepari... | [
-1.137178897857666,
-0.7717975378036499,
-0.6077971458435059,
1.4108400344848633,
0.042310185730457306,
-1.4374611377716064,
0.10425188392400742,
-0.9388829469680786,
1.5090742111206055,
-0.7684450149536133,
0.43047839403152466,
-1.666019320487976,
0.055913787335157394,
-0.5954462289810181... |
https://github.com/huggingface/datasets/issues/5577 | Cannot load `the_pile_openwebtext2` | Hi! I've merged a PR to use `int32` instead of `int8` for `reddit_scores`, so it should work now.
| ### Describe the bug
I met the same bug mentioned in #3053 which is never fixed. Because several `reddit_scores` are larger than `int8` even `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
```python3
from datasets import load... | 283 | 18 | Cannot load `the_pile_openwebtext2`
### Describe the bug
I met the same bug mentioned in #3053 which is never fixed. Because several `reddit_scores` are larger than `int8` even `int16`. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
### Steps to reproduce the bug
... | [
-1.129634976387024,
-0.756882905960083,
-0.7154749631881714,
1.4816962480545044,
-0.15962091088294983,
-1.3900498151779175,
0.138652965426445,
-1.0247220993041992,
1.6827644109725952,
-0.851656973361969,
0.3420817255973816,
-1.620510458946228,
0.028910677880048752,
-0.6427130699157715,
-... |
https://github.com/huggingface/datasets/issues/5575 | Metadata for each column | Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level metadata, so implementing this should be straightforward. The API I have in mind would work as follows:
```python
col_feature = Value("string", metadata="Some column-level metadata")
features = Features({"col": c... | ### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will bring the motivation by an example, lets say we are experimenting with embedding produced by some image encoder network, and we want to iterate through a couple of preprocessing and see which on... | 285 | 48 | Metadata for each column
### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will bring the motivation by an example, lets say we are experimenting with embedding produced by some image encoder network, and we want to iterate through a couple of pre... | [
-1.185550332069397,
-0.8893149495124817,
-0.9054270386695862,
1.6193114519119263,
-0.257426381111145,
-1.277082920074463,
0.1428319215774536,
-1.076081395149231,
1.5800749063491821,
-0.9815604090690613,
0.3370456099510193,
-1.6183128356933594,
0.16459067165851593,
-0.6849847435951233,
-0... |
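The `Value(..., metadata=...)` API in the comment above is a proposal, not something that existed at the time. For context, a small sketch of the PyArrow column-level and schema-level metadata the comment refers to:

```python
import pyarrow as pa

# Column-level metadata lives on the field, schema-level metadata on the schema.
field = pa.field("col", pa.string(), metadata={"description": "Some column-level metadata"})
schema = pa.schema([field], metadata={"source": "Some schema-level metadata"})

table = pa.table({"col": ["a", "b"]}, schema=schema)
print(table.schema.field("col").metadata)  # {b'description': b'Some column-level metadata'}
print(table.schema.metadata)               # {b'source': b'Some schema-level metadata'}
```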
https://github.com/huggingface/datasets/issues/5575 | Metadata for each column | Sorry for the late reply,
Yes, I think this is the most straight-forward approach with the things that we already have.
| ### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will bring the motivation by an example, lets say we are experimenting with embedding produced by some image encoder network, and we want to iterate through a couple of preprocessing and see which on... | 285 | 21 | Metadata for each column
### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will bring the motivation by an example, lets say we are experimenting with embedding produced by some image encoder network, and we want to iterate through a couple of pre... | [
-1.2464033365249634,
-0.9257323145866394,
-0.9485728740692139,
1.5743310451507568,
-0.27065420150756836,
-1.2942638397216797,
0.12672464549541473,
-1.0613806247711182,
1.598240613937378,
-1.0056735277175903,
0.3065873086452484,
-1.6189723014831543,
0.1597696840763092,
-0.6239144206047058,
... |
https://github.com/huggingface/datasets/issues/5574 | c4 dataset streaming fails with `FileNotFoundError` | Also encountering this issue for every dataset I try to stream! Installed datasets from main:
```
- `datasets` version: 2.10.1.dev0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.9.13
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
```
Repro:
```python
from datasets import load_dataset
spig... | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | 286 | 655 | c4 dataset streaming fails with `FileNotFoundError`
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset... | [
-1.18198561668396,
-0.8671308755874634,
-0.7479708194732666,
1.3915537595748901,
-0.18342715501785278,
-1.162192463874817,
0.1917634755373001,
-1.0304006338119507,
1.6828891038894653,
-0.7545842528343201,
0.3049739897251129,
-1.6680021286010742,
-0.0791458785533905,
-0.578832745552063,
-... |
https://github.com/huggingface/datasets/issues/5574 | c4 dataset streaming fails with `FileNotFoundError` | This problem now appears again, this time with an underlying HTTP 502 status code:
```
aiohttp.client_exceptions.ClientResponseError: 502, message='Bad Gateway', url=URL('https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-validation.00002-of-00008.json.gz')
``` | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | 286 | 21 | c4 dataset streaming fails with `FileNotFoundError`
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset... | [
-1.18198561668396,
-0.8671308755874634,
-0.7479708194732666,
1.3915537595748901,
-0.18342715501785278,
-1.162192463874817,
0.1917634755373001,
-1.0304006338119507,
1.6828891038894653,
-0.7545842528343201,
0.3049739897251129,
-1.6680021286010742,
-0.0791458785533905,
-0.578832745552063,
-... |
https://github.com/huggingface/datasets/issues/5574 | c4 dataset streaming fails with `FileNotFoundError` | Re-executing a minute later, the underlying cause is an HTTP 403 status code, as reported yesterday:
```
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/4bf6b248b0f910dcde2cdf2118d6369d8208c8f9515ec29ab73e531f380b18e2?response-cont... | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | 286 | 22 | c4 dataset streaming fails with `FileNotFoundError`
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset... | [
-1.18198561668396,
-0.8671308755874634,
-0.7479708194732666,
1.3915537595748901,
-0.18342715501785278,
-1.162192463874817,
0.1917634755373001,
-1.0304006338119507,
1.6828891038894653,
-0.7545842528343201,
0.3049739897251129,
-1.6680021286010742,
-0.0791458785533905,
-0.578832745552063,
-... |
https://github.com/huggingface/datasets/issues/5574 | c4 dataset streaming fails with `FileNotFoundError` | > It's been resolved again ;)
I'm experiencing the same issue when trying to load this dataset, `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/realnewslike/c4-train.00000-of-00512.json.gz` | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", ... | 286 | 19 | c4 dataset streaming fails with `FileNotFoundError`
### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundException`.
### Steps to reproduce the bug
```python
from datasets import load_dataset... | [
-1.18198561668396,
-0.8671308755874634,
-0.7479708194732666,
1.3915537595748901,
-0.18342715501785278,
-1.162192463874817,
0.1917634755373001,
-1.0304006338119507,
1.6828891038894653,
-0.7545842528343201,
0.3049739897251129,
-1.6680021286010742,
-0.0791458785533905,
-0.578832745552063,
-... |
https://github.com/huggingface/datasets/issues/5571 | load_dataset fails for JSON in windows | Hi!
You need to pass an input json file explicitly as `data_files` to `load_dataset` to avoid this error:
```python
ds = load_dataset("json", data_files=args.input_json)
```
| ### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of python file is di... | 287 | 24 | load_dataset fails for JSON in windows
### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local Py... | [
-1.2623194456100464,
-0.9738195538520813,
-0.7303214073181152,
1.5336825847625732,
-0.2054997831583023,
-1.2278145551681519,
0.10890890657901764,
-1.0818012952804565,
1.7445775270462036,
-0.730962336063385,
0.21932944655418396,
-1.6172125339508057,
0.050212424248456955,
-0.5594354867935181... |
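Putting the comment and the report together, a sketch of the working call on the Windows side; the path is the illustrative one from the report.

```python
from datasets import load_dataset

# Pass the JSON file explicitly via data_files instead of passing the path as the first argument.
ds = load_dataset("json", data_files=r"C:\Users\name\file.json")
print(ds["train"][0])
```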
https://github.com/huggingface/datasets/issues/5570 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub | Hi, thanks for the feedback! Would it help to add a tip or note saying the dataset is gated and you need to accept the license before downloading it? | ### Describe the bug
When calling ```load_dataset('imagenet-1k')```, a FileNotFoundError is raised if you are not logged in, or if you are logged in with huggingface-cli but have not accepted the licence on the Hub. There is no error once the licence is accepted.
### Steps to reproduce the bug
```
from datasets import load_dataset
imagenet =... | 288 | 29 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub
### Describe the bug
When calling ```load_dataset('imagenet-1k')``` FileNotFoundError is raised, if not logged in and if logged in with huggingface-cli but not having accepted the licence on the hub. There is no error once acce... | [
-1.2081549167633057,
-1.0083028078079224,
-0.8004522323608398,
1.5000627040863037,
-0.10674864053726196,
-1.3200321197509766,
0.05623435229063034,
-1.0695385932922363,
1.6160078048706055,
-0.853119969367981,
0.3403680622577667,
-1.7128607034683228,
-0.03753047436475754,
-0.6481027007102966... |
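A sketch of the flow implied by the exchange above: accept the licence on the dataset page first, then authenticate before loading. The split and the interactive `login()` call are assumptions, not taken from the thread.

```python
from huggingface_hub import login
from datasets import load_dataset

# 1. Accept the licence once at https://huggingface.co/datasets/imagenet-1k
# 2. Authenticate (or run `huggingface-cli login` in a terminal)
login()

# 3. The gated dataset now resolves instead of raising FileNotFoundError
imagenet = load_dataset("imagenet-1k", split="validation", streaming=True)
print(next(iter(imagenet)).keys())
```

Depending on the `datasets` version, the token may also need to be passed explicitly (`use_auth_token=True` in older releases, `token=True` in newer ones).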
https://github.com/huggingface/datasets/issues/5570 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub | The error is now more informative:
```
FileNotFoundError: Couldn't find a dataset script at /content/imagenet-1k/imagenet-1k.py or any data file in the same directory. Couldn't find 'imagenet-1k' on the Hugging Face Hub either: FileNotFoundError: Dataset 'imagenet-1k' doesn't exist on the Hub. If the repo is private ... | ### Describe the bug
When calling ```load_dataset('imagenet-1k')``` FileNotFoundError is raised, if not logged in and if logged in with huggingface-cli but not having accepted the licence on the hub. There is no error once accepting.
### Steps to reproduce the bug
```
from datasets import load_dataset
imagenet =... | 288 | 56 | load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub
### Describe the bug
When calling ```load_dataset('imagenet-1k')``` FileNotFoundError is raised, if not logged in and if logged in with huggingface-cli but not having accepted the licence on the hub. There is no error once acce... | [
-1.170973539352417,
-1.051756501197815,
-0.8071074485778809,
1.500848412513733,
-0.10347427427768707,
-1.3066271543502808,
0.05303516238927841,
-1.0711501836776733,
1.6302049160003662,
-0.8009863495826721,
0.3466123938560486,
-1.7298086881637573,
-0.04814952239394188,
-0.6833724975585938,
... |
https://github.com/huggingface/datasets/issues/5568 | dataset.to_iterable_dataset() loses useful info like dataset features | Hi ! Oh good catch. I think the features should be passed to `IterableDataset.from_generator()` in `to_iterable_dataset()` indeed.
Setting this as a good first issue if someone would like to contribute, otherwise we can take care of it :) | ### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata like the features.
These metadata are useful if you want to interleav... | 289 | 38 | dataset.to_iterable_dataset() loses useful info like dataset features
### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata l... | [
-1.2462466955184937,
-1.0350251197814941,
-0.749411404132843,
1.6629140377044678,
-0.19097687304019928,
-1.1181706190109253,
0.13851626217365265,
-1.0018445253372192,
1.6432448625564575,
-0.7175230383872986,
0.27209797501564026,
-1.637208104133606,
0.03228255361318588,
-0.643389880657196,
... |
https://github.com/huggingface/datasets/issues/5568 | dataset.to_iterable_dataset() loses useful info like dataset features | seems like the feature parameter is missing from `return IterableDataset.from_generator(Dataset._iter_shards, gen_kwargs={"shards": shards})` hence it defaults to None. | ### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata like the features.
These metadata are useful if you want to interleav... | 289 | 17 | dataset.to_iterable_dataset() loses useful info like dataset features
### Describe the bug
Hello,
I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing.
When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata l... | [
-1.2547454833984375,
-1.0431885719299316,
-0.7598384022712708,
1.689414143562317,
-0.21614834666252136,
-1.1073180437088013,
0.13116535544395447,
-1.0087391138076782,
1.6181259155273438,
-0.7153283357620239,
0.2624534070491791,
-1.636167049407959,
0.005314767360687256,
-0.6305069923400879,... |
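Until `to_iterable_dataset()` forwards the features as discussed above, a workaround sketch is to rebuild the iterable dataset through `IterableDataset.from_generator`, which does take a `features` argument; the toy columns are assumptions.

```python
from datasets import Dataset, IterableDataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

def examples():
    yield from ds  # iterating a map-style Dataset yields plain dicts

ids = IterableDataset.from_generator(examples, features=ds.features)
print(ids.features)  # preserved, unlike ds.to_iterable_dataset() before the fix
```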
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | Hi ! The indices mapping is written in the same cache directory as your dataset.
Can you run this to show your current cache directory ?
```python
print(train_dataset.cache_files)
``` | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 291 | 28 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... | [
-1.1844799518585205,
-0.8249237537384033,
-0.5923757553100586,
1.3262419700622559,
-0.041086725890636444,
-1.4175077676773071,
0.15833747386932373,
-0.9759659767150879,
1.5685051679611206,
-0.851817786693573,
0.36299389600753784,
-1.6840460300445557,
0.07002703845500946,
-0.677879512310028... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | ```
[{'filename': '.../train/dataset.arrow'}, {'filename': '.../train/dataset.arrow'}]
```
These are the actual paths where `.hf` files are stored. | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 291 | 16 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... | [
-1.1844799518585205,
-0.8249237537384033,
-0.5923757553100586,
1.3262419700622559,
-0.041086725890636444,
-1.4175077676773071,
0.15833747386932373,
-0.9759659767150879,
1.5685051679611206,
-0.851817786693573,
0.36299389600753784,
-1.6840460300445557,
0.07002703845500946,
-0.677879512310028... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | I'm not aware of any `.hf` file ? What are you referring to ?
Also the error says "Protocol unknown: parent". Is there a chance you may have ended up with a path that contains this string `parent://` ? | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 291 | 39 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... | [
-1.1844799518585205,
-0.8249237537384033,
-0.5923757553100586,
1.3262419700622559,
-0.041086725890636444,
-1.4175077676773071,
0.15833747386932373,
-0.9759659767150879,
1.5685051679611206,
-0.851817786693573,
0.36299389600753784,
-1.6840460300445557,
0.07002703845500946,
-0.677879512310028... |
https://github.com/huggingface/datasets/issues/5555 | `.shuffle` throwing error `ValueError: Protocol not known: parent` | I figured out why the issue was occurring but don't know the long-term fix.
The dataset I was trying to shuffle was loaded from a saved file which had a `::` delimiter in its filename. When I try with the exact same file without `::` in the filename, it works as expected.
The quick fix is to not use colons in the filename. But if this ... | ### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle()
File /opt/conda/envs/pytorch/lib/python3.9/site-packages/dataset... | 291 | 76 | `.shuffle` throwing error `ValueError: Protocol not known: parent`
### Describe the bug
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In [16], line 1
----> 1 train_dataset = train_dataset.shuffle(... | [
-1.1844799518585205,
-0.8249237537384033,
-0.5923757553100586,
1.3262419700622559,
-0.041086725890636444,
-1.4175077676773071,
0.15833747386932373,
-0.9759659767150879,
1.5685051679611206,
-0.851817786693573,
0.36299389600753784,
-1.6840460300445557,
0.07002703845500946,
-0.677879512310028... |
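The root cause reported above is consistent with how fsspec parses paths: `::` is its protocol-chaining separator, so a `::` inside a directory name gets interpreted as a protocol prefix. A sketch of the workaround, with made-up paths:

```python
from datasets import load_from_disk

# Avoid "::" in paths handed to `datasets` / fsspec.
# problematic: load_from_disk("checkpoints/run::2023-02-17/train")
ds = load_from_disk("checkpoints/run_2023-02-17/train")
ds = ds.shuffle(seed=42)
```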
https://github.com/huggingface/datasets/issues/5546 | Downloaded datasets do not cache at $HF_HOME | Hi ! Can you make sure you set `HF_HOME` before importing `datasets` ?
Then you can print
```python
print(datasets.config.HF_CACHE_HOME)
print(datasets.config.HF_DATASETS_CACHE)
``` | ### Describe the bug
In the huggingface course (https://huggingface.co/course/chapter3/2?fw=pt) it said that if we set HF_HOME, downloaded datasets would be cached at the specified address, but they are not. Downloaded models from checkpoint names are downloaded and cached at HF_HOME, but this is not the case for datasets, t... | 292 | 21 | Downloaded datasets do not cache at $HF_HOME
### Describe the bug
In the huggingface course (https://huggingface.co/course/chapter3/2?fw=pt) it said that if we set HF_HOME, downloaded datasets would be cached at specified address but it does not. downloaded models from checkpoint names are downloaded and cached at H... | [
-1.073030710220337,
-0.9087356328964233,
-0.6936896443367004,
1.4957488775253296,
-0.1721569001674652,
-1.316701054573059,
0.15854528546333313,
-1.050142526626587,
1.7263950109481812,
-0.8836342096328735,
0.23554116487503052,
-1.7613087892532349,
-0.05776538699865341,
-0.5205405354499817,
... |
https://github.com/huggingface/datasets/issues/5543 | the pile datasets url seems to change back | Thanks for reporting, @wjfwzzc.
I am transferring this issue to the corresponding dataset on the Hub: https://huggingface.co/datasets/bookcorpusopen/discussions/1 | ### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
... | 293 | 17 | the pile datasets url seems to change back
### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_datase... | [
-1.150075912475586,
-0.878979504108429,
-0.7697440385818481,
1.5033409595489502,
-0.10427336394786835,
-1.226611852645874,
0.10909775644540787,
-0.9541971683502197,
1.6623444557189941,
-0.7431823015213013,
0.31688979268074036,
-1.6592267751693726,
0.01862042024731636,
-0.5786975026130676,
... |
https://github.com/huggingface/datasets/issues/5543 | the pile datasets url seems to change back | Thank you. All fixes are done:
- [x] https://huggingface.co/datasets/bookcorpusopen/discussions/2
- [x] https://huggingface.co/datasets/the_pile/discussions/1
- [x] https://huggingface.co/datasets/the_pile_books3/discussions/1
- [x] https://huggingface.co/datasets/the_pile_openwebtext2/discussions/2
- [x] https://... | ### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("bookcorpusopen")
```
shows
```python3
... | 293 | 21 | the pile datasets url seems to change back
### Describe the bug
in #3627, the host url of the pile dataset became `https://mystic.the-eye.eu`. Now the new url is broken, but `https://the-eye.eu` seems to work again.
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_datase... | [
-1.145060420036316,
-0.8541693091392517,
-0.806215763092041,
1.494767665863037,
-0.08526314049959183,
-1.2706862688064575,
0.09951090812683105,
-0.9158296585083008,
1.6551194190979004,
-0.724471926689148,
0.29737499356269836,
-1.673732876777649,
-0.029381105676293373,
-0.6031084060668945,
... |
https://github.com/huggingface/datasets/issues/5541 | Flattening indices in selected datasets is extremely inefficient | Running the script above on the branch https://github.com/huggingface/datasets/pull/5542 results in the expected behaviour:
```
Num chunks for original ds: 1
Original ds save/load
save_to_disk -- RAM memory used: 0.671875 MB -- Total time: 0.255265 s
load_from_disk -- RAM memory used: 42.796875 MB -- Total time: 0... | ### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat dataset contains ChunkedArrays with as many chunks as there are rows. Thi... | 294 | 117 | Flattening indices in selected datasets is extremely inefficient
### Describe the bug
If we perform a `select` (or `shuffle`, `train_test_split`, etc.) operation on a dataset , we end up with a dataset with an `indices_table`. Currently, flattening such dataset consumes a lot of memory and the resulting flat datase... | [
-1.4144846200942993,
-0.9966012239456177,
-0.6537461876869202,
1.38670015335083,
-0.2331400364637375,
-1.186317801475525,
0.13665515184402466,
-1.0323795080184937,
1.6344138383865356,
-0.7777537107467651,
0.33987748622894287,
-1.6300874948501587,
0.049949634820222855,
-0.6063641309738159,
... |
https://github.com/huggingface/datasets/issues/5539 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number | Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:
```python
from datasets import load_dataset
import torch
dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='train')
def t(bat... | ### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/py... | 295 | 78 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib... | [
-1.2230236530303955,
-0.9111908674240112,
-0.6155933737754822,
1.4079370498657227,
-0.12640199065208435,
-1.3362057209014893,
0.1638043075799942,
-1.0651549100875854,
1.7161568403244019,
-0.7853018641471863,
0.2807149887084961,
-1.6716252565383911,
0.027124546468257904,
-0.5514797568321228... |
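To make the batch convention from the comment above concrete, a small transform sketch; it assumes the `text` caption column of that dataset and leaves torch out to stay minimal.

```python
from datasets import load_dataset

dataset = load_dataset("lambdalabs/pokemon-blip-captions", split="train")

def transform(batch):
    # `batch` is a dict of lists (one entry per example), even when a single row is accessed.
    batch["text_length"] = [len(caption) for caption in batch["text"]]
    return batch

dataset.set_transform(transform)
print(dataset[0]["text_length"])
```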
https://github.com/huggingface/datasets/issues/5539 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number | > Hi! The `set_transform` does not apply a custom formatting transform on a single example but the entire batch, so the fixed version of your transform would look as follows:
>
> ```python
> from datasets import load_dataset
> import torch
>
> dataset = load_dataset("lambdalabs/pokemon-blip-captions", split='tr... | ### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib/python3.8/site-packages/datasets/formatting/formatting.py", line 501, in format_row
return _unnest(formatted_batch)
File "<path>/lib/py... | 295 | 104 | IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
### Describe the bug
When dataset contains a 0-dim tensor, formatting.py raises a following error and fails.
```bash
Traceback (most recent call last):
File "<path>/lib... | [
-1.2193645238876343,
-0.9118183255195618,
-0.619399905204773,
1.408189058303833,
-0.13033972680568695,
-1.3387686014175415,
0.16156235337257385,
-1.0673973560333252,
1.7126332521438599,
-0.7874516844749451,
0.2819618284702301,
-1.6725255250930786,
0.026825476437807083,
-0.5507980585098267,... |
https://github.com/huggingface/datasets/issues/5538 | load_dataset in seaborn is not working for me. getting this error. | Hi! `seaborn`'s `load_dataset` pulls datasets from [here](https://github.com/mwaskom/seaborn-data) and not from our Hub, so this issue is not related to our library in any way and should be reported in their repo instead. | TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selector, req.data, headers,
1347 encode_chu... | 296 | 32 | load_dataset in seaborn is not working for me. getting this error.
TimeoutError Traceback (most recent call last)
~\anaconda3\lib\urllib\request.py in do_open(self, http_class, req, **http_conn_args)
1345 try:
-> 1346 h.request(req.get_method(), req.selec... | [
-1.3493822813034058,
-0.9833536744117737,
-0.6325235962867737,
1.399228572845459,
-0.21243597567081451,
-1.2577965259552002,
0.1318170130252838,
-1.091442346572876,
1.6390981674194336,
-0.8145342469215393,
0.25954797863960266,
-1.7373191118240356,
0.12101535499095917,
-0.5672935843467712,
... |
https://github.com/huggingface/datasets/issues/5537 | Increase speed of data files resolution | You were right, if `self.dir_cache` is not None in glob, it is exactly the same as what is returned by find, at least for all the tests we have, and some extended evaluation I did across a random sample of about 1000 datasets.
Thanks for the nice hints, and let me know if this is not exactly what we want here!
s... | Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on all the data files.
This comes from `res... | 297 | 64 | Increase speed of data files resolution
Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on a... | [
-1.1481151580810547,
-0.8770433664321899,
-0.7696927785873413,
1.5075335502624512,
-0.09976627677679062,
-1.3418065309524536,
0.14190419018268585,
-1.0838178396224976,
1.709913730621338,
-0.8657216429710388,
0.3817557394504547,
-1.6710846424102783,
0.011488673277199268,
-0.5867823362350464... |
https://github.com/huggingface/datasets/issues/5537 | Increase speed of data files resolution | I think we can make the data files resolution (significantly) faster in 2 steps:
1. `glob` calls `find` (which in turn calls `ls`), so we need `find` to be fast, and this can be achieved by fetching all the entries in a single API call and avoiding calls to `ls`. Implementing this for `HfFileSystem.find` (the one in... | Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on all the data files.
This comes from `res... | 297 | 135 | Increase speed of data files resolution
Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on a... | [
-1.130543828010559,
-0.8842692971229553,
-0.7165008783340454,
1.5434623956680298,
-0.11563576012849808,
-1.3647692203521729,
0.23286041617393494,
-1.1472439765930176,
1.7751344442367554,
-0.854800820350647,
0.3880634903907776,
-1.6561031341552734,
0.08156341314315796,
-0.6451812386512756,
... |
https://github.com/huggingface/datasets/issues/5537 | Increase speed of data files resolution | Good idea :)
For 2:
That would work ! It's also possible to have a FileSystem with a cache on `.find` and use it inside the resolver passed to `_get_data_files_patterns`. Right now they're pretty simple:
```python
# for remote repositories
resolver = partial(_resolve_single_pattern_in_dataset_repository, da... | Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on all the data files.
This comes from `res... | 297 | 53 | Increase speed of data files resolution
Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on a... | [
-1.1449564695358276,
-0.8754437565803528,
-0.7184962034225464,
1.6449583768844604,
-0.0929897353053093,
-1.3104180097579956,
0.18239636719226837,
-1.0742056369781494,
1.7528446912765503,
-0.8558838367462158,
0.36126774549484253,
-1.6449629068374634,
0.040926672518253326,
-0.598083913326263... |
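A rough sketch of the "FileSystem with a cache on `.find`" idea from the comment above; it is not the implementation that ended up in `datasets`.

```python
class CachedFindWrapper:
    """Memoize fs.find(...) so resolving many file patterns does not re-list
    the same repository over and over."""

    def __init__(self, fs):
        self.fs = fs
        self._cache = {}

    def find(self, path, **kwargs):
        key = (path, tuple(sorted(kwargs.items())))
        if key not in self._cache:
            self._cache[key] = self.fs.find(path, **kwargs)
        return self._cache[key]
```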
https://github.com/huggingface/datasets/issues/5537 | Increase speed of data files resolution | something like this maybe (with Quentin's reimplementation of `HfFilesystem.find`)?
```
@lru_cache(maxsize=None)
def _find(self, path, maxdepth=None, withdirs=False, detail=False, **kwargs):
```
In any case please let me know if I can help in any way! | Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on all the data files.
This comes from `res... | 297 | 33 | Increase speed of data files resolution
Certain datasets like `bigcode/the-stack-dedup` have so many files that loading them takes forever right from the data files resolution step.
`datasets` uses file patterns to check the structure of the repository but it takes too much time to iterate over and over again on a... | [
-1.149220585823059,
-0.8485963344573975,
-0.7264913320541382,
1.5436252355575562,
-0.17794209718704224,
-1.2910369634628296,
0.20202431082725525,
-1.0815874338150024,
1.7180505990982056,
-0.8715571761131287,
0.41410377621650696,
-1.6535228490829468,
0.03643404692411423,
-0.6431575417518616... |
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | Hi ! `enc` is not hashable:
```python
import tiktoken
from datasets.fingerprint import Hasher
enc = tiktoken.get_encoding("gpt2")
Hasher.hash(enc)
# raises TypeError: cannot pickle 'builtins.CoreBPE' object
```
It happens because it's not picklable, and because of that it's not possible to cache the result of... | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 298 | 83 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... | [
-1.2845170497894287,
-0.9991153478622437,
-0.6881342530250549,
1.4902421236038208,
-0.19394367933273315,
-1.1103838682174683,
0.09438752382993698,
-1.0431798696517944,
1.6962690353393555,
-0.7956609725952148,
0.2443753033876419,
-1.6626079082489014,
0.030678048729896545,
-0.537995517253875... |
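One workaround (not from the comment above, so treat it as an assumption) is to create the encoder inside the mapped function, so the function itself stays picklable and hashable; the dataset name is only illustrative.

```python
import tiktoken
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")

def tokenize(batch):
    # Creating the encoder here avoids capturing the unpicklable CoreBPE object in the closure.
    enc = tiktoken.get_encoding("gpt2")
    return {"ids": [enc.encode(text) for text in batch["text"]]}

ds = ds.map(tokenize, batched=True)
```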
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | @lhoestq Thank you for the explanation and advice. Will relay all of this to the repo where this (non)issue arose.
Great job with huggingface! | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 298 | 24 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... | [
-1.2881124019622803,
-1.003926396369934,
-0.6919583678245544,
1.4989726543426514,
-0.18968906998634338,
-1.1030694246292114,
0.09339943528175354,
-1.042127251625061,
1.690084457397461,
-0.7929410934448242,
0.23975349962711334,
-1.664184331893921,
0.024867696687579155,
-0.5398193597793579,
... |
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | Just a heads up that when I'm trying to use TikToken along with a given Dataset `.map()` method, I am still met with the following error:
```
File "/opt/conda/lib/python3.8/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/opt/conda/lib/python3.8/... | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 298 | 60 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... | [
-1.2856485843658447,
-1.0056039094924927,
-0.6918731331825256,
1.4934698343276978,
-0.1911541372537613,
-1.1113144159317017,
0.0905895009636879,
-1.047580599784851,
1.6914012432098389,
-0.7979883551597595,
0.24175648391246796,
-1.662598729133606,
0.026182375848293304,
-0.5417219400405884,
... |
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | @lhoestq @edhenry I am on datasets version `2.12.0`. I see the same `TypeError: cannot pickle 'builtins.CoreBPE' object` that others are seeing. | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 298 | 21 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... | [
-1.2881124019622803,
-1.003926396369934,
-0.6919583678245544,
1.4989726543426514,
-0.18968906998634338,
-1.1030694246292114,
0.09339943528175354,
-1.042127251625061,
1.690084457397461,
-0.7929410934448242,
0.23975349962711334,
-1.664184331893921,
0.024867696687579155,
-0.5398193597793579,
... |
https://github.com/huggingface/datasets/issues/5536 | Failure to hash function when using .map() | I am able to reproduce this on datasets 2.14.2. The `datasets.disable_caching()` doesn't work around it.
@lhoestq - you might want to reopen this issue. Because of this issue folks won't be able to run Karpathy's NanoGPT :(. | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and ca... | 298 | 36 | Failure to hash function when using .map()
### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle... | [
-1.287795066833496,
-1.004431128501892,
-0.6955685019493103,
1.4929314851760864,
-0.19046670198440552,
-1.1076983213424683,
0.0928802639245987,
-1.0458276271820068,
1.6926813125610352,
-0.7969526648521423,
0.2435372918844223,
-1.659211277961731,
0.030743863433599472,
-0.5391830205917358,
... |
https://github.com/huggingface/datasets/issues/5534 | map() breaks at certain dataset size when using Array3D | Hi! This code works for me locally or in Colab. What's the output of `python -c "import pyarrow as pa; print(pa.__version__)"` when you run it inside your environment? | ### Describe the bug
`map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with the following exception:
```
Traceback (most recent cal... | 299 | 28 | map() breaks at certain dataset size when using Array3D
### Describe the bug
`map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with... | [
-1.1591157913208008,
-0.9008998870849609,
-0.7034623622894287,
1.430408239364624,
-0.04434133321046829,
-1.2292879819869995,
0.06118039786815643,
-0.9924614429473877,
1.5379427671432495,
-0.7380186319351196,
0.145754873752594,
-1.6440602540969849,
-0.13232824206352234,
-0.47561711072921753... |
https://github.com/huggingface/datasets/issues/5534 | map() breaks at certain dataset size when using Array3D | Thanks for looking into this!
The output of `python -c "import pyarrow as pa; print(pa.__version__)"` is:
```
11.0.0
```
I did the following to setup the environment:
```
conda create -n datasets_debug python=3.9
conda activate datasets_debug
pip install datasets==2.9.0
```
I just tested this on another... | ### Describe the bug
`map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with the following exception:
```
Traceback (most recent cal... | 299 | 60 | map() breaks at certain dataset size when using Array3D
### Describe the bug
`map()` magically breaks when using a `Array3D` feature and mapping it. I created a very simple dummy dataset (see below). When filtering it down to 95 elements I can apply map, but it breaks when filtering it down to just 96 entries with... | [
-1.1591157913208008,
-0.9008998870849609,
-0.7034623622894287,
1.430408239364624,
-0.04434133321046829,
-1.2292879819869995,
0.06118039786815643,
-0.9924614429473877,
1.5379427671432495,
-0.7380186319351196,
0.145754873752594,
-1.6440602540969849,
-0.13232824206352234,
-0.47561711072921753... |
https://github.com/huggingface/datasets/issues/5532 | train_test_split in arrow_dataset does not ensure to keep single classes in test set | Hi! You can get this behavior by specifying `stratify_by_column="label"` in `train_test_split`.
This is the full example:
```python
import numpy as np
from datasets import Dataset, ClassLabel
data = [
{'label': 0, 'text': "example1"},
{'label': 1, 'text': "example2"},
{'label': 1, 'text': "examp... | ### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will be in the test set and thus will never be considered for training.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
... | 300 | 88 | train_test_split in arrow_dataset does not ensure to keep single classes in test set
### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, sometimes the single class will be in the test set and thus will never be considered for training.
##... | [
-1.209433674812317,
-0.9766694903373718,
-0.7226002812385559,
1.6408116817474365,
-0.22698737680912018,
-1.1188631057739258,
0.1228037104010582,
-1.078384280204773,
1.5326387882232666,
-0.7326000332832336,
0.2807154655456543,
-1.626745343208313,
-0.02218104898929596,
-0.5965240597724915,
... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | Thanks for reporting, @TJ-Solergibert.
We cannot access your Colab notebook: `There was an error loading this notebook. Ensure that the file is accessible and try again.`
Could you please make it publicly accessible?
| ### Describe the bug
Processing a dataset I already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST), I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I already tried resetting the shorter strings... | 301 | 33 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... | [
-1.2064826488494873,
-0.9189274311065674,
-0.7242937088012695,
1.4140011072158813,
-0.16993944346904755,
-1.2470320463180542,
0.10708995163440704,
-1.0634067058563232,
1.6294629573822021,
-0.7314059138298035,
0.2997436225414276,
-1.6280189752578735,
0.07319454848766327,
-0.5316691398620605... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | I swear it's public, I've checked the settings and I've been able to open it in incognito mode.
Notebook: https://colab.research.google.com/drive/1JCrS7FlGfu_kFqChMrwKZ_bpabnIMqbP?usp=sharing
Anyway, this is the code to reproduce the error:
```python3
from datasets import ClassLabel
from datasets import load... | ### Describe the bug
Processing a dataset I already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST), I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I already tried resetting the shorter strings... | 301 | 226 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... | [
-1.2144064903259277,
-0.9434266686439514,
-0.7320006489753723,
1.4167879819869995,
-0.18503841757774353,
-1.2450969219207764,
0.12323015183210373,
-1.05970299243927,
1.626576542854309,
-0.7105931639671326,
0.28908371925354004,
-1.6303064823150635,
0.06679137051105499,
-0.5276129245758057,
... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | Thanks, @TJ-Solergibert. I can access your notebook now. Maybe it was just a temporary issue.
At first sight, it seems something related to your data: maybe some of the examples do not have all the transcriptions for all the languages. Then, some of them are null when unrolled. And when trying to concatenate with th... | ### Describe the bug
Processing a dataset I already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST), I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I already tried resetting the shorter strings... | 301 | 80 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... | [
-1.2202423810958862,
-0.9123347401618958,
-0.7285102605819702,
1.371557354927063,
-0.17458994686603546,
-1.2445634603500366,
0.10292567312717438,
-1.0948625802993774,
1.637644648551941,
-0.72588050365448,
0.28091853857040405,
-1.6314622163772583,
0.06916490197181702,
-0.546938955783844,
... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | See, in this example, "nl" and "ro" transcripts are null:
```python
>>> europarl_ds["test"][:1]
{'original_speech': ['− Señor Presidente, en primer lugar, quisiera felicitar al señor Seeber por el trabajo realizado, porque en su informe se recogen muchas de las preocupaciones manifestadas en esta'],
'original_lang... | ### Describe the bug
Processing a dataset I already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST), I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I already tried resetting the shorter strings... | 301 | 458 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... | [
-1.1871079206466675,
-0.9079514145851135,
-0.7566453218460083,
1.4227498769760132,
-0.1958572268486023,
-1.2150317430496216,
0.12482152879238129,
-1.1152719259262085,
1.6279278993606567,
-0.7159287333488464,
0.2852400541305542,
-1.6237934827804565,
0.07571223378181458,
-0.5769699811935425,... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | You can fix this issue by forcing the cast of None to str by hand:
- If you replace this line:
```python
source_t += batch[src_lang]
```
- With this line (because the batch size is 1):
```python
source_t += [str(batch[src_lang][0])]
```
- Or with this line (if the batch size were larger than 1):
```python
so... | ### Describe the bug
Processing a dataset I already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST), I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I already tried resetting the shorter strings... | 301 | 63 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... | [
-1.2086602449417114,
-0.8798791766166687,
-0.7430040240287781,
1.4533145427703857,
-0.15194453299045563,
-1.268919825553894,
0.1398562639951706,
-1.0587161779403687,
1.6720575094223022,
-0.7352105379104614,
0.3331708610057831,
-1.6114805936813354,
0.10264565795660019,
-0.5713539123535156,
... |
https://github.com/huggingface/datasets/issues/5525 | TypeError: Couldn't cast array of type string to null | Problem solved! Thanks @albertvillanova, now I have even increased the batch size and it's crazy fast :rocket: ! | ### Describe the bug
Processing a dataset I already uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST), I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentioned error.
I already tried resetting the shorter strings... | 301 | 18 | TypeError: Couldn't cast array of type string to null
### Describe the bug
Processing a dataset I alredy uploaded to the Hub (https://huggingface.co/datasets/tj-solergibert/Europarl-ST) I found that for some splits and some languages (test split, source_lang = "nl") after applying a map function I get the mentione... | [
-1.2123744487762451,
-0.9240867495536804,
-0.741775393486023,
1.4224097728729248,
-0.15492932498455048,
-1.2488129138946533,
0.10072925686836243,
-1.0697872638702393,
1.6257985830307007,
-0.7128981351852417,
0.30589401721954346,
-1.6207425594329834,
0.06911633908748627,
-0.5360549688339233... |
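For readers landing at the end of this thread, a condensed sketch of the fix described a few comments above; the variable names follow the quoted snippet, and `src_lang` plus the batch layout are assumptions.

```python
def unroll(batch, src_lang="nl"):
    source_t = []
    # Cast by hand so a missing transcription becomes the string "None" instead of a null cell,
    # which is what triggered the "Couldn't cast array of type string to null" error.
    source_t += [str(text) for text in batch[src_lang]]
    return {"source_text": source_t}

# usage sketch: europarl_ds["test"].map(unroll, batched=True)
```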
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | Hi! This behavior stems from these lines:
https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/np_formatter.py#L45-L46
I agree we should preserve the original type whenever possible and downcast explicitly with a warning.
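A rough sketch of that idea (a hypothetical helper for illustration, not the actual `np_formatter` code): keep whatever dtype the stored values have by default, and only downcast when a target dtype is explicitly requested, warning when precision would be lost.

```python
import warnings
import numpy as np

def format_column(values, target_dtype=None):
    # Hypothetical helper: without an explicit target dtype, keep the dtype
    # NumPy infers from the stored values, so float64 data stays float64.
    array = np.asarray(values)
    if target_dtype is None:
        return array
    target_dtype = np.dtype(target_dtype)
    if target_dtype.itemsize < array.dtype.itemsize:
        warnings.warn(f"Downcasting {array.dtype} to {target_dtype}")
    return array.astype(target_dtype)

print(format_column([1.0, 2.0, 3.0]).dtype)             # float64
print(format_column([1.0, 2.0, 3.0], "float32").dtype)  # float32, after a warning
```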
@lhoestq Do you remember why we ... | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 302 | 38 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... | [
-1.213478684425354,
-0.9068505167961121,
-0.7341683506965637,
1.5133947134017944,
-0.2038259506225586,
-1.2343934774398804,
0.2003604620695114,
-1.0582244396209717,
1.7293082475662231,
-0.7044316530227661,
0.34583938121795654,
-1.7139561176300049,
0.08779007941484451,
-0.6016902923583984,
... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | I was also wondering why the default type logic is needed. Me just deleting it is probably too naive of a solution. | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 302 | 22 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... | [
-1.213478684425354,
-0.9068505167961121,
-0.7341683506965637,
1.5133947134017944,
-0.2038259506225586,
-1.2343934774398804,
0.2003604620695114,
-1.0582244396209717,
1.7293082475662231,
-0.7044316530227661,
0.34583938121795654,
-1.7139561176300049,
0.08779007941484451,
-0.6016902923583984,
... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | Hmm I think the idea was to end up with the usual default precision for deep learning models - no matter how the data was stored or where it comes from.
For example in NLP we store tokens using an optimized low precision to save disk space, but when we set the format to `torch` we actually need to get `int64`. Altho... | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 302 | 123 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... | [
-1.213478684425354,
-0.9068505167961121,
-0.7341683506965637,
1.5133947134017944,
-0.2038259506225586,
-1.2343934774398804,
0.2003604620695114,
-1.0582244396209717,
1.7293082475662231,
-0.7044316530227661,
0.34583938121795654,
-1.7139561176300049,
0.08779007941484451,
-0.6016902923583984,
... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | Unfortunately removing it for integers is a breaking change for most `transformers` + `datasets` users for NLP (which is a common case). Removing it for floats is a breaking change for `transformers` + `datasets` for ASR as well. And it also is a breaking change for the other users relying on this behavior.
Therefor... | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 302 | 102 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... | [
-1.213478684425354,
-0.9068505167961121,
-0.7341683506965637,
1.5133947134017944,
-0.2038259506225586,
-1.2343934774398804,
0.2003604620695114,
-1.0582244396209717,
1.7293082475662231,
-0.7044316530227661,
0.34583938121795654,
-1.7139561176300049,
0.08779007941484451,
-0.6016902923583984,
... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | @lhoestq It should be fine to remove this conversion in Datasets 3.0, no? For now, we can warn the user (with a log message) about the future change when the default type is changed. | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 302 | 34 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... | [
-1.213478684425354,
-0.9068505167961121,
-0.7341683506965637,
1.5133947134017944,
-0.2038259506225586,
-1.2343934774398804,
0.2003604620695114,
-1.0582244396209717,
1.7293082475662231,
-0.7044316530227661,
0.34583938121795654,
-1.7139561176300049,
0.08779007941484451,
-0.6016902923583984,
... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | Let's see with the transformers team if it sounds reasonable? We'd have to fix multiple example scripts though.
If it's not ok we can also explore keeping this behavior only for tokens and audio data. | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 302 | 36 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... | [
-1.213478684425354,
-0.9068505167961121,
-0.7341683506965637,
1.5133947134017944,
-0.2038259506225586,
-1.2343934774398804,
0.2003604620695114,
-1.0582244396209717,
1.7293082475662231,
-0.7044316530227661,
0.34583938121795654,
-1.7139561176300049,
0.08779007941484451,
-0.6016902923583984,
... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | IMO being coupled with Transformers can lead to unexpected behavior when one tries to use our lib without pairing it with Transformers, so I think it's still important to "fix" this, even if it means we will need to update Transformers' example scripts afterward.
| ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 302 | 44 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... | [
-1.213478684425354,
-0.9068505167961121,
-0.7341683506965637,
1.5133947134017944,
-0.2038259506225586,
-1.2343934774398804,
0.2003604620695114,
-1.0582244396209717,
1.7293082475662231,
-0.7044316530227661,
0.34583938121795654,
-1.7139561176300049,
0.08779007941484451,
-0.6016902923583984,
... |
https://github.com/huggingface/datasets/issues/5517 | `with_format("numpy")` silently downcasts float64 to float32 features | For others that run into the same issue: A temporary workaround for me is this:
```python
import numpy as np

def numpy_transform(batch):
    # np.asarray does not force a dtype, so float64 columns are not downcast
    return {key: np.asarray(val) for key, val in batch.items()}
dataset = dataset.with_transform(numpy_transform)
``` | ### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict({'a': [1.0, 2.0, 3.0]}).with_format("numpy")
print(... | 302 | 30 | `with_format("numpy")` silently downcasts float64 to float32 features
### Describe the bug
When I create a dataset with a `float64` feature, then apply numpy formatting the returned numpy arrays are silently downcasted to `float32`.
### Steps to reproduce the bug
```python
import datasets
dataset = dataset... | [
-1.213478684425354,
-0.9068505167961121,
-0.7341683506965637,
1.5133947134017944,
-0.2038259506225586,
-1.2343934774398804,
0.2003604620695114,
-1.0582244396209717,
1.7293082475662231,
-0.7044316530227661,
0.34583938121795654,
-1.7139561176300049,
0.08779007941484451,
-0.6016902923583984,
... |
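To make the difference concrete, here is a small check (assuming `datasets` and NumPy are installed) comparing the transform-based workaround above with the built-in `"numpy"` format; the exact dtype returned by `with_format` depends on the `datasets` version being discussed:

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"a": [1.0, 2.0, 3.0]})

# Workaround: a plain np.asarray transform applies no format-specific default dtype
with_transform_ds = ds.with_transform(
    lambda batch: {key: np.asarray(val) for key, val in batch.items()}
)
print(with_transform_ds[:]["a"].dtype)  # expected: float64

# Built-in numpy formatting, reported in this issue to downcast floats
with_format_ds = ds.with_format("numpy")
print(with_format_ds[:]["a"].dtype)  # float32 in the datasets version discussed here
```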
https://github.com/huggingface/datasets/issues/5514 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file` | Hi, thanks for noticing this! We can't just remove the cache control as this allows us to control where the arrow files generated by the ops are written (cached on disk if enabled or a temporary directory if disabled). The right way to address this inconsistency would be by having `load_from_cache_file=None` by default... | ### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`:
```
load_... | 303 | 54 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file`
### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documenta... | [
-1.0274901390075684,
-0.8360885381698608,
-0.7225093841552734,
1.6265997886657715,
-0.13500112295150757,
-1.3072118759155273,
0.2360895425081253,
-1.1024781465530396,
1.8123488426208496,
-0.8158617615699768,
0.4664093255996704,
-1.6378223896026611,
0.08813241124153137,
-0.6918321847915649,... |
https://github.com/huggingface/datasets/issues/5514 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file` | Hi! Yes, this seems more plausible. I can implement that. One last thing is the type annotation `load_from_cache_file: bool = None`, which I would then change to `load_from_cache_file: Optional[bool] = None` (a sketch of the resulting resolution logic follows after this row). | ### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documentation (and source) states for the parameter `load_from_cache_file`:
```
load_... | 303 | 31 | Improve inconsistency of `Dataset.map` interface for `load_from_cache_file`
### Feature request
1. Replace the `load_from_cache_file` default value to `True`.
2. Remove or alter checks from `is_caching_enabled` logic.
### Motivation
I stumbled over an inconsistency in the `Dataset.map` interface. The documenta... | [
-1.0353375673294067,
-0.8243843913078308,
-0.7398462295532227,
1.6503654718399048,
-0.12476044148206711,
-1.313510537147522,
0.28688573837280273,
-1.0889464616775513,
1.8468552827835083,
-0.8350455164909363,
0.4832695722579956,
-1.6535820960998535,
0.07189057022333145,
-0.7077955603599548,... |
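A sketch of the resolution logic that the `Optional[bool]` default enables (an illustrative helper, not the actual `datasets` source): `None` defers to the global caching switch, while an explicit `True`/`False` from the caller always wins.

```python
from typing import Optional
import datasets

def resolve_load_from_cache_file(load_from_cache_file: Optional[bool] = None) -> bool:
    # Illustrative helper, not the real datasets implementation.
    # None means "not specified": fall back to the library-wide caching setting.
    if load_from_cache_file is None:
        return datasets.is_caching_enabled()
    # An explicit True/False overrides the global switch.
    return load_from_cache_file

print(resolve_load_from_cache_file())       # follows datasets.is_caching_enabled()
print(resolve_load_from_cache_file(False))  # always False, regardless of the global switch
```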
https://github.com/huggingface/datasets/issues/5513 | Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name? | Hi! Let's not do this - renaming it would be a breaking change, and going through the deprecation cycle is only worth it if it improves user experience. | Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type` which is a Python reserved name as you may already know, shouldn't that be renamed to `format_type` before the 3.0.0 is released?
Just wanted to get your inp... | 304 | 28 | Some functions use a param named `type` shouldn't that be avoided since it's a Python reserved name?
Hi @mariosasko, @lhoestq, or whoever reads this! :)
After going through `ArrowDataset.set_format` I found out that the `type` param is actually named `type` which is a Python reserved name as you may already know, ... | [
-1.181472659111023,
-0.9077042937278748,
-0.7534422874450684,
1.5034836530685425,
-0.23491701483726501,
-1.2804064750671387,
0.09039685130119324,
-1.043567180633545,
1.6588337421417236,
-0.9018839597702026,
0.48319265246391296,
-1.7961487770080566,
0.07059219479560852,
-0.5552337765693665,... |