html_url (string, 51–51) | title (string, 6–280) | comments (string, 67–24.7k) | body (string, 51–36.2k) | comment_length (int64, 16–1.45k) | text (string, 159–38.3k)
---|---|---|---|---|---
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | I guess the cache got corrupted due to a previous issue with Google Drive service.
The cache should be regenerated, e.g. by passing `download_mode="force_redownload"`.
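A minimal sketch of what that call could look like (config name and split are illustrative):
```python
from datasets import load_dataset

# force a fresh download instead of reusing the possibly corrupted cache
ds = load_dataset("cnn_dailymail", "3.0.0", split="train", download_mode="force_redownload")
```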
CC: @severo | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 26 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
I guess the cache got corrupted due to a previous issue with Google Drive service.
The cache should be regenerated, e.g. by passing `download_mode="force_redownload"`.
CC: @severo |
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mode="force_redownload"` doesn't help. But yes indeed the cache must be refreshed.
The CNN Dailymail dataset is currently hosted on Google Drive, which is an unreliable host and we've had many issues with it. Unless we find another, more reliable host for the data, we will keep running into issues from time to time.
At Hugging Face we're not allowed to host the CNN Dailymail data ourselves, AFAIK | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 81 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mode="force_redownload"` doesn't help. But yes indeed the cache must be refreshed.
The CNN Dailymail dataset is currently hosted on Google Drive, which is an unreliable host and we've had many issues with it. Unless we find another, more reliable host for the data, we will keep running into issues from time to time.
At Hugging Face we're not allowed to host the CNN Dailymail data ourselves, AFAIK |
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | I removed the dataset-viewer tag, since it's more an issue with the hosting on Google Drive | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 16 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
I removed the dataset-viewer tag, since it's more an issue with the hosting on Google Drive |
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | Sounds good. I was looking for another host of this dataset but couldn't find any (yet) | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 16 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Sounds good. I was looking for another host of this dataset but couldn't find any (yet) |
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | It seems like the issue is with the streaming mode, not with the hosting:
```python
>>> import datasets
>>> dataset = datasets.load_dataset('cnn_dailymail', name="3.0.0", split="train", streaming=True, download_mode="force_redownload")
Downloading builder script: 9.35kB [00:00, 10.2MB/s]
Downloading metadata: 9.50kB [00:00, 12.2MB/s]
>>> len(list(dataset))
0
>>> dataset = datasets.load_dataset('cnn_dailymail', name="3.0.0", split="train", streaming=False)
Reusing dataset cnn_dailymail (/home/slesage/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234)
>>> len(dataset)
287113
```
Note, in particular, that streaming mode fails silently, returning 0 rows, whereas I would have expected an exception instead. The result is that the dataset viewer shows `No data` instead of a detailed error.
<img width="1511" alt="Capture dβeΜcran 2022-04-12 aΜ 11 50 46" src="https://user-images.githubusercontent.com/1676121/162935341-d50f1e73-d053-41d4-917f-e79708a0ca23.png">
| ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 101 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
It seems like the issue is with the streaming mode, not with the hosting:
```python
>>> import datasets
>>> dataset = datasets.load_dataset('cnn_dailymail', name="3.0.0", split="train", streaming=True, download_mode="force_redownload")
Downloading builder script: 9.35kB [00:00, 10.2MB/s]
Downloading metadata: 9.50kB [00:00, 12.2MB/s]
>>> len(list(dataset))
0
>>> dataset = datasets.load_dataset('cnn_dailymail', name="3.0.0", split="train", streaming=False)
Reusing dataset cnn_dailymail (/home/slesage/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234)
>>> len(dataset)
287113
```
Note, in particular, that streaming mode fails silently, returning 0 rows, whereas I would have expected an exception instead. The result is that the dataset viewer shows `No data` instead of a detailed error.
<img width="1511" alt="Capture dβeΜcran 2022-04-12 aΜ 11 50 46" src="https://user-images.githubusercontent.com/1676121/162935341-d50f1e73-d053-41d4-917f-e79708a0ca23.png">
|
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | Well this is because the host (Google Drive) returns a document that is not the actual data, but an error page | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 21 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Well this is because the host (Google Drive) returns a document that is not the actual data, but an error page |
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | Yes, it definitely should! I don't have the bandwidth to work on this right now, though | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 17 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Yes, it definitely should! I don't have the bandwidth to work on this right now, though |
https://github.com/huggingface/datasets/issues/3969 | Cannot preview cnn_dailymail dataset | Indeed, streaming was not supported: tgz archives were not properly iterated.
I've opened a PR to support streaming.
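For context, the usual streaming-friendly pattern for tgz archives in a dataset script is to iterate them with `dl_manager.iter_archive`; a rough sketch under that assumption (class name, URL, and field names are illustrative, not the actual PR):
```python
import datasets

class CnnDailymail(datasets.GeneratorBasedBuilder):  # hypothetical minimal builder
    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"article": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        archive = dl_manager.download("https://example.com/cnn_stories.tgz")  # hypothetical URL
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path_inside_archive, file_object) pairs and also works in streaming mode
        for path, f in files:
            yield path, {"article": f.read().decode("utf-8")}
```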
However, keep in mind that Google Drive will keep generating issues from time to time, such as 403 errors, etc. | ## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 35 | Cannot preview cnn_dailymail dataset
## Dataset viewer issue for '*cnn_dailymail*'
**Link:** https://huggingface.co/datasets/cnn_dailymail
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Indeed, streaming was not supported: tgz archives were not properly iterated.
I've opened a PR to support streaming.
However, keep in mind that Google Drive will keep generating issues from time to time, such as 403 errors, etc. |
https://github.com/huggingface/datasets/issues/3968 | Cannot preview 'indonesian-nlp/eli5_id' dataset | Hi @cahya-wirawan, thanks for reporting.
Your dataset is working OK in streaming mode:
```python
In [1]: from datasets import load_dataset
...: ds = load_dataset("indonesian-nlp/eli5_id", split="train", streaming=True)
...: item = next(iter(ds))
...: item
Using custom data configuration indonesian-nlp--eli5_id-9fe728a7e760fb7b
Out[1]:
{'q_id': '1oy5tc',
'title': 'dalam sepak bola apa gunanya menyia-nyiakan dua permainan pertama dengan terburu-buru - di tengah - bukan permainan terburu-buru biasa saya mendapatkannya',
'selftext': '',
'document': '',
'subreddit': 'explainlikeimfive',
'answers': {'a_id': ['ccwtgnz', 'ccwtmho', 'ccwt946', 'ccwvj0u'],
'text': ['Jaga pertahanan tetap jujur, rasakan operan terburu-buru, buka permainan yang lewat. Pelanggaran yang terlalu satu dimensi akan gagal. Dan mereka yang bergegas ke tengah kadang-kadang dapat dibuka lebar-lebar untuk ukuran yard yang besar.',
'Jika Anda melempar bola sepanjang waktu, maka pertahanan akan beradaptasi untuk selalu menutupi umpan. Dengan melakukan permainan lari sederhana sesekali, Anda memaksa pertahanan untuk tetap dekat dan menjaga dari lari. Terkadang, pelanggaran dapat membuat pertahanan lengah dengan berpura-pura berlari dan membebaskan penerima mereka. Selain itu, Anda tidak perlu mendapatkan yard besar di setiap permainan. Terkadang, paling baik mendapatkan beberapa yard sekaligus. Selama Anda mendapatkan yang pertama, Anda dalam kondisi yang baik.',
'Dalam kebanyakan kasus, O-Line seharusnya membuat lubang untuk dilalui kembali. Jika Anda menjalankan terlalu banyak permainan ke luar / melempar, pertahanan akan mengejar. Juga, 2 permainan 5 yard memberi Anda satu set down baru.',
'Saya Anda tidak suka jenis drama itu, tonton CFL. Kami hanya mendapatkan 3 down sehingga Anda tidak bisa menyia-nyiakannya. Lebih banyak lagi yang lewat.'],
'score': [3, 2, 2, 2]},
'title_urls': {'url': []},
'selftext_urls': {'url': []},
'answers_urls': {'url': []}}
```
Therefore, it should be properly rendered in the previewer. Let me ping @severo to have a look at it. | ## Dataset viewer issue for '*indonesian-nlp/eli5_id*'
**Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id
I can not see the dataset preview.
```
Server Error
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset ? Yes
| 271 | Cannot preview 'indonesian-nlp/eli5_id' dataset
## Dataset viewer issue for '*indonesian-nlp/eli5_id*'
**Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id
I can not see the dataset preview.
```
Server Error
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset ? Yes
Hi @cahya-wirawan, thanks for reporting.
Your dataset is working OK in streaming mode:
```python
In [1]: from datasets import load_dataset
...: ds = load_dataset("indonesian-nlp/eli5_id", split="train", streaming=True)
...: item = next(iter(ds))
...: item
Using custom data configuration indonesian-nlp--eli5_id-9fe728a7e760fb7b
Out[1]:
{'q_id': '1oy5tc',
'title': 'dalam sepak bola apa gunanya menyia-nyiakan dua permainan pertama dengan terburu-buru - di tengah - bukan permainan terburu-buru biasa saya mendapatkannya',
'selftext': '',
'document': '',
'subreddit': 'explainlikeimfive',
'answers': {'a_id': ['ccwtgnz', 'ccwtmho', 'ccwt946', 'ccwvj0u'],
'text': ['Jaga pertahanan tetap jujur, rasakan operan terburu-buru, buka permainan yang lewat. Pelanggaran yang terlalu satu dimensi akan gagal. Dan mereka yang bergegas ke tengah kadang-kadang dapat dibuka lebar-lebar untuk ukuran yard yang besar.',
'Jika Anda melempar bola sepanjang waktu, maka pertahanan akan beradaptasi untuk selalu menutupi umpan. Dengan melakukan permainan lari sederhana sesekali, Anda memaksa pertahanan untuk tetap dekat dan menjaga dari lari. Terkadang, pelanggaran dapat membuat pertahanan lengah dengan berpura-pura berlari dan membebaskan penerima mereka. Selain itu, Anda tidak perlu mendapatkan yard besar di setiap permainan. Terkadang, paling baik mendapatkan beberapa yard sekaligus. Selama Anda mendapatkan yang pertama, Anda dalam kondisi yang baik.',
'Dalam kebanyakan kasus, O-Line seharusnya membuat lubang untuk dilalui kembali. Jika Anda menjalankan terlalu banyak permainan ke luar / melempar, pertahanan akan mengejar. Juga, 2 permainan 5 yard memberi Anda satu set down baru.',
'Saya Anda tidak suka jenis drama itu, tonton CFL. Kami hanya mendapatkan 3 down sehingga Anda tidak bisa menyia-nyiakannya. Lebih banyak lagi yang lewat.'],
'score': [3, 2, 2, 2]},
'title_urls': {'url': []},
'selftext_urls': {'url': []},
'answers_urls': {'url': []}}
```
Therefore, it should be properly rendered in the previewer. Let me ping @severo to have a look at it. |
https://github.com/huggingface/datasets/issues/3968 | Cannot preview 'indonesian-nlp/eli5_id' dataset | Thanks @albertvillanova for checking it. Btw, I have another dataset, indonesian-nlp/lfqa_id, which has the same issue. However, this dataset is still private; could that be the reason why the preview doesn't work? | ## Dataset viewer issue for '*indonesian-nlp/eli5_id*'
**Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id
I can not see the dataset preview.
```
Server Error
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset ? Yes
| 31 | Cannot preview 'indonesian-nlp/eli5_id' dataset
## Dataset viewer issue for '*indonesian-nlp/eli5_id*'
**Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id
I can not see the dataset preview.
```
Server Error
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset ? Yes
Thanks @albertvillanova for checking it. Btw, I have another dataset, indonesian-nlp/lfqa_id, which has the same issue. However, this dataset is still private; could that be the reason why the preview doesn't work? |
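As a side note, loading a private Hub dataset locally requires authentication; a minimal sketch assuming access to the repo (datasets 2.x API):
```python
from datasets import load_dataset

# use_auth_token=True reads the token saved by `huggingface-cli login`
ds = load_dataset("indonesian-nlp/lfqa_id", split="train", streaming=True, use_auth_token=True)
```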
https://github.com/huggingface/datasets/issues/3965 | TypeError: Couldn't cast array of type for JSONLines dataset | Hi @lewtun, thanks for reporting.
It seems that our library fails to infer the dtype of the following columns:
- `milestone`
- `performed_via_github_app`
(and assigns them `null` dtype). | ## Describe the bug
One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below).
This reminds me a bit of #2799 where one can load the dataset in `pandas` but not in `datasets` and perhaps increasing the `block_size` is needed again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from huggingface_hub import hf_hub_url
import pandas as pd
# returns 'https://huggingface.co/datasets/Evan/spaCy-github-issues/resolve/main/spacy-issues.jsonl'
data_files = hf_hub_url(repo_id="Evan/spaCy-github-issues", filename="spacy-issues.jsonl", repo_type="dataset")
# throws TypeError: Couldn't cast array of type
dset = load_dataset("json", data_files=data_files, split="test")
# no problem with pandas - note this takes a while as the file is >2GB
df = pd.read_json(data_files, orient="records", lines=True)
df.head()
```
## Expected results
I can load any line-separated JSON file, similar to pandas.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 683, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 1136, in _prepare_split
writer.write_table(table)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 511, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1121, in table_cast
return cast_table_to_features(table, Features.from_arrow_schema(schema))
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in cast_table_to_features
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1086, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 920, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1019, in array_cast
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
TypeError: Couldn't cast array of type
struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: timestamp[s], updated_at: timestamp[s], due_on: null, closed_at: timestamp[s]>
to
null
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| 27 | TypeError: Couldn't cast array of type for JSONLines dataset
## Describe the bug
One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below).
This reminds me a bit of #2799 where one can load the dataset in `pandas` but not in `datasets` and perhaps increasing the `block_size` is needed again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from huggingface_hub import hf_hub_url
import pandas as pd
# returns 'https://huggingface.co/datasets/Evan/spaCy-github-issues/resolve/main/spacy-issues.jsonl'
data_files = hf_hub_url(repo_id="Evan/spaCy-github-issues", filename="spacy-issues.jsonl", repo_type="dataset")
# throws TypeError: Couldn't cast array of type
dset = load_dataset("json", data_files=data_files, split="test")
# no problem with pandas - note this takes a while as the file is >2GB
df = pd.read_json(data_files, orient="records", lines=True)
df.head()
```
## Expected results
I can load any line-separated JSON file, similar to pandas.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 683, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 1136, in _prepare_split
writer.write_table(table)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 511, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1121, in table_cast
return cast_table_to_features(table, Features.from_arrow_schema(schema))
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in cast_table_to_features
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1086, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 920, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1019, in array_cast
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
TypeError: Couldn't cast array of type
struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: timestamp[s], updated_at: timestamp[s], due_on: null, closed_at: timestamp[s]>
to
null
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
Hi @lewtun, thanks for reporting.
It seems that our library fails to infer the dtype of the following columns:
- `milestone`
- `performed_via_github_app`
(and assigns them `null` dtype). |
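A possible workaround, sketched under the assumption that these mostly-null columns can simply be stringified, is to let pandas parse the file and build the `Dataset` from the resulting DataFrame:
```python
import pandas as pd
from datasets import Dataset

# path is illustrative; in the reproduction above it comes from hf_hub_url
df = pd.read_json("spacy-issues.jsonl", orient="records", lines=True)
# cast the columns whose dtype cannot be inferred (mostly null / nested) to strings
df["milestone"] = df["milestone"].astype(str)
df["performed_via_github_app"] = df["performed_via_github_app"].astype(str)
ds = Dataset.from_pandas(df)
```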
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:
```python
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}
>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')
```
Let us know if that resolves the issue. | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 40 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:
```python
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}
>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')
```
Let us know if that resolves the issue. |
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | > Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:
>
> ```python
> >>> from datasets import load_dataset
> >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}
> >>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')
> ```
>
> Let us know if that resolves the issue.
Sorry for the late reply.
Thanks a lot! It worked for me. But it seems much slower than before, and now it gets stuck...
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}
>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')
Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1281167/1281167 [00:02<00:00, 437283.97it/s]
Resolving data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 50001/50001 [00:00<00:00, 89094.29it/s]
Using custom data configuration default-baebca6347576b33
Downloading and preparing dataset image_folder/default to ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091...
Downloading data files #0: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82289.56obj/s]
Downloading data files #1: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 73559.11obj/s]
Downloading data files #2: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 81600.46obj/s]
Downloading data files #3: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 79691.56obj/s]
Downloading data files #4: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82341.37obj/s]
Downloading data files #5: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 75784.46obj/s]
Downloading data files #6: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 81466.18obj/s]
Downloading data files #7: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82320.27obj/s]
Downloading data files #8: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 78094.00obj/s]
Downloading data files #9: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84057.59obj/s]
Downloading data files #10: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 83082.31obj/s]
Downloading data files #11: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 79944.21obj/s]
Downloading data files #12: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84569.77obj/s]
Downloading data files #13: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84949.63obj/s]
Downloading data files #14: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 80666.53obj/s]
Downloading data files #15: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80072/80072 [00:01<00:00, 76723.20obj/s]
^[[Bloading data files #8: 94%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 75061/80073 [00:00<00:00, 82609.89obj/s]
Downloading data files #9: 85%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 68120/80073 [00:00<00:00, 83868.54obj/s]
Downloading data files #9: 96%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 76784/80073 [00:00<00:00, 84722.34obj/s]
Downloading data files #10: 75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 59995/80073 [00:00<00:00, 84148.19obj/s]
Downloading data files #10: 97%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77412/80073 [00:00<00:00, 85724.53obj/s]
Downloading data files #11: 71%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 57032/80073 [00:00<00:00, 79930.58obj/s]
Downloading data files #11: 92%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 73277/80073 [00:00<00:00, 78091.27obj/s]
Downloading data files #12: 86%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 69125/80073 [00:00<00:00, 84723.02obj/s]
Downloading data files #12: 97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77803/80073 [00:00<00:00, 85351.59obj/s]
Downloading data files #13: 75%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 60356/80073 [00:00<00:00, 84833.35obj/s]
Downloading data files #13: 97%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77368/80073 [00:00<00:00, 84475.10obj/s]
Downloading data files #14: 72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 57751/80073 [00:00<00:00, 80727.33obj/s]
Downloading data files #14: 92%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 74022/80073 [00:00<00:00, 78703.16obj/s]
Downloading data files #15: 78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 62724/80072 [00:00<00:00, 78387.33obj/s]
Downloading data files #15: 99%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 78933/80072 [00:01<00:00, 79353.63obj/s]
``` | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 378 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
> Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:
>
> ```python
> >>> from datasets import load_dataset
> >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}
> >>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')
> ```
>
> Let us know if that resolves the issue.
Sorry for the late reply.
Thanks a lot! It worked for me. But it seems much slower than before, and now it gets stuck...
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}
>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')
Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1281167/1281167 [00:02<00:00, 437283.97it/s]
Resolving data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 50001/50001 [00:00<00:00, 89094.29it/s]
Using custom data configuration default-baebca6347576b33
Downloading and preparing dataset image_folder/default to ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091...
Downloading data files #0: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82289.56obj/s]
Downloading data files #1: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 73559.11obj/s]
Downloading data files #2: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 81600.46obj/s]
Downloading data files #3: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 79691.56obj/s]
Downloading data files #4: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82341.37obj/s]
Downloading data files #5: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 75784.46obj/s]
Downloading data files #6: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 81466.18obj/s]
Downloading data files #7: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 82320.27obj/s]
Downloading data files #8: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 78094.00obj/s]
Downloading data files #9: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84057.59obj/s]
Downloading data files #10: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 83082.31obj/s]
Downloading data files #11: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:01<00:00, 79944.21obj/s]
Downloading data files #12: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84569.77obj/s]
Downloading data files #13: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 84949.63obj/s]
Downloading data files #14: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80073/80073 [00:00<00:00, 80666.53obj/s]
Downloading data files #15: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 80072/80072 [00:01<00:00, 76723.20obj/s]
^[[Bloading data files #8: 94%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 75061/80073 [00:00<00:00, 82609.89obj/s]
Downloading data files #9: 85%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 68120/80073 [00:00<00:00, 83868.54obj/s]
Downloading data files #9: 96%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 76784/80073 [00:00<00:00, 84722.34obj/s]
Downloading data files #10: 75%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 59995/80073 [00:00<00:00, 84148.19obj/s]
Downloading data files #10: 97%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77412/80073 [00:00<00:00, 85724.53obj/s]
Downloading data files #11: 71%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 57032/80073 [00:00<00:00, 79930.58obj/s]
Downloading data files #11: 92%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 73277/80073 [00:00<00:00, 78091.27obj/s]
Downloading data files #12: 86%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 69125/80073 [00:00<00:00, 84723.02obj/s]
Downloading data files #12: 97%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77803/80073 [00:00<00:00, 85351.59obj/s]
Downloading data files #13: 75%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 60356/80073 [00:00<00:00, 84833.35obj/s]
Downloading data files #13: 97%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 77368/80073 [00:00<00:00, 84475.10obj/s]
Downloading data files #14: 72%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 57751/80073 [00:00<00:00, 80727.33obj/s]
Downloading data files #14: 92%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 74022/80073 [00:00<00:00, 78703.16obj/s]
Downloading data files #15: 78%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 62724/80072 [00:00<00:00, 78387.33obj/s]
Downloading data files #15: 99%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ | 78933/80072 [00:01<00:00, 79353.63obj/s]
``` |
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs. | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 21 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs. |
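A minimal sketch of that tip applied to the `imagefolder` call from this thread (paths taken from the reproduction above, datasets 2.x API):
```python
from datasets import load_dataset

data_files = {
    "train": ["/ssd/datasets/imagenet/pytorch/train/**"],
    "validation": ["/ssd/datasets/imagenet/pytorch/val/**"],
}
# skip checksum/size verification to speed up preparation of a large local dataset
ds = load_dataset("imagefolder", data_files=data_files, task="image-classification",
                  ignore_verifications=True)
```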
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | > You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.
Thanks! It worked well. | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 25 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.
Thanks! It worked well. |
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | > You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.
I find that the current `load_dataset` still loads ImageNet slowly, even with `ignore_verifications=True`.
The first load takes about 20 minutes on my servers.
```
real 19m23.023s
user 21m18.360s
sys 7m59.080s
```
Reusing it a second time takes about 15 minutes on my servers.
```
real 15m20.735s
user 12m22.979s
sys 5m46.960s
```
I think it's much too slow; is there another way to make it faster? | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 83 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.
I find that the current `load_dataset` still loads ImageNet slowly, even with `ignore_verifications=True`.
The first load takes about 20 minutes on my servers.
```
real 19m23.023s
user 21m18.360s
sys 7m59.080s
```
Reusing it a second time takes about 15 minutes on my servers.
```
real 15m20.735s
user 12m22.979s
sys 5m46.960s
```
I think it's much too slow; is there another way to make it faster? |
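For reference, the same measurement can be taken from Python instead of the shell's `time` builtin. A minimal sketch follows; it assumes the `data_files` dict from the issue body is already defined, and the second run should mostly reuse the cache in `./`:

```python
import time
from datasets import load_dataset

start = time.perf_counter()
ds = load_dataset(
    "nateraw/image-folder",
    data_files=data_files,          # assumed to be defined as in the issue body
    cache_dir="./",
    task="image-classification",
    ignore_verifications=True,
)
print(f"load_dataset took {time.perf_counter() - start:.1f} s")
```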
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | And in transformers, in the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes? Like the `collate_fn`:
```python
def collate_fn(examples):
pixel_values = torch.stack([example["pixel_values"] for example in examples])
labels = torch.tensor([example["labels"] for example in examples])
return {"pixel_values": pixel_values, "labels": labels}
```
How do I know the keys of an example? | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 45 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
And in transformers, in the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes? Like the `collate_fn`:
```python
def collate_fn(examples):
pixel_values = torch.stack([example["pixel_values"] for example in examples])
labels = torch.tensor([example["labels"] for example in examples])
return {"pixel_values": pixel_values, "labels": labels}
```
How do I know the keys of an example? |
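Regarding the question just above about the keys of an example, a small sketch (assuming `ds` is the `DatasetDict` produced by the `load_dataset` call in the issue body):

```python
# The keys of an example are the dataset's columns.
print(ds["train"].column_names)   # ['image', 'labels'] for this image-folder dataset
example = ds["train"][0]          # a single example is a plain dict
print(example.keys())             # dict_keys(['image', 'labels'])
```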
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | Is loading the image files slow because multiple processes load files at the same time? | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 17 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
Is loading the image files slow because multiple processes load files at the same time? |
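The thread does not answer this question directly. For what it is worth, more recent `datasets` releases let you parallelize dataset preparation explicitly by passing `num_proc` to `load_dataset`; a sketch is shown below (whether it helps depends on the installed version and on disk throughput, and the `imagefolder` loader here is the built-in successor of `nateraw/image-folder`):

```python
from datasets import load_dataset

ds = load_dataset(
    "imagefolder",                              # built-in image-folder loader in recent versions
    data_dir="/ssd/datasets/imagenet/pytorch",  # same root as in the issue body
    num_proc=8,                                 # generate the dataset with 8 worker processes
)
```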
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.
> And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`
>
> ```python
> def collate_fn(examples):
> pixel_values = torch.stack([example["pixel_values"] for example in examples])
> labels = torch.tensor([example["labels"] for example in examples])
> return {"pixel_values": pixel_values, "labels": labels}
> ```
>
> How to know the keys of example?
What do you mean by "could you make some changes"? The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.
| When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 125 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.
> And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`
>
> ```python
> def collate_fn(examples):
> pixel_values = torch.stack([example["pixel_values"] for example in examples])
> labels = torch.tensor([example["labels"] for example in examples])
> return {"pixel_values": pixel_values, "labels": labels}
> ```
>
> How to know the keys of example?
What do you mean by "could you make some changes"? The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.
|
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | > Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.
>
> > And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`
> > ```python
> > def collate_fn(examples):
> > pixel_values = torch.stack([example["pixel_values"] for example in examples])
> > labels = torch.tensor([example["labels"] for example in examples])
> > return {"pixel_values": pixel_values, "labels": labels}
> > ```
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > How to know the keys of example?
>
> What do you mean by "could you make some changes".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.
Thanks for your reply!
1. I did not record the second output, so I ran it again.
```
(merak) txacs@master:/dat/txacs/test$ time python test.py
Resolving data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1281167/1281167 [00:02<00:00, 469497.89it/s]
Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 50001/50001 [00:00<00:00, 70123.73it/s]
Using custom data configuration default-baebca6347576b33
Reusing dataset image_folder (./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091)
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:10<00:00, 5.37s/it]
Loading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-cd3fbdc025e03f8c.arrow
Loading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-b5a9de701bbdbb2b.arrow
DatasetDict({
train: Dataset({
features: ['image', 'labels'],
num_rows: 1281167
})
validation: Dataset({
features: ['image', 'labels'],
num_rows: 50000
})
})
real 10m10.413s
user 9m33.195s
sys 2m47.528s
```
Although it took less time than the last run, it is still slow.
2. Sorry for my poor wording. I solved it by updating to the new script 'run_image_classification.py'. | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 269 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
> Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.
>
> > And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`
> > ```python
> > def collate_fn(examples):
> > pixel_values = torch.stack([example["pixel_values"] for example in examples])
> > labels = torch.tensor([example["labels"] for example in examples])
> > return {"pixel_values": pixel_values, "labels": labels}
> > ```
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > How to know the keys of example?
>
> What do you mean by "could you make some changes".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.
Thanks for your reply!
1. I did not record the second output, so I ran it again.
```
(merak) txacs@master:/dat/txacs/test$ time python test.py
Resolving data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1281167/1281167 [00:02<00:00, 469497.89it/s]
Resolving data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 50001/50001 [00:00<00:00, 70123.73it/s]
Using custom data configuration default-baebca6347576b33
Reusing dataset image_folder (./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091)
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:10<00:00, 5.37s/it]
Loading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-cd3fbdc025e03f8c.arrow
Loading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-b5a9de701bbdbb2b.arrow
DatasetDict({
train: Dataset({
features: ['image', 'labels'],
num_rows: 1281167
})
validation: Dataset({
features: ['image', 'labels'],
num_rows: 50000
})
})
real 10m10.413s
user 9m33.195s
sys 2m47.528s
```
Although it took less time than the last run, it is still slow.
2. Sorry for my poor wording. I solved it by updating to the new script 'run_image_classification.py'. |
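Not suggested in the thread, but one way to avoid paying the file-resolution cost on every run is to save the prepared dataset once and reload it directly afterwards. A sketch using the standard `datasets` API (the target path is made up):

```python
from datasets import load_from_disk

# One-off: persist the prepared DatasetDict after the first (slow) load.
ds.save_to_disk("/ssd/datasets/imagenet_prepared")

# Later runs: skip load_dataset (and its data-file resolution) entirely.
ds = load_from_disk("/ssd/datasets/imagenet_prepared")
```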
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | Thanks for rerunning the code to record the output. Is it the `"Resolving data files"` part on your machine that takes a long time to complete, or is it `"Loading cached processed dataset at ..."`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue. | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 64 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
Thanks for rerunning the code to record the output. Is it the `"Resolving data files"` part on your machine that takes a long time to complete, or is it `"Loading cached processed dataset at ..."`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue. |
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | > Thanks for rerunning the code to record the output. Is it the `"Resolving data files"` part on your machine that takes a long time to complete, or is it `"Loading cached processed dataset at ..."`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.
Sounds good! The part that takes a long time is from program start until `"Resolving data files"` appears. I hope you can fix it soon, thanks! | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 90 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
> Thanks for rerunning the code to record the output. Is it the `"Resolving data files"` part on your machine that takes a long time to complete, or is it `"Loading cached processed dataset at ..."`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.
Sounds good! The part that takes a long time is from program start until `"Resolving data files"` appears. I hope you can fix it soon, thanks! |
https://github.com/huggingface/datasets/issues/3960 | Load local dataset error | I'm getting this problem. Script has been stuck at this part for the past 15 or so minutes:
`Resolving data files: 100%|█████████████████████████████████████████| 107/107 [00:00<00:00, 472.74it/s]`
I had everything working fine on an AWS EC2 node with a single GPU. Then I created an image based on the single GPU machine, and spun up a new one with 4 GPUs, so I got all of the training data ready at .cache.
Turned off all checks with `verification_mode='no_checks'`. Logged in with huggingface-cli again just to be sure.
Interrupting shows the code is stuck here:
```
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py", line 200, in _read_files
pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py", line 336, in _get_table_from_filename
table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py", line 357, in read_table
return table_cls.from_file(filename)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py", line 1059, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py", line 66, in _memory_mapped_arrow_table_from_file
pa_table = opened_stream.read_all()
```
Is it just going to take a while or am I going to run out of money? :sweat_smile:
edit: ping @mariosasko | When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | 162 | Load local dataset error
When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get the following error:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
I'm getting this problem. Script has been stuck at this part for the past 15 or so minutes:
`Resolving data files: 100%|█████████████████████████████████████████| 107/107 [00:00<00:00, 472.74it/s]`
I had everything working fine on an AWS EC2 node with a single GPU. Then I created an image based on the single GPU machine, and spun up a new one with 4 GPUs, so I got all of the training data ready at .cache.
Turned off all checks with `verification_mode='no_checks'`. Logged in with huggingface-cli again just to be sure.
Interrupting shows the code is stuck here:
```
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py", line 200, in _read_files
pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py", line 336, in _get_table_from_filename
table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py", line 357, in read_table
return table_cls.from_file(filename)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py", line 1059, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py", line 66, in _memory_mapped_arrow_table_from_file
pa_table = opened_stream.read_all()
```
Is it just going to take a while or am I going to run out of money? :sweat_smile:
edit: ping @mariosasko |
https://github.com/huggingface/datasets/issues/3959 | Medium-sized dataset conversion from pandas causes a crash | Hi ! It looks like an issue with pyarrow, could you try updating pyarrow and try again ? | Hi, I am suffering from the following issue:
## Describe the bug
Converting a pandas DataFrame of a certain size to an Arrow dataset deterministically causes the following crash:
```
File "/home/datasets_crash.py", line 7, in <module>
arrow=datasets.Dataset.from_pandas(d)
File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 783, in from_pandas
table = InMemoryTable.from_pandas(
File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/table.py", line 379, in from_pandas
return cls(pa.Table.from_pandas(*args, **kwargs))
File "pyarrow/table.pxi", line 1487, in pyarrow.lib.Table.from_pandas
File "pyarrow/table.pxi", line 1532, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1181, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)
```
## Steps to reproduce the bug
I have a dataset made from a single replicated example that mocks a dict representation of a publication.
I copy this example 140k times and create a pandas dataframe.
I call `Dataset.from_pandas` and it crashes.
```python
# Sample code to reproduce the bug
import copy
import datasets
import pandas
# the serialized dict is quite long, to be a realistic representation of a publication's content
paper_as_dict=eval("{'article_id': '2020-11-05T14:25:05.321Z02bc3286-91b7-486a-9c74-4f457fbc586a', 'sections': [{'section_id': 'body.0', 'paragraphs': [{'sentences': ['11010111001000000011010011110011101110111011000100001010011100101001111010110111101011101111101010101110001111011110111010111', '1101100110110010010101010100110011000111001100100000011100010111010000011100001101111000000011010111001111001010101111110011010010111011000110100110010', '101011011000010100000010011001011011000000110011011110000101001110110000010001100110111100011100110101010010110000101', '1101101110101010101000000010101011111001111000101000110001110100111000100000011001110100110000110100111011001010110011101001001110']}]}, {'section_id': 'body.1', 'paragraphs': [{'sentences': ['11111100100100111000101001011110100110011001011011001001100110100111011010000110011000010001010100101110001001101011110111110101111100001001001000011110110010110011100110110111110011100011111000101010111010101011001110000100000001001010010010011101111100011010', '10101000110000110111110011101111000101010010001001010000001111001100000010001000001110111110010011101000000111011', '111010011111101111110011111110110001000111100101001000100110101111110000111000111111110000101001101000110011010111011101001010110110001000100000001110001111100110110001110001001100011010100110100010100111000110110100010010100101011110000110000101010010001110101100000']}, {'sentences': ['111110011110110110001111001101011110010110100011101010110101011001101110110111100000111101010110011110111101001111000101110001001010010101100111111001001000011101000100110000101', '011101101101111101001100101010000010111101100101110100101000001100010100110011010010100001101001110111100011010011011111000111111101110001010111010011010110001000010101100110000100010110101110110011001010011001100111101100001001', '1110001011011010101001100001110001110001000111111111101110100001011101101001110100000110000011010001101010101110101110101101001010100100010000000010110010010010', '11101111000111111100111110010000111101110010010101001111011001111110011000011100110001010010000100101010', '111000110110110010101100010010100001100100110010101000001000011101000100101011011010000011001011011111001101100001110010100001111110111001001010101100100110001011011100000101010010000000001100010000101100110110111101110010100010011101110110111010011011000011001010111011100000000010101001011000100000011010100011101001011001010010011110100100']}, {'sentences': ['001101111100001101001001001110000110010101011101001001111111011000111001111011101011110111000000100001110110101110001010001111110100010', '0000110010110101001100011011000011001101001110001000000110010101000011101011110110000000100111000001010000101011111011110001001100001110101010101110101011111000000011001111011110001010010111010000100100000001111001011100101111010101111001001101100101001101111000111011010110010001010010010111010000001101101111100101000111101011001000101', '00000101100101100111101010000101011100101100001100011001100100001100001010001010010011001001111001000010100010000110100111110000001000101000111100010111110011000100000111100010000100010111100010101', '111100110010100110000010010101010101110011110100000101110000000111010101111001011110010101001110000001001000010110010010011110111110010110100101110011001101110111001111100011100100011110010010100101011111111']}, {'sentences': 
['1100001110101111000001011001100110001011100011110110010011001000101000011110010101010011011000111010000101010011010000000111011001000010100101000011111101000000000101111000', '1110101000100110001111000011000101110111001100101010011001100011010011111111111010101011010101010011000101001100100000110010100110110110110001101100', '00010001100100101100100111111110111111101000100110101111101111110101110001010001011100000000000011010101101001111010001110101101110011001011111101110100010000111101', '011100011101011001000110010110100100000010100010010110011000000010101110011111111101010010010001100110101010010001100010110011110001011011101010111111100100110110010111101001100101010111001', '10111000011010101111110110011010101011111001000001010010111111010010111111100100010100110100101101110100110011001000110100000111000100110000001000111010', '0010011111111011100111010001111001011101001010000010110000010111000101001101000011101110100100000000100100010010101010100011100101001000100110110000010111111110000011011101111000111010']}]}, {'section_id': 'body.2.0', 'paragraphs': [{'sentences': ['110010010011001110100100011001111100010011110111101011011011001010010010010011101011', '000110101110011011101011000000100011111000001100011011110101101011000110011010001010001101101100000111100101001011111001001101111', '1000011100100000100100100010010000111011000100110010000011110111100110110001101001010100011111010100101000111', '11110111111000110010000000000100010010110001100010001010000111011000101100011010010101110110011010110101001101110011101011101100000001000100101011010110110100101011101010010101101000011110000010101011001011000001000000001010110000100010000100011110101001111100001000100000111000001010011111111110101010100011011000010000111000110', '1001000111011000111110001111111001100001000000101000111011101101100101010110001101000000001111010111100011111000000100001001110', '100110010111010101111010100000010001110101111001010010001100001110100100100101110011010101001000100101000100100011001110001100111000010010011011000010011010010000110001000000100011110010110110011010001100111010111110011']}, {'sentences': ['10010101011100010111011111001001001010100011001001111101101001000000001111101110000111101011000001001011101110101001100010010001101111001110000100010010001001101111011111110010011011110011', '110001110010110000101111000000110010010010100000010100001111101101000101100000000110000000011111011001111000010110110001011010011011101100100110011000100110101010111010111111000111001111010110010001001110100001011011000110000000111101110000001111011011101110100000100010000110001000000110100000', '101010000000010000110110111000110000100111000001110100101101101010001010010010101010100111010110001001000101011110010011001001001110111001101101100100011110011011110101100010110111001010000001000110100000001010011111111110111010011110001001110100011011000101011000110110011011010110100100011111111011100111110110000110011011110110110011101010101111001101010110101000000001100101111010000101110', '1010100110111111111000110110111110010100000100001110101110111001011000010001110110001111111110000101001001110010001110000111010101111010111111011100100011100111111101101111000010001100101000010001100110110100110111111100100011001011000001111110010100110111000010011110111011001101100000101011111110101000011000010', '00000001110000101001110101110011101001110011000111111101111101111000010011100000101000001011001110', '101000111010010000011010011010011010010010100010110100011100100111011101010100101110100111010001000000', 
'01101000110001101011001101100010100011011010000000001010101000010101000110100010000000110001110001010010000000101101000011000100000110011101100001010100011111101010010110001101110101010111101100001110000011001101', '0010010111000011110010011110001010100000111100001011010100100010101010010011101101100110001001111001000110000111011110010000110101010110111111010110100000011010001001010001000110001101101000101110001011110000101101110000110010110010111001100010011011100011', '00110111110000000100110111101011000100100110001000001001101011001000010100100001100111100110000110110101111010000010101000000101000011001011101001', '0100100001000111001110110110000001000100111001101101110100100111010111110001110010110111100110011111001001000011101110100101111011000110100000111010011101']}, {'sentences': ['100001001011101111111100110111011110001101111101100001000110110000100101011000000100000', '10101001001111110101001010100110011110101101001']}]}, {'section_id': 'body.2.0.0', 'paragraphs': [{'sentences': ['1110101100001100011000101000010000100010101101010110101011100101110110110111010101001100100000000111011001000100011110101011111010100101001010000010001001101010100011110010101110011001100010000100110011000011101010001000111001000001100', '101000000011001001110101000100101010000111000111100010010001111111100110001100000100011010011010010101101111010101010000110011101001111001111011111001110001010000110101101011101111010000001100', '01100001011110010100000101001101111101010011100010011001011110110010010011100101000', '0011100111000101111000010001111100000111000101110001111010001100001000111010000101100001110101100111111', '00001100000011110001011010010110000000111110110001111000110000011011001110000000100011001010110000010000010001101010101100000010011011000101011111100010010', '1011101011101111000001100100111000011000010010011110011000110111010010111100111101100110011010000110000111000110111110101111000001000010011101111000110000100011110101101101001101000110010000001000010011011010101100', '1000010011100011100000010011011111111110101101111011101010010111000000101011000000110101111000010011', '01100000110011001110101111101101011001011101000010001100101010100011010101010100111011011110100010100111', '011011010100011011110010101000110001111110110']}]}, {'section_id': 'body.2.0.1', 'paragraphs': [{'sentences': ['00111011011101000100100111000001101001011000111100100010101001010011001011000010011111001100000100010001100101110011001000110001101011010111011111011000010011010010111010011111101000110111011100010011100111111110110111011', '011011010101101101010000001011010110011111011110100111010101010110001101000010011111000011100', '110001000110010000000111101110111110101110111000101000010001110101000101001000111000010001011101010000110001010001101001001110111110111010111010011101000101101010000', '001000111110100110000001111100000111001110111001110111001000111010001001100111001101000001001001010111000111011100001111011001111110001011000111110011111101011101000100101001111011100001000110101010101111111110011111111011000101110001000000000100111011111011001100111', '11010101100010010100010010010101001011001011000001100010101111111101001101110011001010010100000111010101', '01110000110011111000110010011010000011100000010010001111100010010100100001011011111110001100', '011101111100011101100111110101111001101010010001001110101100001101000000111000']}]}, {'section_id': 'body.2.0.2', 'paragraphs': [{'sentences': 
['0111011000110100110000001011001110111000011110100111011000000001000010001111111001101111011100101110101101000111000101000010000111011010110000011101111110111110100111000111000011', '00100110111000110101100111000110100010011010010101001010011000000101000110100110011010011111000100000011000000010001010000100111101011111111101010001111010000001011100001110100000101001101101010011011101000', '000001110001010010100101010100010101001100011001001101101101110111011111101010010111010110110111011110101100001000011110111011001', '0001110010111110100110110011000001111100100100110101011010010101010100101000010101000100101000011011', '1000010010010101001100101110010111010100000110101110000000111001111111001011111010000011110001011001001001000101', '0001111100111010010100010111010110011011000000001111010010110001000011010001100111101110001110000011010101111100001000011010110100000100100001111011110110000000101000010001111001010010110101110111101101110111000100', '1000101100001000100001101110111110000100000001000010101111010011010010010111011010100011001000100100001010001100110']}]}, {'section_id': 'body.2.0.3', 'paragraphs': [{'sentences': ['1010100111100011110110101011100001011010011010100100010011000110111000001010010110111001001101111000010100100110101001010001010001000110010000001', '100010101010100111000011111101010100101110011000100011100100100111000010000011001010010111011010000101010011011110111001010110', '0110000110110110110011011000011010010000001010011000010001011110110010000100011111010100110111111010010111000101111', '10100100000011100010110110011111011011101101111000001001010100001001011010000011001010101100000', '1011111111100001001100000010000100110010101000010100111111110010110011101110000101101011101', '10001111110000011100100000101100000000010000100000011100110000011110111010011101010111101001111000100000000110000011010010001100110111100001001011101011001111110010100111001001010001010011010010010111001101110101110000101011', '101101111111101101010010000110111110000110000111001001010011111101011001011010101100010100110101101011100111100100110010001011110001110010000011101100100100001001110010000010011111100110101']}]}, {'section_id': 'body.2.1', 'paragraphs': [{'sentences': ['1010010011010011001111111001000110010001101111101011001011011000101001010101010001000110100011110101110001110110111010010010100100111000101100100101111110100000011111001101010111101010100101011011110111111110', '000010101101111100000110010110011001111100001101011101000100010001001001000000101101000001110000011010111100000010010000010101110101100010011000101110110111111001000101000111000110100001001100001010101010100011', '0000000011101110111100100010111100101010110001111101110110010000100100010000101001101111001111001001100110010011010000101001110010000000100101011101001010100100011101101001011000010111110100101010110110011001110000110010010111110110101100001011101001100111010001000010111010001010000100010010011110111100110011100011111101101000011100111110101010100110001100100000100011011010111000111110010110100010111101001001101000001100100010000111110000011101111100111101000000000']}, {'sentences': ['01011000010110011000000101101000110101011010100111011001001001100001101101111101111001101111100101111001101011011001011110110110110100001100111111010100101110111111101000101100101010110011111011100101101010100110111001111100100011001110011101000110100000001100001100110001110101001000011010000110101011010000001111100100000100101110011000001001010011011101100011000001100000011', 
'1001100000101000000011110100110001100001101001100011010000111111010110101111001000100111000011010100100000110110001', '10010011000110110111010110000010010000000111101000100101100111101101001100111110101001001111100001110011110000010101000001000000010100011011110011000100110101001100110111111001101000011010100110000000011110001000101010101000110010010']}]}, {'section_id': 'body.2.2', 'paragraphs': [{'sentences': ['000011000000010011000001101111000101000111111111111010001011110000011001010111010101010110001111110000010', '10101001101011101010001111011000110100000100011110010001100111111101101100010010111110110101101011000011000001101110010111011111100111110000000101110010111', '100001011110010111010110001101101001100000000001000010110101011001111100101101101111010010111111000000111001111010011111000100010001111011110001010000110010101010111110100101011011100001010101000001011011111111101', '1000110111111011101000110101001111111111000100011001000011010100001010011110001111010011011111000111011100101001011111001000010101110110101000111011111111010010001101001010110111000011110101011000010000110', '1011100000100000010101101111001001100110111000010001011010111111000000001010101001111011101011010101101001111101101100101001011101000011011010001001101100100111101111111100010011010101111011100001100001000100101100100110101000010000011000000011001100000110000001', '0001001101111001111111010000001101010110110110100110110100000100110101101010010101011000010010111011000010111110000001110101110111000010011000100110111001000111011000100101110111111', '0110010010011000011010001111001100101001100001001000010100101100010110000000101010110001001010001100111101010001110010010000111011100101101010111111101001100010001011100110010100110111010101000100001110000101110011111011111000010101010110101100010010111100100010010100111110111100101010100011101001110110010000011110001010101010000100010000100111001111011101', '000001010000010001100000101011000000110101000100010111111100101111111000110111001001110110101111110011100001001000011001010000011011', '0101101001010101001101010100011000111011001000100001110100110011100000001001010110001101010110011100111111100101101111101111011001111111110010111010011011011111011011110000101011010', '11000001110111000001100100001110000111001010000101011011101010111001011100010010010111111111000011111110010111100011100110001001100011111010100111110111001110010', '0100010110100001010101110111100011100100010111111011101001100101111110101011010010101111001000101001111000001110001100011001110010100110101100110100100000001010101101011110011001000101100111001001001110100', '100000100010011111001101010000100110011110001100000010010110110100000111111011010100101111010111001110101000100001111101001110000011010110000010100', '00100110000011100101000110110001000011101000011010101000010001111011100001111111001011100111101000001000000110110001000101111010010010001100111', '0110110100011001110011001111100010101001011111011001011001101101010010101101110101010100001000100100000111101110001001110111000110011101101010100000101', '0011111010010011011101010110100110000011000011100100101011011001110110001110001111000011010111011000110100111111011101110111000010010000011011010011011100000011101100110110100100000010110101110100110101001100111011101001010111011011110100110101110010011011010001010111110011001000010100010101010010110010010110000100110001000011010011000100101011010100100111010']}]}]}")
d=pandas.DataFrame.from_records(copy.deepcopy(paper_as_dict) for _ in range(140_100))
arrow=datasets.Dataset.from_pandas(d)
```
## Expected results
The dataset should be converted without error.
## Actual results
Error `pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)`
## Environment info
- `datasets` version: 1.18.4
- `pandas` version: 1.3.5
- Platform: macOS 11.6 or CentOS Linux 7 (Core)
- Python version: Python 3.9.7
- PyArrow version: pyarrow==3.0.0
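A note on possible ways forward: the simplest thing to try is upgrading pyarrow (for example `pip install -U pyarrow`), as suggested in the comment on this issue. If that is not an option, the sketch below is one possible workaround, not a confirmed fix: it converts the dataframe in smaller chunks so that no single `Table.from_pandas` call has to build the huge struct arrays at once, and `chunk_size` is an arbitrary value.

```python
# Hypothetical workaround sketch: convert the dataframe chunk by chunk,
# then concatenate the resulting datasets into one.
import datasets
import pandas as pd

def dataset_from_pandas_chunked(df: pd.DataFrame, chunk_size: int = 10_000) -> datasets.Dataset:
    parts = [
        datasets.Dataset.from_pandas(df.iloc[i : i + chunk_size], preserve_index=False)
        for i in range(0, len(df), chunk_size)
    ]
    return datasets.concatenate_datasets(parts)

# arrow = dataset_from_pandas_chunked(d)
```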
https://github.com/huggingface/datasets/issues/3959 | Medium-sized dataset conversion from pandas causes a crash | I'm getting the same problem with some files, @albertvillanova did you find a solution to this? | Hi, I am suffering from the following issue:
## Describe the bug
Converting a pandas dataframe of a certain size to an Arrow dataset deterministically causes the following crash:
```
File "/home/datasets_crash.py", line 7, in <module>
arrow=datasets.Dataset.from_pandas(d)
File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 783, in from_pandas
table = InMemoryTable.from_pandas(
File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/table.py", line 379, in from_pandas
return cls(pa.Table.from_pandas(*args, **kwargs))
File "pyarrow/table.pxi", line 1487, in pyarrow.lib.Table.from_pandas
File "pyarrow/table.pxi", line 1532, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1181, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)
```
## Steps to reproduce the bug
I have a dataset made from a single replicated example that mocks a dict representation of a publication.
I copy this example 140k times and create a pandas dataframe.
I call `Dataset.from_pandas` and it crashes.
```python
# Sample code to reproduce the bug
import copy
import datasets
import pandas
# the serialized dict is quite long, to be a realistic representation of a publication's content
paper_as_dict=eval("{'article_id': '2020-11-05T14:25:05.321Z02bc3286-91b7-486a-9c74-4f457fbc586a', 'sections': [{'section_id': 'body.0', 'paragraphs': [{'sentences': ['11010111001000000011010011110011101110111011000100001010011100101001111010110111101011101111101010101110001111011110111010111', '1101100110110010010101010100110011000111001100100000011100010111010000011100001101111000000011010111001111001010101111110011010010111011000110100110010', '101011011000010100000010011001011011000000110011011110000101001110110000010001100110111100011100110101010010110000101', '1101101110101010101000000010101011111001111000101000110001110100111000100000011001110100110000110100111011001010110011101001001110']}]}, {'section_id': 'body.1', 'paragraphs': [{'sentences': ['11111100100100111000101001011110100110011001011011001001100110100111011010000110011000010001010100101110001001101011110111110101111100001001001000011110110010110011100110110111110011100011111000101010111010101011001110000100000001001010010010011101111100011010', '10101000110000110111110011101111000101010010001001010000001111001100000010001000001110111110010011101000000111011', '111010011111101111110011111110110001000111100101001000100110101111110000111000111111110000101001101000110011010111011101001010110110001000100000001110001111100110110001110001001100011010100110100010100111000110110100010010100101011110000110000101010010001110101100000']}, {'sentences': ['111110011110110110001111001101011110010110100011101010110101011001101110110111100000111101010110011110111101001111000101110001001010010101100111111001001000011101000100110000101', '011101101101111101001100101010000010111101100101110100101000001100010100110011010010100001101001110111100011010011011111000111111101110001010111010011010110001000010101100110000100010110101110110011001010011001100111101100001001', '1110001011011010101001100001110001110001000111111111101110100001011101101001110100000110000011010001101010101110101110101101001010100100010000000010110010010010', '11101111000111111100111110010000111101110010010101001111011001111110011000011100110001010010000100101010', '111000110110110010101100010010100001100100110010101000001000011101000100101011011010000011001011011111001101100001110010100001111110111001001010101100100110001011011100000101010010000000001100010000101100110110111101110010100010011101110110111010011011000011001010111011100000000010101001011000100000011010100011101001011001010010011110100100']}, {'sentences': ['001101111100001101001001001110000110010101011101001001111111011000111001111011101011110111000000100001110110101110001010001111110100010', '0000110010110101001100011011000011001101001110001000000110010101000011101011110110000000100111000001010000101011111011110001001100001110101010101110101011111000000011001111011110001010010111010000100100000001111001011100101111010101111001001101100101001101111000111011010110010001010010010111010000001101101111100101000111101011001000101', '00000101100101100111101010000101011100101100001100011001100100001100001010001010010011001001111001000010100010000110100111110000001000101000111100010111110011000100000111100010000100010111100010101', '111100110010100110000010010101010101110011110100000101110000000111010101111001011110010101001110000001001000010110010010011110111110010110100101110011001101110111001111100011100100011110010010100101011111111']}, {'sentences': 
['1100001110101111000001011001100110001011100011110110010011001000101000011110010101010011011000111010000101010011010000000111011001000010100101000011111101000000000101111000', '1110101000100110001111000011000101110111001100101010011001100011010011111111111010101011010101010011000101001100100000110010100110110110110001101100', '00010001100100101100100111111110111111101000100110101111101111110101110001010001011100000000000011010101101001111010001110101101110011001011111101110100010000111101', '011100011101011001000110010110100100000010100010010110011000000010101110011111111101010010010001100110101010010001100010110011110001011011101010111111100100110110010111101001100101010111001', '10111000011010101111110110011010101011111001000001010010111111010010111111100100010100110100101101110100110011001000110100000111000100110000001000111010', '0010011111111011100111010001111001011101001010000010110000010111000101001101000011101110100100000000100100010010101010100011100101001000100110110000010111111110000011011101111000111010']}]}, {'section_id': 'body.2.0', 'paragraphs': [{'sentences': ['110010010011001110100100011001111100010011110111101011011011001010010010010011101011', '000110101110011011101011000000100011111000001100011011110101101011000110011010001010001101101100000111100101001011111001001101111', '1000011100100000100100100010010000111011000100110010000011110111100110110001101001010100011111010100101000111', '11110111111000110010000000000100010010110001100010001010000111011000101100011010010101110110011010110101001101110011101011101100000001000100101011010110110100101011101010010101101000011110000010101011001011000001000000001010110000100010000100011110101001111100001000100000111000001010011111111110101010100011011000010000111000110', '1001000111011000111110001111111001100001000000101000111011101101100101010110001101000000001111010111100011111000000100001001110', '100110010111010101111010100000010001110101111001010010001100001110100100100101110011010101001000100101000100100011001110001100111000010010011011000010011010010000110001000000100011110010110110011010001100111010111110011']}, {'sentences': ['10010101011100010111011111001001001010100011001001111101101001000000001111101110000111101011000001001011101110101001100010010001101111001110000100010010001001101111011111110010011011110011', '110001110010110000101111000000110010010010100000010100001111101101000101100000000110000000011111011001111000010110110001011010011011101100100110011000100110101010111010111111000111001111010110010001001110100001011011000110000000111101110000001111011011101110100000100010000110001000000110100000', '101010000000010000110110111000110000100111000001110100101101101010001010010010101010100111010110001001000101011110010011001001001110111001101101100100011110011011110101100010110111001010000001000110100000001010011111111110111010011110001001110100011011000101011000110110011011010110100100011111111011100111110110000110011011110110110011101010101111001101010110101000000001100101111010000101110', '1010100110111111111000110110111110010100000100001110101110111001011000010001110110001111111110000101001001110010001110000111010101111010111111011100100011100111111101101111000010001100101000010001100110110100110111111100100011001011000001111110010100110111000010011110111011001101100000101011111110101000011000010', '00000001110000101001110101110011101001110011000111111101111101111000010011100000101000001011001110', '101000111010010000011010011010011010010010100010110100011100100111011101010100101110100111010001000000', 
'01101000110001101011001101100010100011011010000000001010101000010101000110100010000000110001110001010010000000101101000011000100000110011101100001010100011111101010010110001101110101010111101100001110000011001101', '0010010111000011110010011110001010100000111100001011010100100010101010010011101101100110001001111001000110000111011110010000110101010110111111010110100000011010001001010001000110001101101000101110001011110000101101110000110010110010111001100010011011100011', '00110111110000000100110111101011000100100110001000001001101011001000010100100001100111100110000110110101111010000010101000000101000011001011101001', '0100100001000111001110110110000001000100111001101101110100100111010111110001110010110111100110011111001001000011101110100101111011000110100000111010011101']}, {'sentences': ['100001001011101111111100110111011110001101111101100001000110110000100101011000000100000', '10101001001111110101001010100110011110101101001']}]}, {'section_id': 'body.2.0.0', 'paragraphs': [{'sentences': ['1110101100001100011000101000010000100010101101010110101011100101110110110111010101001100100000000111011001000100011110101011111010100101001010000010001001101010100011110010101110011001100010000100110011000011101010001000111001000001100', '101000000011001001110101000100101010000111000111100010010001111111100110001100000100011010011010010101101111010101010000110011101001111001111011111001110001010000110101101011101111010000001100', '01100001011110010100000101001101111101010011100010011001011110110010010011100101000', '0011100111000101111000010001111100000111000101110001111010001100001000111010000101100001110101100111111', '00001100000011110001011010010110000000111110110001111000110000011011001110000000100011001010110000010000010001101010101100000010011011000101011111100010010', '1011101011101111000001100100111000011000010010011110011000110111010010111100111101100110011010000110000111000110111110101111000001000010011101111000110000100011110101101101001101000110010000001000010011011010101100', '1000010011100011100000010011011111111110101101111011101010010111000000101011000000110101111000010011', '01100000110011001110101111101101011001011101000010001100101010100011010101010100111011011110100010100111', '011011010100011011110010101000110001111110110']}]}, {'section_id': 'body.2.0.1', 'paragraphs': [{'sentences': ['00111011011101000100100111000001101001011000111100100010101001010011001011000010011111001100000100010001100101110011001000110001101011010111011111011000010011010010111010011111101000110111011100010011100111111110110111011', '011011010101101101010000001011010110011111011110100111010101010110001101000010011111000011100', '110001000110010000000111101110111110101110111000101000010001110101000101001000111000010001011101010000110001010001101001001110111110111010111010011101000101101010000', '001000111110100110000001111100000111001110111001110111001000111010001001100111001101000001001001010111000111011100001111011001111110001011000111110011111101011101000100101001111011100001000110101010101111111110011111111011000101110001000000000100111011111011001100111', '11010101100010010100010010010101001011001011000001100010101111111101001101110011001010010100000111010101', '01110000110011111000110010011010000011100000010010001111100010010100100001011011111110001100', '011101111100011101100111110101111001101010010001001110101100001101000000111000']}]}, {'section_id': 'body.2.0.2', 'paragraphs': [{'sentences': 
['0111011000110100110000001011001110111000011110100111011000000001000010001111111001101111011100101110101101000111000101000010000111011010110000011101111110111110100111000111000011', '00100110111000110101100111000110100010011010010101001010011000000101000110100110011010011111000100000011000000010001010000100111101011111111101010001111010000001011100001110100000101001101101010011011101000', '000001110001010010100101010100010101001100011001001101101101110111011111101010010111010110110111011110101100001000011110111011001', '0001110010111110100110110011000001111100100100110101011010010101010100101000010101000100101000011011', '1000010010010101001100101110010111010100000110101110000000111001111111001011111010000011110001011001001001000101', '0001111100111010010100010111010110011011000000001111010010110001000011010001100111101110001110000011010101111100001000011010110100000100100001111011110110000000101000010001111001010010110101110111101101110111000100', '1000101100001000100001101110111110000100000001000010101111010011010010010111011010100011001000100100001010001100110']}]}, {'section_id': 'body.2.0.3', 'paragraphs': [{'sentences': ['1010100111100011110110101011100001011010011010100100010011000110111000001010010110111001001101111000010100100110101001010001010001000110010000001', '100010101010100111000011111101010100101110011000100011100100100111000010000011001010010111011010000101010011011110111001010110', '0110000110110110110011011000011010010000001010011000010001011110110010000100011111010100110111111010010111000101111', '10100100000011100010110110011111011011101101111000001001010100001001011010000011001010101100000', '1011111111100001001100000010000100110010101000010100111111110010110011101110000101101011101', '10001111110000011100100000101100000000010000100000011100110000011110111010011101010111101001111000100000000110000011010010001100110111100001001011101011001111110010100111001001010001010011010010010111001101110101110000101011', '101101111111101101010010000110111110000110000111001001010011111101011001011010101100010100110101101011100111100100110010001011110001110010000011101100100100001001110010000010011111100110101']}]}, {'section_id': 'body.2.1', 'paragraphs': [{'sentences': ['1010010011010011001111111001000110010001101111101011001011011000101001010101010001000110100011110101110001110110111010010010100100111000101100100101111110100000011111001101010111101010100101011011110111111110', '000010101101111100000110010110011001111100001101011101000100010001001001000000101101000001110000011010111100000010010000010101110101100010011000101110110111111001000101000111000110100001001100001010101010100011', '0000000011101110111100100010111100101010110001111101110110010000100100010000101001101111001111001001100110010011010000101001110010000000100101011101001010100100011101101001011000010111110100101010110110011001110000110010010111110110101100001011101001100111010001000010111010001010000100010010011110111100110011100011111101101000011100111110101010100110001100100000100011011010111000111110010110100010111101001001101000001100100010000111110000011101111100111101000000000']}, {'sentences': ['01011000010110011000000101101000110101011010100111011001001001100001101101111101111001101111100101111001101011011001011110110110110100001100111111010100101110111111101000101100101010110011111011100101101010100110111001111100100011001110011101000110100000001100001100110001110101001000011010000110101011010000001111100100000100101110011000001001010011011101100011000001100000011', 
'1001100000101000000011110100110001100001101001100011010000111111010110101111001000100111000011010100100000110110001', '10010011000110110111010110000010010000000111101000100101100111101101001100111110101001001111100001110011110000010101000001000000010100011011110011000100110101001100110111111001101000011010100110000000011110001000101010101000110010010']}]}, {'section_id': 'body.2.2', 'paragraphs': [{'sentences': ['000011000000010011000001101111000101000111111111111010001011110000011001010111010101010110001111110000010', '10101001101011101010001111011000110100000100011110010001100111111101101100010010111110110101101011000011000001101110010111011111100111110000000101110010111', '100001011110010111010110001101101001100000000001000010110101011001111100101101101111010010111111000000111001111010011111000100010001111011110001010000110010101010111110100101011011100001010101000001011011111111101', '1000110111111011101000110101001111111111000100011001000011010100001010011110001111010011011111000111011100101001011111001000010101110110101000111011111111010010001101001010110111000011110101011000010000110', '1011100000100000010101101111001001100110111000010001011010111111000000001010101001111011101011010101101001111101101100101001011101000011011010001001101100100111101111111100010011010101111011100001100001000100101100100110101000010000011000000011001100000110000001', '0001001101111001111111010000001101010110110110100110110100000100110101101010010101011000010010111011000010111110000001110101110111000010011000100110111001000111011000100101110111111', '0110010010011000011010001111001100101001100001001000010100101100010110000000101010110001001010001100111101010001110010010000111011100101101010111111101001100010001011100110010100110111010101000100001110000101110011111011111000010101010110101100010010111100100010010100111110111100101010100011101001110110010000011110001010101010000100010000100111001111011101', '000001010000010001100000101011000000110101000100010111111100101111111000110111001001110110101111110011100001001000011001010000011011', '0101101001010101001101010100011000111011001000100001110100110011100000001001010110001101010110011100111111100101101111101111011001111111110010111010011011011111011011110000101011010', '11000001110111000001100100001110000111001010000101011011101010111001011100010010010111111111000011111110010111100011100110001001100011111010100111110111001110010', '0100010110100001010101110111100011100100010111111011101001100101111110101011010010101111001000101001111000001110001100011001110010100110101100110100100000001010101101011110011001000101100111001001001110100', '100000100010011111001101010000100110011110001100000010010110110100000111111011010100101111010111001110101000100001111101001110000011010110000010100', '00100110000011100101000110110001000011101000011010101000010001111011100001111111001011100111101000001000000110110001000101111010010010001100111', '0110110100011001110011001111100010101001011111011001011001101101010010101101110101010100001000100100000111101110001001110111000110011101101010100000101', '0011111010010011011101010110100110000011000011100100101011011001110110001110001111000011010111011000110100111111011101110111000010010000011011010011011100000011101100110110100100000010110101110100110101001100111011101001010111011011110100110101110010011011010001010111110011001000010100010101010010110010010110000100110001000011010011000100101011010100100111010']}]}]}")
d=pandas.DataFrame.from_records(copy.deepcopy(paper_as_dict) for _ in range(140_100))
arrow=datasets.Dataset.from_pandas(d)
```
## Expected results
The dataset should be converted without error.
## Actual results
Error `pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets==1.18.4 pandas==1.3.5
- Platform: macOS 11.6 or CentOS Linux 7 (Core)
- Python version: Python 3.9.7
- PyArrow version: pyarrow==3.0.0
| 16 | Medium-sized dataset conversion from pandas causes a crash
Hi, I am suffering from the following issue:
## Describe the bug
Converting a pandas DataFrame of a certain size to an Arrow dataset deterministically causes the following crash:
```
File "/home/datasets_crash.py", line 7, in <module>
arrow=datasets.Dataset.from_pandas(d)
File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 783, in from_pandas
table = InMemoryTable.from_pandas(
File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/table.py", line 379, in from_pandas
return cls(pa.Table.from_pandas(*args, **kwargs))
File "pyarrow/table.pxi", line 1487, in pyarrow.lib.Table.from_pandas
File "pyarrow/table.pxi", line 1532, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1181, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)
```
## Steps to reproduce the bug
I have a dataset made from a single replicated example that mocks a dict representation of a publication.
I copy this example over 140k times and create a pandas DataFrame.
I call 'Dataset.from_pandas' and it crashes.
```python
# Sample code to reproduce the bug
import copy
import datasets
import pandas
# the serialized dict is quite long so that it is a realistic representation of a publication's content
paper_as_dict=eval("{'article_id': '2020-11-05T14:25:05.321Z02bc3286-91b7-486a-9c74-4f457fbc586a', 'sections': [{'section_id': 'body.0', 'paragraphs': [{'sentences': ['11010111001000000011010011110011101110111011000100001010011100101001111010110111101011101111101010101110001111011110111010111', '1101100110110010010101010100110011000111001100100000011100010111010000011100001101111000000011010111001111001010101111110011010010111011000110100110010', '101011011000010100000010011001011011000000110011011110000101001110110000010001100110111100011100110101010010110000101', '1101101110101010101000000010101011111001111000101000110001110100111000100000011001110100110000110100111011001010110011101001001110']}]}, {'section_id': 'body.1', 'paragraphs': [{'sentences': ['11111100100100111000101001011110100110011001011011001001100110100111011010000110011000010001010100101110001001101011110111110101111100001001001000011110110010110011100110110111110011100011111000101010111010101011001110000100000001001010010010011101111100011010', '10101000110000110111110011101111000101010010001001010000001111001100000010001000001110111110010011101000000111011', '111010011111101111110011111110110001000111100101001000100110101111110000111000111111110000101001101000110011010111011101001010110110001000100000001110001111100110110001110001001100011010100110100010100111000110110100010010100101011110000110000101010010001110101100000']}, {'sentences': ['111110011110110110001111001101011110010110100011101010110101011001101110110111100000111101010110011110111101001111000101110001001010010101100111111001001000011101000100110000101', '011101101101111101001100101010000010111101100101110100101000001100010100110011010010100001101001110111100011010011011111000111111101110001010111010011010110001000010101100110000100010110101110110011001010011001100111101100001001', '1110001011011010101001100001110001110001000111111111101110100001011101101001110100000110000011010001101010101110101110101101001010100100010000000010110010010010', '11101111000111111100111110010000111101110010010101001111011001111110011000011100110001010010000100101010', '111000110110110010101100010010100001100100110010101000001000011101000100101011011010000011001011011111001101100001110010100001111110111001001010101100100110001011011100000101010010000000001100010000101100110110111101110010100010011101110110111010011011000011001010111011100000000010101001011000100000011010100011101001011001010010011110100100']}, {'sentences': ['001101111100001101001001001110000110010101011101001001111111011000111001111011101011110111000000100001110110101110001010001111110100010', '0000110010110101001100011011000011001101001110001000000110010101000011101011110110000000100111000001010000101011111011110001001100001110101010101110101011111000000011001111011110001010010111010000100100000001111001011100101111010101111001001101100101001101111000111011010110010001010010010111010000001101101111100101000111101011001000101', '00000101100101100111101010000101011100101100001100011001100100001100001010001010010011001001111001000010100010000110100111110000001000101000111100010111110011000100000111100010000100010111100010101', '111100110010100110000010010101010101110011110100000101110000000111010101111001011110010101001110000001001000010110010010011110111110010110100101110011001101110111001111100011100100011110010010100101011111111']}, {'sentences': 
['1100001110101111000001011001100110001011100011110110010011001000101000011110010101010011011000111010000101010011010000000111011001000010100101000011111101000000000101111000', '1110101000100110001111000011000101110111001100101010011001100011010011111111111010101011010101010011000101001100100000110010100110110110110001101100', '00010001100100101100100111111110111111101000100110101111101111110101110001010001011100000000000011010101101001111010001110101101110011001011111101110100010000111101', '011100011101011001000110010110100100000010100010010110011000000010101110011111111101010010010001100110101010010001100010110011110001011011101010111111100100110110010111101001100101010111001', '10111000011010101111110110011010101011111001000001010010111111010010111111100100010100110100101101110100110011001000110100000111000100110000001000111010', '0010011111111011100111010001111001011101001010000010110000010111000101001101000011101110100100000000100100010010101010100011100101001000100110110000010111111110000011011101111000111010']}]}, {'section_id': 'body.2.0', 'paragraphs': [{'sentences': ['110010010011001110100100011001111100010011110111101011011011001010010010010011101011', '000110101110011011101011000000100011111000001100011011110101101011000110011010001010001101101100000111100101001011111001001101111', '1000011100100000100100100010010000111011000100110010000011110111100110110001101001010100011111010100101000111', '11110111111000110010000000000100010010110001100010001010000111011000101100011010010101110110011010110101001101110011101011101100000001000100101011010110110100101011101010010101101000011110000010101011001011000001000000001010110000100010000100011110101001111100001000100000111000001010011111111110101010100011011000010000111000110', '1001000111011000111110001111111001100001000000101000111011101101100101010110001101000000001111010111100011111000000100001001110', '100110010111010101111010100000010001110101111001010010001100001110100100100101110011010101001000100101000100100011001110001100111000010010011011000010011010010000110001000000100011110010110110011010001100111010111110011']}, {'sentences': ['10010101011100010111011111001001001010100011001001111101101001000000001111101110000111101011000001001011101110101001100010010001101111001110000100010010001001101111011111110010011011110011', '110001110010110000101111000000110010010010100000010100001111101101000101100000000110000000011111011001111000010110110001011010011011101100100110011000100110101010111010111111000111001111010110010001001110100001011011000110000000111101110000001111011011101110100000100010000110001000000110100000', '101010000000010000110110111000110000100111000001110100101101101010001010010010101010100111010110001001000101011110010011001001001110111001101101100100011110011011110101100010110111001010000001000110100000001010011111111110111010011110001001110100011011000101011000110110011011010110100100011111111011100111110110000110011011110110110011101010101111001101010110101000000001100101111010000101110', '1010100110111111111000110110111110010100000100001110101110111001011000010001110110001111111110000101001001110010001110000111010101111010111111011100100011100111111101101111000010001100101000010001100110110100110111111100100011001011000001111110010100110111000010011110111011001101100000101011111110101000011000010', '00000001110000101001110101110011101001110011000111111101111101111000010011100000101000001011001110', '101000111010010000011010011010011010010010100010110100011100100111011101010100101110100111010001000000', 
'01101000110001101011001101100010100011011010000000001010101000010101000110100010000000110001110001010010000000101101000011000100000110011101100001010100011111101010010110001101110101010111101100001110000011001101', '0010010111000011110010011110001010100000111100001011010100100010101010010011101101100110001001111001000110000111011110010000110101010110111111010110100000011010001001010001000110001101101000101110001011110000101101110000110010110010111001100010011011100011', '00110111110000000100110111101011000100100110001000001001101011001000010100100001100111100110000110110101111010000010101000000101000011001011101001', '0100100001000111001110110110000001000100111001101101110100100111010111110001110010110111100110011111001001000011101110100101111011000110100000111010011101']}, {'sentences': ['100001001011101111111100110111011110001101111101100001000110110000100101011000000100000', '10101001001111110101001010100110011110101101001']}]}, {'section_id': 'body.2.0.0', 'paragraphs': [{'sentences': ['1110101100001100011000101000010000100010101101010110101011100101110110110111010101001100100000000111011001000100011110101011111010100101001010000010001001101010100011110010101110011001100010000100110011000011101010001000111001000001100', '101000000011001001110101000100101010000111000111100010010001111111100110001100000100011010011010010101101111010101010000110011101001111001111011111001110001010000110101101011101111010000001100', '01100001011110010100000101001101111101010011100010011001011110110010010011100101000', '0011100111000101111000010001111100000111000101110001111010001100001000111010000101100001110101100111111', '00001100000011110001011010010110000000111110110001111000110000011011001110000000100011001010110000010000010001101010101100000010011011000101011111100010010', '1011101011101111000001100100111000011000010010011110011000110111010010111100111101100110011010000110000111000110111110101111000001000010011101111000110000100011110101101101001101000110010000001000010011011010101100', '1000010011100011100000010011011111111110101101111011101010010111000000101011000000110101111000010011', '01100000110011001110101111101101011001011101000010001100101010100011010101010100111011011110100010100111', '011011010100011011110010101000110001111110110']}]}, {'section_id': 'body.2.0.1', 'paragraphs': [{'sentences': ['00111011011101000100100111000001101001011000111100100010101001010011001011000010011111001100000100010001100101110011001000110001101011010111011111011000010011010010111010011111101000110111011100010011100111111110110111011', '011011010101101101010000001011010110011111011110100111010101010110001101000010011111000011100', '110001000110010000000111101110111110101110111000101000010001110101000101001000111000010001011101010000110001010001101001001110111110111010111010011101000101101010000', '001000111110100110000001111100000111001110111001110111001000111010001001100111001101000001001001010111000111011100001111011001111110001011000111110011111101011101000100101001111011100001000110101010101111111110011111111011000101110001000000000100111011111011001100111', '11010101100010010100010010010101001011001011000001100010101111111101001101110011001010010100000111010101', '01110000110011111000110010011010000011100000010010001111100010010100100001011011111110001100', '011101111100011101100111110101111001101010010001001110101100001101000000111000']}]}, {'section_id': 'body.2.0.2', 'paragraphs': [{'sentences': 
['0111011000110100110000001011001110111000011110100111011000000001000010001111111001101111011100101110101101000111000101000010000111011010110000011101111110111110100111000111000011', '00100110111000110101100111000110100010011010010101001010011000000101000110100110011010011111000100000011000000010001010000100111101011111111101010001111010000001011100001110100000101001101101010011011101000', '000001110001010010100101010100010101001100011001001101101101110111011111101010010111010110110111011110101100001000011110111011001', '0001110010111110100110110011000001111100100100110101011010010101010100101000010101000100101000011011', '1000010010010101001100101110010111010100000110101110000000111001111111001011111010000011110001011001001001000101', '0001111100111010010100010111010110011011000000001111010010110001000011010001100111101110001110000011010101111100001000011010110100000100100001111011110110000000101000010001111001010010110101110111101101110111000100', '1000101100001000100001101110111110000100000001000010101111010011010010010111011010100011001000100100001010001100110']}]}, {'section_id': 'body.2.0.3', 'paragraphs': [{'sentences': ['1010100111100011110110101011100001011010011010100100010011000110111000001010010110111001001101111000010100100110101001010001010001000110010000001', '100010101010100111000011111101010100101110011000100011100100100111000010000011001010010111011010000101010011011110111001010110', '0110000110110110110011011000011010010000001010011000010001011110110010000100011111010100110111111010010111000101111', '10100100000011100010110110011111011011101101111000001001010100001001011010000011001010101100000', '1011111111100001001100000010000100110010101000010100111111110010110011101110000101101011101', '10001111110000011100100000101100000000010000100000011100110000011110111010011101010111101001111000100000000110000011010010001100110111100001001011101011001111110010100111001001010001010011010010010111001101110101110000101011', '101101111111101101010010000110111110000110000111001001010011111101011001011010101100010100110101101011100111100100110010001011110001110010000011101100100100001001110010000010011111100110101']}]}, {'section_id': 'body.2.1', 'paragraphs': [{'sentences': ['1010010011010011001111111001000110010001101111101011001011011000101001010101010001000110100011110101110001110110111010010010100100111000101100100101111110100000011111001101010111101010100101011011110111111110', '000010101101111100000110010110011001111100001101011101000100010001001001000000101101000001110000011010111100000010010000010101110101100010011000101110110111111001000101000111000110100001001100001010101010100011', '0000000011101110111100100010111100101010110001111101110110010000100100010000101001101111001111001001100110010011010000101001110010000000100101011101001010100100011101101001011000010111110100101010110110011001110000110010010111110110101100001011101001100111010001000010111010001010000100010010011110111100110011100011111101101000011100111110101010100110001100100000100011011010111000111110010110100010111101001001101000001100100010000111110000011101111100111101000000000']}, {'sentences': ['01011000010110011000000101101000110101011010100111011001001001100001101101111101111001101111100101111001101011011001011110110110110100001100111111010100101110111111101000101100101010110011111011100101101010100110111001111100100011001110011101000110100000001100001100110001110101001000011010000110101011010000001111100100000100101110011000001001010011011101100011000001100000011', 
'1001100000101000000011110100110001100001101001100011010000111111010110101111001000100111000011010100100000110110001', '10010011000110110111010110000010010000000111101000100101100111101101001100111110101001001111100001110011110000010101000001000000010100011011110011000100110101001100110111111001101000011010100110000000011110001000101010101000110010010']}]}, {'section_id': 'body.2.2', 'paragraphs': [{'sentences': ['000011000000010011000001101111000101000111111111111010001011110000011001010111010101010110001111110000010', '10101001101011101010001111011000110100000100011110010001100111111101101100010010111110110101101011000011000001101110010111011111100111110000000101110010111', '100001011110010111010110001101101001100000000001000010110101011001111100101101101111010010111111000000111001111010011111000100010001111011110001010000110010101010111110100101011011100001010101000001011011111111101', '1000110111111011101000110101001111111111000100011001000011010100001010011110001111010011011111000111011100101001011111001000010101110110101000111011111111010010001101001010110111000011110101011000010000110', '1011100000100000010101101111001001100110111000010001011010111111000000001010101001111011101011010101101001111101101100101001011101000011011010001001101100100111101111111100010011010101111011100001100001000100101100100110101000010000011000000011001100000110000001', '0001001101111001111111010000001101010110110110100110110100000100110101101010010101011000010010111011000010111110000001110101110111000010011000100110111001000111011000100101110111111', '0110010010011000011010001111001100101001100001001000010100101100010110000000101010110001001010001100111101010001110010010000111011100101101010111111101001100010001011100110010100110111010101000100001110000101110011111011111000010101010110101100010010111100100010010100111110111100101010100011101001110110010000011110001010101010000100010000100111001111011101', '000001010000010001100000101011000000110101000100010111111100101111111000110111001001110110101111110011100001001000011001010000011011', '0101101001010101001101010100011000111011001000100001110100110011100000001001010110001101010110011100111111100101101111101111011001111111110010111010011011011111011011110000101011010', '11000001110111000001100100001110000111001010000101011011101010111001011100010010010111111111000011111110010111100011100110001001100011111010100111110111001110010', '0100010110100001010101110111100011100100010111111011101001100101111110101011010010101111001000101001111000001110001100011001110010100110101100110100100000001010101101011110011001000101100111001001001110100', '100000100010011111001101010000100110011110001100000010010110110100000111111011010100101111010111001110101000100001111101001110000011010110000010100', '00100110000011100101000110110001000011101000011010101000010001111011100001111111001011100111101000001000000110110001000101111010010010001100111', '0110110100011001110011001111100010101001011111011001011001101101010010101101110101010100001000100100000111101110001001110111000110011101101010100000101', '0011111010010011011101010110100110000011000011100100101011011001110110001110001111000011010111011000110100111111011101110111000010010000011011010011011100000011101100110110100100000010110101110100110101001100111011101001010111011011110100110101110010011011010001010111110011001000010100010101010010110010010110000100110001000011010011000100101011010100100111010']}]}]}")
d=pandas.DataFrame.from_records(copy.deepcopy(paper_as_dict) for _ in range(140_100))
arrow=datasets.Dataset.from_pandas(d)
```
## Expected results
The dataset should be converted without error.
## Actual results
Error `pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets==1.18.4 pandas==1.3.5
- Platform: macOS 11.6 or CentOS Linux 7 (Core)
- Python version: Python 3.9.7
- PyArrow version: pyarrow==3.0.0
I'm getting the same problem with some files, @albertvillanova did you find a solution to this? |
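A possible workaround, not a confirmed fix: assuming the validation error comes from pyarrow converting the whole nested struct column as one oversized chunk, converting the DataFrame in slices and concatenating the pieces avoids that single conversion (upgrading pyarrow beyond 3.0.0 may also help). A minimal sketch:
```python
# Minimal sketch of a possible workaround (not a confirmed fix): convert the
# frame in slices so no single pyarrow conversion builds the full nested column.
import datasets
import pandas


def from_pandas_chunked(df: pandas.DataFrame, chunk_size: int = 10_000) -> datasets.Dataset:
    parts = [
        datasets.Dataset.from_pandas(df.iloc[start : start + chunk_size], preserve_index=False)
        for start in range(0, len(df), chunk_size)
    ]
    return datasets.concatenate_datasets(parts)


# arrow = from_pandas_chunked(d)  # instead of datasets.Dataset.from_pandas(d)
```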
https://github.com/huggingface/datasets/issues/3956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | Hi @amirj, thanks for reporting.
At first sight, your issue seems a version incompatibility between your Elasticsearch client and your Elasticsearch server.
Feel free to have a look at Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility
> Language clients are forward compatible; meaning that clients support communicating with greater or equal minor versions of Elasticsearch. Elasticsearch language clients are only backwards compatible with default distributions and without guarantees made. | ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raises the following error, probably because the new Elasticsearch version is not compatible; the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- Elasticsearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
| 66 | TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raises the following error, probably because the new Elasticsearch version is not compatible; the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- ElaticSearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Hi @amirj, thanks for reporting.
At first sight, your issue seems a version incompatibility between your Elasticsearch client and your Elasticsearch server.
Feel free to have a look at Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility
> Language clients are forward compatible; meaning that clients support communicating with greater or equal minor versions of Elasticsearch. Elasticsearch language clients are only backwards compatible with default distributions and without guarantees made. |
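A quick way to check the client/server pairing described above (a sketch, assuming a local server at http://localhost:9200): read the installed client version and the version the server reports, and compare the major numbers.
```python
# Sketch: compare the installed elasticsearch client with the running server.
from importlib.metadata import version

from elasticsearch import Elasticsearch

client_version = version("elasticsearch")  # e.g. "8.1.0"
server_version = Elasticsearch("http://localhost:9200").info()["version"]["number"]  # e.g. "7.10.2"
print(client_version, server_version)
# If the major numbers differ (8 vs 7 here), the client may reject arguments
# that older examples use, such as host dicts without a "scheme" key.
```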
https://github.com/huggingface/datasets/issues/3956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | @albertvillanova It doesn't seem a version incompatibility between the client and server, since the following code is working:
```
from elasticsearch import Elasticsearch
es_client = Elasticsearch("http://localhost:9200")
dataset.add_elasticsearch_index(column="e1", es_client=es_client, es_index_name="e1_index")
``` | ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raises the following error, probably because the new Elasticsearch version is not compatible; the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- ElaticSearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
| 30 | TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raises the following error, probably because the new Elasticsearch version is not compatible; the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- Elasticsearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
@albertvillanova It doesn't seem a version incompatibility between the client and server, since the following code is working:
```
from elasticsearch import Elasticsearch
es_client = Elasticsearch("http://localhost:9200")
dataset.add_elasticsearch_index(column="e1", es_client=es_client, es_index_name="e1_index")
``` |
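The working snippet above most likely succeeds because the URL string already carries the scheme, which the 8.x client requires. With a host mapping, the equivalent is to add an explicit "scheme" key; a sketch (assuming a plain-HTTP local server and an arbitrary index name):
```python
# Sketch: with an elasticsearch 8.x client, a host mapping needs an explicit "scheme".
from datasets import load_dataset
from elasticsearch import Elasticsearch

es_client = Elasticsearch([{"host": "localhost", "port": 9200, "scheme": "http"}])

squad = load_dataset("squad", split="validation")
squad.add_elasticsearch_index("context", es_client=es_client, es_index_name="context_index")
```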
https://github.com/huggingface/datasets/issues/3956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | Hi @amirj,
I really think it is a version incompatibility issue between your Elasticsearch client and server:
- Your Elasticsearch server NodeConfig expects a positional argument named 'scheme'
- Whereas your Elasticsearch client passes only keyword arguments: `NodeConfig(**options)`
Moreover:
- Looking at your stack trace, I deduce you are using Elasticsearch client **"8"** major version:
- the Elasticsearch file "elasticsearch/_sync/client/utils.py" was created in version "8.0.0a1": https://github.com/elastic/elasticsearch-py/commit/21fa13b0f03b7b27ace9e19a1f763d40bd2e2ba4
- you can check your Elasticsearch client version by running this Python code:
```python
import elasticsearch
print(elasticsearch.__version__)
```
- However, in the *Environment info*, you reported that the major version of your Elasticsearch cluster server is **"7"** ("7.10.2-SNAPSHOT")
Could you please align the Elasticsearch client/server major versions (as pointed out in Elasticsearch docs) and check if the problem persists? | ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raises the following error, probably because the new Elasticsearch version is not compatible; the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- Elasticsearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
| 125 | TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raises the following error, probably because the new Elasticsearch version is not compatible; the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- Elasticsearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Hi @amirj,
I really think it is a version incompatibility issue between your Elasticsearch client and server:
- Your Elasticsearch server NodeConfig expects a positional argument named 'scheme'
- Whereas your Elasticsearch client passes only keyword arguments: `NodeConfig(**options)`
Moreover:
- Looking at your stack trace, I deduce you are using Elasticsearch client **"8"** major version:
- the Elasticsearch file "elasticsearch/_sync/client/utils.py" was created in version "8.0.0a1": https://github.com/elastic/elasticsearch-py/commit/21fa13b0f03b7b27ace9e19a1f763d40bd2e2ba4
- you can check your Elasticsearch client version by running this Python code:
```python
import elasticsearch
print(elasticsearch.__version__)
```
- However, in the *Environment info*, you reported that the major version of your Elasticsearch cluster server is **"7"** ("7.10.2-SNAPSHOT")
Could you please align the Elasticsearch client/server major versions (as pointed out in Elasticsearch docs) and check if the problem persists? |
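One way to act on the suggestion above (a sketch; the pin assumes the 7.10.x server shown in the environment info): install a client from the same major series, e.g. `pip install "elasticsearch>=7.10,<8"`, after which the original tutorial call should work unchanged.
```python
# Sketch, assuming the client has been pinned to the server's major series
# (e.g. pip install "elasticsearch>=7.10,<8" for the 7.10.x server above):
from datasets import load_dataset

squad = load_dataset("squad", split="validation")
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```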
https://github.com/huggingface/datasets/issues/3956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | ```
from elasticsearch import Elasticsearch
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
```
```
TypeError Traceback (most recent call last)
<ipython-input-8-675c6ffe5293> in <module>
1 #es = Elasticsearch([{'host':'localhost', 'port':9200}])
2 from elasticsearch import Elasticsearch
----> 3 es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\_sync\client\__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\_sync\client\utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\_sync\client\utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\_sync\client\utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
``` | ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raises the following error, probably because the new Elasticsearch version is not compatible; the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- Elasticsearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
| 179 | TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raised the following error instead. The installed Elasticsearch version is probably not compatible, though the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- Elasticsearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
```
from elasticsearch import Elasticsearch
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
```
```
TypeError Traceback (most recent call last)
<ipython-input-8-675c6ffe5293> in <module>
1 #es = Elasticsearch([{'host':'localhost', 'port':9200}])
2 from elasticsearch import Elasticsearch
----> 3 es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\_sync\client\__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\_sync\client\utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\_sync\client\utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
C:\ProgramData\Anaconda3\lib\site-packages\elasticsearch\_sync\client\utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
``` |
https://github.com/huggingface/datasets/issues/3956 | TypeError: __init__() missing 1 required positional argument: 'scheme' | @raj713335, thanks for reporting.
Please note that in your code example, you are not using our `datasets` library.
Thus, I think you should report that issue to the `elasticsearch` library: https://github.com/elastic/elasticsearch-py
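For reference, a minimal sketch of what the newer `elasticsearch-py` client (8.x) expects — each host mapping now needs an explicit `scheme`, or a full URL can be passed instead (this only illustrates the client API change; the exact arguments for your cluster may differ):
```python
from elasticsearch import Elasticsearch

# elasticsearch-py 8.x requires the scheme (http/https) in each node mapping,
# and the port must be an integer rather than a string.
es = Elasticsearch([{"host": "localhost", "port": 9200, "scheme": "http"}])

# Equivalently, a full URL can be passed directly.
es = Elasticsearch("http://localhost:9200")
```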
| ## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raised the following error instead. The installed Elasticsearch version is probably not compatible, though the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- Elasticsearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
| 30 | TypeError: __init__() missing 1 required positional argument: 'scheme'
## Describe the bug
Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raised the following error instead. The installed Elasticsearch version is probably not compatible, though the tutorial doesn't say which Elasticsearch versions are supported.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
```
## Expected results
[Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch)
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-8fb51aa33961> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200")
~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3777 """
3778 with self.formatted_as(type=None, columns=[column]):
-> 3779 super().add_elasticsearch_index(
3780 column=column,
3781 index_name=index_name,
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
587 """
588 index_name = index_name if index_name is not None else column
--> 589 es_index = ElasticSearchIndex(
590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
591 )
~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config)
123 from elasticsearch import Elasticsearch # noqa: F811
124
--> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}])
126 self.es_index_name = (
127 es_index_name
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)
310
311 if _transport is None:
--> 312 node_configs = client_node_configs(
313 hosts,
314 cloud_id=cloud_id,
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs)
99 else:
100 assert hosts is not None
--> 101 node_configs = hosts_to_node_configs(hosts)
102
103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts)
142
143 elif isinstance(host, Mapping):
--> 144 node_configs.append(host_mapping_to_node_config(host))
145 else:
146 raise ValueError(
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host)
209 options["path_prefix"] = options.pop("url_prefix")
210
--> 211 return NodeConfig(**options) # type: ignore
212
213
TypeError: __init__() missing 1 required positional argument: 'scheme'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Mac
- Python version: 3.8.0
- PyArrow version: 7.0.0
- Elasticsearch Info:
{
"name" : "byname",
"cluster_name" : "elasticsearch_brew",
"cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA",
"version" : {
"number" : "7.10.2-SNAPSHOT",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "unknown",
"build_date" : "2021-01-16T01:41:27.115673Z",
"build_snapshot" : true,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
@raj713335, thanks for reporting.
Please note that in your code example, you are not using our `datasets` library.
Thus, I think you should report that issue to the `elasticsearch` library: https://github.com/elastic/elasticsearch-py
|
https://github.com/huggingface/datasets/issues/3954 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset | Hi @MatanBenChorin, thanks for reporting.
Please take into account that the preview may take some time to render properly (we are working to reduce this time).
Maybe @severo can give more details on this. | ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes | 35 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes
Hi @MatanBenChorin, thanks for reporting.
Please take into account that the preview may take some time to render properly (we are working to reduce this time).
Maybe @severo can give more details on this. |
https://github.com/huggingface/datasets/issues/3954 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset | I imagine the dataset has been moved to https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1, which still has an issue:
```
Server Error
Status code: 400
Exception: NameError
Message: name 'HebrewSquad' is not defined
``` | ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes | 29 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes
I imagine the dataset has been moved to https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1, which still has an issue:
```
Server Error
Status code: 400
Exception: NameError
Message: name 'HebrewSquad' is not defined
``` |
https://github.com/huggingface/datasets/issues/3954 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset | The issue is not related to the dataset viewer but to the loading script (cc @albertvillanova @lhoestq @mariosasko)
```python
>>> import datasets as ds
>>> hf_token = "hf_..." # <- required because the dataset is gated
>>> d = ds.load_dataset('tdklab/Hebrew_Squad_v1', use_auth_token=hf_token)
...
NameError: name 'HebrewSquad' is not defined
``` | ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes | 49 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes
The issue is not related to the dataset viewer but to the loading script (cc @albertvillanova @lhoestq @mariosasko)
```python
>>> import datasets as ds
>>> hf_token = "hf_..." # <- required because the dataset is gated
>>> d = ds.load_dataset('tdklab/Hebrew_Squad_v1', use_auth_token=hf_token)
...
NameError: name 'HebrewSquad' is not defined
``` |
https://github.com/huggingface/datasets/issues/3954 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset | Yes indeed there is an error in [Hebrew_Squad_v1.py:L40](https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1/blob/main/Hebrew_Squad_v1.py#L40)
Here is the fix @MatanBenChorin :
```diff
- HebrewSquad(
+ HebrewSquadConfig(
``` | ## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes | 20 | The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1'
**Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true
The dataset preview is not available for this dataset.
Am I the one who added this dataset ? Yes
Yes indeed there is an error in [Hebrew_Squad_v1.py:L40](https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1/blob/main/Hebrew_Squad_v1.py#L40)
Here is the fix @MatanBenChorin :
```diff
- HebrewSquad(
+ HebrewSquadConfig(
``` |
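For context, a minimal sketch of the usual loading-script pattern (the field values here are illustrative, not the actual script's contents) — `BUILDER_CONFIGS` must instantiate the *config* class, which is why the class name in the script has to match the fix above:
```python
import datasets


class HebrewSquadConfig(datasets.BuilderConfig):
    """BuilderConfig for a Hebrew SQuAD-style dataset (sketch)."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)


class HebrewSquad(datasets.GeneratorBasedBuilder):
    # The list entries are instances of the config class, not the builder class.
    BUILDER_CONFIGS = [
        HebrewSquadConfig(
            name="default",
            version=datasets.Version("1.1.0"),
            description="Hebrew SQuAD v1.1 (illustrative description)",
        ),
    ]
```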
https://github.com/huggingface/datasets/issues/3952 | Checksum error for glue sst2, stsb, rte etc datasets | Hi, @ravindra-ut.
I'm sorry but I can't reproduce your problem:
```python
In [1]: from datasets import load_dataset
In [2]: ds = load_dataset("glue", "sst2")
Downloading builder script: 28.8kB [00:00, 11.6MB/s]
Downloading metadata: 28.7kB [00:00, 12.9MB/s]
Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...
Downloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 7.44M/7.44M [00:01<00:00, 5.82MB/s]
Dataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data.
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 895.96it/s]
In [3]: ds
Out[2]:
DatasetDict({
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 67349
})
validation: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 872
})
test: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1821
})
})
```
Moreover, I see in your traceback that your error was for a URL at https://firebasestorage.googleapis.com
However, the URLs were updated on Sep 16, 2020 (`datasets` version 1.0.2) to https://dl.fbaipublicfiles.com: https://github.com/huggingface/datasets/commit/2f03041a21c03abaececb911760c3fe4f420c229
Could you please try to update `datasets`
```shell
pip install -U datasets
```
and then force redownload
```python
ds = load_dataset("glue", "sst2", download_mode="force_redownload")
```
to update the cache?
Please, feel free to reopen this issue if the problem persists. | ## Describe the bug
Checksum error for glue sst2, stsb, rte etc datasets
## Steps to reproduce the bug
```python
>>> nlp.load_dataset('glue', 'sst2')
Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown sizetotal: 11.90 MiB) to
Downloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 73.0/73.0 [00:00<00:00, 18.2kB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare
verify_checksums(
File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8']
```
## Expected results
dataset load should succeed without checksum error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare
verify_checksums(
File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8']
```
## Environment info
- `datasets` version: '1.18.3'
- Platform: Mac OS
- Python version: Python 3.8.9
- PyArrow version: '7.0.0'
| 179 | Checksum error for glue sst2, stsb, rte etc datasets
## Describe the bug
Checksum error for glue sst2, stsb, rte etc datasets
## Steps to reproduce the bug
```python
>>> nlp.load_dataset('glue', 'sst2')
Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown sizetotal: 11.90 MiB) to
Downloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 73.0/73.0 [00:00<00:00, 18.2kB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare
verify_checksums(
File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8']
```
## Expected results
dataset load should succeed without checksum error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare
verify_checksums(
File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8']
```
## Environment info
- `datasets` version: '1.18.3'
- Platform: Mac OS
- Python version: Python 3.8.9
- PyArrow version: '7.0.0'
Hi, @ravindra-ut.
I'm sorry but I can't reproduce your problem:
```python
In [1]: from datasets import load_dataset
In [2]: ds = load_dataset("glue", "sst2")
Downloading builder script: 28.8kB [00:00, 11.6MB/s]
Downloading metadata: 28.7kB [00:00, 12.9MB/s]
Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...
Downloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 7.44M/7.44M [00:01<00:00, 5.82MB/s]
Dataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data.
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 895.96it/s]
In [3]: ds
Out[2]:
DatasetDict({
train: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 67349
})
validation: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 872
})
test: Dataset({
features: ['sentence', 'label', 'idx'],
num_rows: 1821
})
})
```
Moreover, I see in your traceback that your error was for a URL at https://firebasestorage.googleapis.com
However, the URLs were updated on Sep 16, 2020 (`datasets` version 1.0.2) to https://dl.fbaipublicfiles.com: https://github.com/huggingface/datasets/commit/2f03041a21c03abaececb911760c3fe4f420c229
Could you please try to update `datasets`
```shell
pip install -U datasets
```
and then force redownload
```python
ds = load_dataset("glue", "sst2", download_mode="force_redownload")
```
to update the cache?
Please, feel free to reopen this issue if the problem persists. |
https://github.com/huggingface/datasets/issues/3951 | Forked streaming datasets try to `open` data urls rather than use network | Thanks for reporting this second issue as well. We definitely want to make streaming datasets work fully in a distributed setup and with the best performance. Right now they only support a single process.
In this issue it seems that the streaming capabilities we offer to dataset builders are not transferred to the forked processes (so they fail to open remote files and start streaming data from them). In particular, `open` is supposed to be mocked by our `xopen` function, an extended `open` that supports remote files. Let me try to fix this. | ## Describe the bug
Building on #3950, if you bypass the pickling problem you still can't use the dataset: the forked worker processes end up calling the builtin `open` on the data URLs as if they were local files, instead of fetching them over the network.
## Steps to reproduce the bug
```python
from multiprocessing import freeze_support
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
import torch.utils.data
# work around #3950
class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset):
pass
def _ensure_format(v: datasets.IterableDataset) -> datasets.IterableDataset:
return TorchIterableDataset(v._ex_iterable, v.info, v.split, "torch", v._shuffling)
if __name__ == '__main__':
freeze_support()
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
ds = _ensure_format(ds)
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
```
## Expected results
I'd expect the dataset to load the url correctly and produce examples.
## Actual results
```
warnings.warn(
***** Running training *****
Num examples = 8000
Num Epochs = 9223372036854775807
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 1000
0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 22, in <module>
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train
for step, inputs in enumerate(epoch_iterator):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 497, in __iter__
for key, example in self._iter():
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 494, in _iter
yield from ex_iterable
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 87, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/dlwh/.cache/huggingface/modules/datasets_modules/datasets/oscar/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2/oscar.py", line 358, in _generate_examples
with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 6932) is killed by signal: Terminated: 15.
0%| | 0/1000 [00:02<?, ?it/s]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| 95 | Forked streaming datasets try to `open` data urls rather than use network
## Describe the bug
Building on #3950, if you bypass the pickling problem you still can't use the dataset: the forked worker processes end up calling the builtin `open` on the data URLs as if they were local files, instead of fetching them over the network.
## Steps to reproduce the bug
```python
from multiprocessing import freeze_support
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
import torch.utils.data
# work around #3950
class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset):
pass
def _ensure_format(v: datasets.IterableDataset) -> datasets.IterableDataset:
return TorchIterableDataset(v._ex_iterable, v.info, v.split, "torch", v._shuffling)
if __name__ == '__main__':
freeze_support()
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
ds = _ensure_format(ds)
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
```
## Expected results
I'd expect the dataset to load the url correctly and produce examples.
## Actual results
```
warnings.warn(
***** Running training *****
Num examples = 8000
Num Epochs = 9223372036854775807
Instantaneous batch size per device = 8
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 1
Total optimization steps = 1000
0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 22, in <module>
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train
for step, inputs in enumerate(epoch_iterator):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 497, in __iter__
for key, example in self._iter():
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 494, in _iter
yield from ex_iterable
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 87, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/dlwh/.cache/huggingface/modules/datasets_modules/datasets/oscar/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2/oscar.py", line 358, in _generate_examples
with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll
pid, sts = os.waitpid(self.pid, flag)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 6932) is killed by signal: Terminated: 15.
0%| | 0/1000 [00:02<?, ?it/s]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
Thanks for reporting this second issue as well. We definitely want to make streaming datasets work fully in a distributed setup and with the best performance. Right now they only support a single process.
In this issue it seems that the streaming capabilities we offer to dataset builders are not transferred to the forked processes (so they fail to open remote files and start streaming data from them). In particular, `open` is supposed to be mocked by our `xopen` function, an extended `open` that supports remote files. Let me try to fix this. |
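To illustrate the mechanism described in the comment above (a minimal sketch assuming an fsspec-backed extended open — not the actual `datasets` implementation), the idea is that a builder module's bare `open(...)` calls are redirected to an `xopen` that also understands remote URLs, and that this patching is process-local state which has to be re-applied in each forked DataLoader worker:
```python
import importlib

import fsspec


def xopen(file, mode="rb", *args, **kwargs):
    """Extended open (sketch): local paths use the builtin open, URLs are streamed."""
    if isinstance(file, str) and "://" in file:
        return fsspec.open(file, mode, *args, **kwargs).open()
    return open(file, mode, *args, **kwargs)


def patch_dataset_module(module_name: str) -> None:
    """Make bare `open(...)` calls inside a dataset script resolve to `xopen` (sketch)."""
    module = importlib.import_module(module_name)
    module.open = xopen  # adds `open` to the module globals, shadowing the builtin
```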
https://github.com/huggingface/datasets/issues/3950 | Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1 | Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too
We should definitely make `TorchIterableDataset` picklable by moving it into the main module code instead of defining it inside a function. If you'd like to contribute, feel free to open a Pull Request :)
I'm also taking a look at your second issue, which is more technical | ## Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
## Steps to reproduce the bug
```python
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
```
## Expected results
For this code I'd expect a crash related to not having preprocessed the data, but instead we get a pickling error.
## Actual results
```
0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 7, in <module>
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train
for step, inputs in enumerate(epoch_iterator):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__
return self._get_iterator()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 918, in __init__
w.start()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset'
0%| | 0/1000 [00:00<?, ?it/s]
```
This immediate crash can be fixed by not using a local class to make the `TorchIterableDataset`. (Note that you have to call `with_format("torch")` or you get an exception because the dataset has no `len`.) However, any lambdas etc. used as maps will also trigger this crash. A more permanent fix would be to move away from `multiprocessing` and instead use something like pathos or multiprocessing_on_dill (https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together).
Note that if you bypass this crash you get another crash. (I'll file a separate bug).
## Environment info
- `datasets` version: 2.0.0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| 55 | Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1
## Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
## Steps to reproduce the bug
```python
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
```
## Expected results
For this code I'd expect a crash related to not having preprocessed the data, but instead we get a pickling error.
## Actual results
```
0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last):
File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 7, in <module>
Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train
for step, inputs in enumerate(epoch_iterator):
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__
return self._get_iterator()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 918, in __init__
w.start()
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset'
0%| | 0/1000 [00:00<?, ?it/s]
```
This immediate crash can be fixed by not using a local class to make the `TorchIterableDataset`. (Note that you have to call `with_format("torch")` or you get an exception because the dataset has no `len`.) However, any lambdas etc. used as maps will also trigger this crash. A more permanent fix would be to move away from `multiprocessing` and instead use something like pathos or multiprocessing_on_dill (https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together).
Note that if you bypass this crash you get another crash. (I'll file a separate bug).
## Environment info
- `datasets` version: 2.0.0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too
We should definitely make `TorchIterableDataset` picklable by moving it into the main module code instead of defining it inside a function. If you'd like to contribute, feel free to open a Pull Request :)
I'm also taking a look at your second issue, which is more technical |
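A minimal sketch of the pickling constraint behind this — `multiprocessing` (and therefore PyTorch DataLoader workers) uses the standard pickler, which can serialize a module-level class by reference but cannot resolve a class defined inside a function:
```python
import pickle

import datasets
import torch.utils.data


# Module-level definition: picklable, so it survives the trip to worker processes.
class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset):
    pass


def make_local_class():
    # Local definition: pickle stores only the qualified name
    # "make_local_class.<locals>.LocalIterableDataset", which cannot be
    # looked up again when a worker process unpickles it.
    class LocalIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset):
        pass

    return LocalIterableDataset


pickle.dumps(TorchIterableDataset)  # works
try:
    pickle.dumps(make_local_class())
except Exception as err:  # AttributeError / PicklingError, as in the report above
    print(type(err).__name__, err)
```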
https://github.com/huggingface/datasets/issues/3942 | reddit_tifu dataset: Checksums didn't match for dataset source files | Hi @XingxingZhang,
We have already fixed this. You should update `datasets` version to at least 1.18.4:
```shell
pip install -U datasets
```
And then force the redownload:
```python
load_dataset("...", download_mode="force_redownload")
```
Duplicate of:
- #3773 | ## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu', 'short')
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
| 35 | reddit_tifu dataset: Checksums didn't match for dataset source files
## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu', 'short')
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
Hi @XingxingZhang,
We have already fixed this. You should update `datasets` version to at least 1.18.4:
```shell
pip install -U datasets
```
And then force the redownload:
```python
load_dataset("...", download_mode="force_redownload")
```
Duplicate of:
- #3773 |
https://github.com/huggingface/datasets/issues/3942 | reddit_tifu dataset: Checksums didn't match for dataset source files | Thanks @albertvillanova. Upgrading to 1.18.4 and using `load_dataset("...", download_mode="force_redownload")` fixed
the bug.
Using the following, as you suggested in another thread, can also fix the bug:
```
pip install git+https://github.com/huggingface/datasets#egg=datasets
```
| ## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu', 'short')
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
| 33 | reddit_tifu dataset: Checksums didn't match for dataset source files
## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu', 'short')
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
Thanks @albertvillanova. Upgrading to 1.18.4 and using `load_dataset("...", download_mode="force_redownload")` fixed
the bug.
Using the following, as you suggested in another thread, can also fix the bug:
```
pip install git+https://github.com/huggingface/datasets#egg=datasets
```
|
https://github.com/huggingface/datasets/issues/3942 | reddit_tifu dataset: Checksums didn't match for dataset source files | The latter solution (installing from GitHub) was proposed because the fix was not released yet. But last week we made the 1.18.4 patch release (with the fix), so no longer necessary to install from GitHub.
You can now install from PyPI, as usual:
```shell
pip install -U datasets
```
| ## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu', 'short')
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
| 49 | reddit_tifu dataset: Checksums didn't match for dataset source files
## Describe the bug
When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files"
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
# load_dataset('billsum')
load_dataset('reddit_tifu', 'short')
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
The latter solution (installing from GitHub) was proposed because the fix was not released yet. But last week we made the 1.18.4 patch release (with the fix), so no longer necessary to install from GitHub.
You can now install from PyPI, as usual:
```shell
pip install -U datasets
```
|
https://github.com/huggingface/datasets/issues/3941 | billsum dataset: Checksums didn't match for dataset source files: | Hi @XingxingZhang, thanks for reporting.
This was due to a change in Google Drive service:
- #3786
We have already fixed it:
- #3787
You should update `datasets` version to at least 1.18.4:
```shell
pip install -U datasets
```
And then force the redownload:
```python
load_dataset("...", download_mode="force_redownload")
``` | ## Describe the bug
When loading the `billsum` dataset, it throws the exception "Checksums didn't match for dataset source files"
```
File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1g89WgFHMRbr4QrvA0ngh26PY081Nv3lx']
```
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
load_dataset('billsum')
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
| 48 | billsum dataset: Checksums didn't match for dataset source files:
## Describe the bug
When loading the `billsum` dataset, it throws the exception "Checksums didn't match for dataset source files"
```
File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1g89WgFHMRbr4QrvA0ngh26PY081Nv3lx']
```
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
print(datasets.__version__)
load_dataset('billsum')
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: mac os
- Python version: Python 3.7.6
- PyArrow version: 3.0.0
Hi @XingxingZhang, thanks for reporting.
This was due to a change in Google Drive service:
- #3786
We have already fixed it:
- #3787
You should update `datasets` version to at least 1.18.4:
```shell
pip install -U datasets
```
And then force the redownload:
```python
load_dataset("...", download_mode="force_redownload")
``` |
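Concretely, for the dataset in this issue that would be (a small sketch):
```python
from datasets import load_dataset

dataset = load_dataset("billsum", download_mode="force_redownload")
```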
https://github.com/huggingface/datasets/issues/3939 | Source links broken | Thanks for reporting @qqaatw.
@mishig25 @sgugger do you think this can be tweaked in the new doc framework?
- From: https://github.com/huggingface/datasets/blob/v2.0.0/
- To: https://github.com/huggingface/datasets/blob/2.0.0/ | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| 24 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
Thanks for reporting @qqaatw.
@mishig25 @sgugger do you think this can be tweaked in the new doc framework?
- From: https://github.com/huggingface/datasets/blob/v2.0.0/
- To: https://github.com/huggingface/datasets/blob/2.0.0/ |
https://github.com/huggingface/datasets/issues/3939 | Source links broken | @qqaatw thanks a lot for notifying about this issue!
in comparison, transformers tags start with `v` like [this one](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/bert/configuration_bert.py#L54).
Therefore, we have to do one of 2 options below:
1. Make necessary changes on doc-builder side
OR
2. Make [datasets tags](https://github.com/huggingface/datasets/tags) start with `v`, just like [transformers](https://github.com/huggingface/transformers/tags) (so that tag naming can be consistent amongst hf repos)
I'll let you decide @albertvillanova @lhoestq @sgugger | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| 64 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
@qqaatw thanks a lot for notifying about this issue!
in comparison, transformers tags start with `v` like [this one](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/bert/configuration_bert.py#L54).
Therefore, we have to do one of 2 options below:
1. Make necessary changes on doc-builder side
OR
2. Make [datasets tags](https://github.com/huggingface/datasets/tags) start with `v`, just like [transformers](https://github.com/huggingface/transformers/tags) (so that tag naming can be consistent amongst hf repos)
I'll let you decide @albertvillanova @lhoestq @sgugger |
https://github.com/huggingface/datasets/issues/3939 | Source links broken | I think option 2 is the easiest and would provide harmony in the HF ecosystem but we can also add a doc config parameter to decide whether the default version has a v or not if `datasets` folks prefer their tags without a v :-) | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| 45 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
I think option 2 is the easiest and would provide harmony in the HF ecosystem but we can also add a doc config parameter to decide whether the default version has a v or not if `datasets` folks prefer their tags without a v :-) |
https://github.com/huggingface/datasets/issues/3939 | Source links broken | For me it is OK to conform to the rest of libraries and tag/release with a preceding "v", rather than adding an extra argument to the doc builder just for `datasets`.
Let me know if it is also OK for you @lhoestq. | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| 42 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
For me it is OK to conform to the rest of libraries and tag/release with a preceding "v", rather than adding an extra argument to the doc builder just for `datasets`.
Let me know if it is also OK for you @lhoestq. |
https://github.com/huggingface/datasets/issues/3939 | Source links broken | We could add a tag for each release without a 'v' but it could be confusing on github to see both tags `v2.0.0` and `2.0.0` IMO (not sure if many users check them though). Removing the tags without 'v' would break our versioning for github datasets: the library looks for dataset scripts at the URLs like `https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}` where `revision` is equal to `datasets.__version__` (which doesn't start with a 'v') for all released versions of `datasets`.
I think we could just have a parameter for the documentation - and having different URL schemes for the source links that the users don't even see (they simply click on a button) is probably fine | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| 111 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
We could add a tag for each release without a 'v' but it could be confusing on github to see both tags `v2.0.0` and `2.0.0` IMO (not sure if many users check them though). Removing the tags without 'v' would break our versioning for github datasets: the library looks for dataset scripts at the URLs like `https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}` where `revision` is equal to `datasets.__version__` (which doesn't start with a 'v') for all released versions of `datasets`.
I think we could just have a parameter for the documentation - and having different URL schemes for the source links that the users don't even see (they simply click on a button) is probably fine
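To make the versioning constraint above concrete, here is a small illustrative sketch of how such a script URL is built from the installed version (the dataset name is hypothetical and the real resolution logic in `datasets` is more involved):
```python
import datasets

# hypothetical dataset script name, for illustration only
revision = datasets.__version__              # e.g. "2.0.0" (no leading "v")
path, name = "squad", "squad.py"
url = f"https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}"
print(url)
```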
https://github.com/huggingface/datasets/issues/3939 | Source links broken | This is done in #3943 to go along with [doc-builder#146](https://github.com/huggingface/doc-builder/pull/146).
Note that this will only work for future versions, so once those two are merged, the actual v2.0.0 doc should be fixed. The easiest is to cherry-pick this commit on the v2.0.0 release branch (or on a new branch created from the 2.0.0 tag, with a name that triggers the doc building job, for instance v2.0.0-release) | ## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| 66 | Source links broken
## Describe the bug
The source links of v2.0.0 docs are broken:
For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`
here, the `v2.0.0` should be `2.0.0`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747`
## Actual results
Described above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
This is done in #3943 to go along with [doc-builder#146](https://github.com/huggingface/doc-builder/pull/146).
Note that this will only work for future versions, so once those two are merged, the actual v2.0.0 doc should be fixed. The easiest is to cherry-pick this commit on the v2.0.0 release branch (or on a new branch created from the 2.0.0 tag, with a name that triggers the doc building job, for instance v2.0.0-release) |
https://github.com/huggingface/datasets/issues/3937 | Missing languages in lvwerra/github-code dataset | That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the pipeline anyway and will double check the query.
Thanks for reporting this @Eytan-S! | Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the future?
Thanks! | 57 | Missing languages in lvwerra/github-code dataset
Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the future?
Thanks!
That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the pipeline anyway and will double check the query.
Thanks for reporting this @Eytan-S! |
https://github.com/huggingface/datasets/issues/3937 | Missing languages in lvwerra/github-code dataset | Can confirm that the two languages are indeed missing from the dataset. Here are the file counts per language:
```Python
{'Assembly': 82847,
'Batchfile': 236755,
'C': 14127969,
'C#': 6793439,
'C++': 7368473,
'CMake': 175076,
'CSS': 1733625,
'Dockerfile': 331966,
'FORTRAN': 141963,
'GO': 2259363,
'Haskell': 340521,
'HTML': 11165464,
'Java': 19515696,
'JavaScript': 11829024,
'Julia': 58177,
'Lua': 576279,
'Makefile': 679338,
'Markdown': 8454049,
'PHP': 11181930,
'Perl': 497490,
'PowerShell': 136827,
'Python': 7203553,
'Ruby': 4479767,
'Rust': 321765,
'SQL': 655657,
'Scala': 0,
'Shell': 1382786,
'TypeScript': 0,
'TeX': 250764,
'Visual Basic': 155371}
``` | Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the future?
Thanks! | 82 | Missing languages in lvwerra/github-code dataset
Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the future?
Thanks!
Can confirm that the two languages are indeed missing from the dataset. Here are the file counts per language:
```Python
{'Assembly': 82847,
'Batchfile': 236755,
'C': 14127969,
'C#': 6793439,
'C++': 7368473,
'CMake': 175076,
'CSS': 1733625,
'Dockerfile': 331966,
'FORTRAN': 141963,
'GO': 2259363,
'Haskell': 340521,
'HTML': 11165464,
'Java': 19515696,
'JavaScript': 11829024,
'Julia': 58177,
'Lua': 576279,
'Makefile': 679338,
'Markdown': 8454049,
'PHP': 11181930,
'Perl': 497490,
'PowerShell': 136827,
'Python': 7203553,
'Ruby': 4479767,
'Rust': 321765,
'SQL': 655657,
'Scala': 0,
'Shell': 1382786,
'TypeScript': 0,
'TeX': 250764,
'Visual Basic': 155371}
``` |
https://github.com/huggingface/datasets/issues/3937 | Missing languages in lvwerra/github-code dataset | @Eytan-S check out v1.1 of the `github-code` dataset where the issue should be fixed:
| | Language |File Count| Size (GB)|
|---:|:-------------|---------:|-------:|
| 0 | Java | 19548190 | 107.7 |
| 1 | C | 14143113 | 183.83 |
| 2 | JavaScript | 11839883 | 87.82 |
| 3 | HTML | 11178557 | 118.12 |
| 4 | PHP | 11177610 | 61.41 |
| 5 | Markdown | 8464626 | 23.09 |
| 6 | C++ | 7380520 | 87.73 |
| 7 | Python | 7226626 | 52.03 |
| 8 | C# | 6811652 | 36.83 |
| 9 | Ruby | 4473331 | 10.95 |
| 10 | GO | 2265436 | 19.28 |
| 11 | TypeScript | 1940406 | 24.59 |
| 12 | CSS | 1734406 | 22.67 |
| 13 | Shell | 1385648 | 3.01 |
| 14 | Scala | 835755 | 3.87 |
| 15 | Makefile | 679430 | 2.92 |
| 16 | SQL | 656671 | 5.67 |
| 17 | Lua | 578554 | 2.81 |
| 18 | Perl | 497949 | 4.7 |
| 19 | Dockerfile | 366505 | 0.71 |
| 20 | Haskell | 340623 | 1.85 |
| 21 | Rust | 322431 | 2.68 |
| 22 | TeX | 251015 | 2.15 |
| 23 | Batchfile | 236945 | 0.7 |
| 24 | CMake | 175282 | 0.54 |
| 25 | Visual Basic | 155652 | 1.91 |
| 26 | FORTRAN | 142038 | 1.62 |
| 27 | PowerShell | 136846 | 0.69 |
| 28 | Assembly | 82905 | 0.78 |
| 29 | Julia | 58317 | 0.29 | | Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the future?
Thanks! | 292 | Missing languages in lvwerra/github-code dataset
Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the future?
Thanks!
@Eytan-S check out v1.1 of the `github-code` dataset where the issue should be fixed:
| | Language |File Count| Size (GB)|
|---:|:-------------|---------:|-------:|
| 0 | Java | 19548190 | 107.7 |
| 1 | C | 14143113 | 183.83 |
| 2 | JavaScript | 11839883 | 87.82 |
| 3 | HTML | 11178557 | 118.12 |
| 4 | PHP | 11177610 | 61.41 |
| 5 | Markdown | 8464626 | 23.09 |
| 6 | C++ | 7380520 | 87.73 |
| 7 | Python | 7226626 | 52.03 |
| 8 | C# | 6811652 | 36.83 |
| 9 | Ruby | 4473331 | 10.95 |
| 10 | GO | 2265436 | 19.28 |
| 11 | TypeScript | 1940406 | 24.59 |
| 12 | CSS | 1734406 | 22.67 |
| 13 | Shell | 1385648 | 3.01 |
| 14 | Scala | 835755 | 3.87 |
| 15 | Makefile | 679430 | 2.92 |
| 16 | SQL | 656671 | 5.67 |
| 17 | Lua | 578554 | 2.81 |
| 18 | Perl | 497949 | 4.7 |
| 19 | Dockerfile | 366505 | 0.71 |
| 20 | Haskell | 340623 | 1.85 |
| 21 | Rust | 322431 | 2.68 |
| 22 | TeX | 251015 | 2.15 |
| 23 | Batchfile | 236945 | 0.7 |
| 24 | CMake | 175282 | 0.54 |
| 25 | Visual Basic | 155652 | 1.91 |
| 26 | FORTRAN | 142038 | 1.62 |
| 27 | PowerShell | 136846 | 0.69 |
| 28 | Assembly | 82905 | 0.78 |
| 29 | Julia | 58317 | 0.29 | |
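As a usage note for the updated dataset, a sketch of loading only the newly included languages; the `languages` keyword is taken from the dataset card and, like the exact repo id, should be treated as an assumption here:
```python
from datasets import load_dataset

# streaming avoids downloading the full ~1 TB dataset just to inspect a few rows
ds = load_dataset("lvwerra/github-code", languages=["TypeScript", "Scala"],
                  split="train", streaming=True)
print(next(iter(ds))["language"])
```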
https://github.com/huggingface/datasets/issues/3929 | Load a local dataset twice | Hi @caush, thanks for reporting:
In order to load local CSV files, you can use our "csv" loading script: https://huggingface.co/docs/datasets/loading#csv
```python
dataset = load_dataset("csv", data_files=["data/file1.csv", "data/file2.csv"])
```
OR:
```python
dataset = load_dataset("csv", data_dir="data")
```
Alternatively, you may also use:
```python
dataset = load_dataset("data") | ## Describe the bug
Loading a local "dataset" composed of two CSV files returns each row twice.
## Steps to reproduce the bug
Put the two attached files in a directory named "Data".
Then in python:
import datasets as ds
ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'})
## Expected results
Should give something like (because files have only one data row):
Title, clicks
Truc et astuce, 123
Machin, 12
## Actual results
Gives
Title, clicks
Truc et astuce, 123
Machin, 12
Truc et astuce, 123
Machin, 12
## Environment info
[file1.csv](https://github.com/huggingface/datasets/files/8256322/file1.csv)
[file2.csv](https://github.com/huggingface/datasets/files/8256323/file2.csv)
- `datasets` version: 2.0.0
- Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1 | 43 | Load a local dataset twice
## Describe the bug
Loading a local "dataset" composed of two CSV files returns each row twice.
## Steps to reproduce the bug
Put the two attached files in a directory named "Data".
Then in python:
import datasets as ds
ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'})
## Expected results
Should give something like (because files have only one data row):
Title, clicks
Truc et astuce, 123
Machin, 12
## Actual results
Gives
Title, clicks
Truc et astuce, 123
Machin, 12
Truc et astuce, 123
Machin, 12
## Environment info
[file1.csv](https://github.com/huggingface/datasets/files/8256322/file1.csv)
[file2.csv](https://github.com/huggingface/datasets/files/8256323/file2.csv)
- `datasets` version: 2.0.0
- Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
Hi @caush, thanks for reporting:
In order to load local CSV files, you can use our "csv" loading script: https://huggingface.co/docs/datasets/loading#csv
```python
dataset = load_dataset("csv", data_files=["data/file1.csv", "data/file2.csv"])
```
OR:
```python
dataset = load_dataset("csv", data_dir="data")
```
Alternatively, you may also use:
```python
dataset = load_dataset("data")
```
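For the two files in this issue, a slightly more explicit sketch that names the split and combines both CSVs into it, so each row should appear exactly once (the file paths are taken from the issue and assumed to live under `Data/`):
```python
from datasets import load_dataset

data_files = {"train": ["Data/file1.csv", "Data/file2.csv"]}
dataset = load_dataset("csv", data_files=data_files)
print(dataset["train"].num_rows)
```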
https://github.com/huggingface/datasets/issues/3928 | Frugal score deprecations | Hi @Ierezell, thanks for reporting.
I'm making a PR to suppress those logs from the terminal. | ## Describe the bug
The frugalscore metric produces a really verbose output, with warnings that could easily be silenced.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets.load import load_metric
frugal = load_metric("frugalscore")
frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"])
```
## Expected results
A clear and concise description of the expected results.
```
{'scores': [0.9946]}
```
## Actual results
Specify the actual results or traceback.
```
PyTorch: setting up devices
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 864.09ba/s]
Using amp half precision backend
The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1. If sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Prediction *****
Num examples = 1
Batch size = 64
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 4644.85it/s]
{'scores': [0.9946]}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
| 16 | Frugal score deprecations
## Describe the bug
The frugalscore metric produces a really verbose output, with warnings that could easily be silenced.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets.load import load_metric
frugal = load_metric("frugalscore")
frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"])
```
## Expected results
A clear and concise description of the expected results.
```
{'scores': [0.9946]}
```
## Actual results
Specify the actual results or traceback.
```
PyTorch: setting up devices
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 864.09ba/s]
Using amp half precision backend
The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1. If sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Prediction *****
Num examples = 1
Batch size = 64
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 4644.85it/s]
{'scores': [0.9946]}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
Hi @Ierezell, thanks for reporting.
I'm making a PR to suppress those logs from the terminal. |
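Until that lands, a possible user-side workaround is to lower the log verbosity before calling the metric. A sketch; the logging helpers below exist in recent `transformers`/`datasets` releases, but check them against your installed versions:
```python
import datasets
import transformers

# silence the info/warning logs emitted while frugalscore runs its internal Trainer
transformers.logging.set_verbosity_error()
datasets.logging.set_verbosity_error()

frugal = datasets.load_metric("frugalscore")
print(frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"]))
```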
https://github.com/huggingface/datasets/issues/3920 | 'datasets.features' is not a package | Hi @Arij-Aladel,
You are using a very old version of our library `datasets`: 1.8.0
Current version is 2.0.0 (and the previous one was 1.18.4)
Please, try to update `datasets` library and check if the problem persists:
```shell
/env/bin/pip install -U datasets | @albertvillanova
python 3.9
os: ubuntu 20.04
In conda environment
torch installed by
```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html```
datasets package is installed by
```
/env/bin/pip install datasets==1.8.0
```
While running the code I get this error
```
[6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
[6]<stderr>: return super().find_class(mod_name, name)
[6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
precisely this error appears when
torch.load('data_file.pt')
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
Why am I getting this error?
| 41 | 'datasets.features' is not a package
@albertvillanova
python 3.9
os: ubuntu 20.04
In conda environment
torch installed by
```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html```
datasets package is installed by
```
/env/bin/pip install datasets==1.8.0
```
While running the code I get this error
```
[6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
[6]<stderr>: return super().find_class(mod_name, name)
[6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
precisely this error appears when
torch.load('data_file.pt')
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
Why am I getting this error?
Hi @Arij-Aladel,
You are using a very old version of our library `datasets`: 1.8.0
Current version is 2.0.0 (and the previous one was 1.18.4)
Please, try to update `datasets` library and check if the problem persists:
```shell
/env/bin/pip install -U datasets |
https://github.com/huggingface/datasets/issues/3920 | 'datasets.features' is not a package | The problem is I can't: I built my project on this version and an old version of transformers. I have preprocessed the data again to use it. Thanks for your reply | @albertvillanova
python 3.9
os: ubuntu 20.04
In conda environment
torch installed by
```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html```
datasets package is installed by
```
/env/bin/pip install datasets==1.8.0
```
While running the code I get this error
```
[6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
[6]<stderr>: return super().find_class(mod_name, name)
[6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
precisely this error appears when
torch.load('data_file.pt')
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
Why am I getting this error?
| 31 | 'datasets.features' is not a package
@albertvillanova
python 3.9
os: ubuntu 20.04
In conda environment
torch installed by
```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html```
datasets package is installed by
```
/env/bin/pip install datasets==1.8.0
```
While running the code I get this error
```
[6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
[6]<stderr>: return super().find_class(mod_name, name)
[6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
precisely this error appears when
torch.load('data_file.pt')
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
Why am I getting this error?
The problem is I can't: I built my project on this version and an old version of transformers. I have preprocessed the data again to use it. Thanks for your reply
https://github.com/huggingface/datasets/issues/3919 | AttributeError: 'DatasetDict' object has no attribute 'features' | You are likely trying to get the `features` from a `DatasetDict`, a dictionary containing `Datasets`. You probably first want to index into a particular split from your `DatasetDict` i.e. `dataset['train'].features`.
For example
```python
ds = load_dataset('mnist')
ds.features
```
Returns
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-39-791c1f9df6c2>](https://localhost:8080/#) in <module>()
----> 1 ds.features
AttributeError: 'DatasetDict' object has no attribute 'features'
```
If we look at the dataset variable, we see it is a `DatasetDict`:
```python
print(ds)
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 60000
})
test: Dataset({
features: ['image', 'label'],
num_rows: 10000
})
})
```
We can grab the features from a split by indexing into `train`:
```python
ds['train'].features
{'image': Image(decode=True, id=None),
'label': ClassLabel(num_classes=10, names=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], id=None)}
```
Hope that helps | ## Describe the bug
Receiving the error when trying to check for Dataset features
## Steps to reproduce the bug
from datasets import Dataset
dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']])
dataset.features
## Expected results
A clear and concise description of the expected results.
## Actual results
Getting the following error
AttributeError: 'DatasetDict' object has no attribute 'features'
## Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 1.18.4
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- PyArrow version: 6.0.1
| 129 | AttributeError: 'DatasetDict' object has no attribute 'features'
## Describe the bug
Receiving the error when trying to check for Dataset features
## Steps to reproduce the bug
from datasets import Dataset
dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']])
dataset.features
## Expected results
A clear and concise description of the expected results.
## Actual results
Getting the following error
AttributeError: 'DatasetDict' object has no attribute 'features'
## Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 1.18.4
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- PyArrow version: 6.0.1
You are likely trying to get the `features` from a `DatasetDict`, a dictionary containing `Datasets`. You probably first want to index into a particular split from your `DatasetDict` i.e. `dataset['train'].features`.
For example
```python
ds = load_dataset('mnist')
ds.features
```
Returns
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-39-791c1f9df6c2>](https://localhost:8080/#) in <module>()
----> 1 ds.features
AttributeError: 'DatasetDict' object has no attribute 'features'
```
If we look at the dataset variable, we see it is a `DatasetDict`:
```python
print(ds)
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 60000
})
test: Dataset({
features: ['image', 'label'],
num_rows: 10000
})
})
```
We can grab the features from a split by indexing into `train`:
```python
ds['train'].features
{'image': Image(decode=True, id=None),
'label': ClassLabel(num_classes=10, names=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], id=None)}
```
Hope that helps |
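One extra note for the snippet in this issue: `Dataset.from_pandas` returns a single `Dataset`, so `.features` is available on it directly; if the error still appears, the `dataset` variable has most likely been rebound to a `DatasetDict` somewhere else. A minimal sketch with a made-up dataframe:
```python
import pandas as pd
from datasets import Dataset

df = pd.DataFrame({"id": [0, 1], "words": [["hello"], ["world"]]})
dataset = Dataset.from_pandas(df)   # a single Dataset, not a DatasetDict
print(dataset.features)             # e.g. {'id': Value(dtype='int64', id=None), 'words': Sequence(...)}
```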
https://github.com/huggingface/datasets/issues/3918 | datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files | Hi @willowdong! These issues were fixed on master. We will have a new release of `datasets` later today. In the meantime, you can avoid these issues by installing `datasets` from master as follows:
```bash
pip install git+https://github.com/huggingface/datasets.git
``` | ## Describe the bug
Can't load the dataset
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('multi_news')
dataset_2=load_dataset("reddit_tifu", "long")
## Actual results
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']
## Environment info
- `datasets` version: 1.18.4
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.0
- PyArrow version: 6.0.1
| 38 | datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
## Describe the bug
Can't load the dataset
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('multi_news')
dataset_2=load_dataset("reddit_tifu", "long")
## Actual results
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']
## Environment info
- `datasets` version: 1.18.4
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.0
- PyArrow version: 6.0.1
Hi @willowdong! These issues were fixed on master. We will have a new release of `datasets` later today. In the meantime, you can avoid these issues by installing `datasets` from master as follows:
```bash
pip install git+https://github.com/huggingface/datasets.git
``` |
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | Hi ! It could be an issue with torchaudio, which version of torchaudio are you using ? Can you also try updating `datasets` to 2.0.0 and see if it works ? | ## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\β\'\οΏ½]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3 ```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0 | 30 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\β\'\οΏ½]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3 ```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0
Hi ! It could be an issue with torchaudio, which version of torchaudio are you using ? Can you also try updating `datasets` to 2.0.0 and see if it works ?
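If upgrading, one way to sidestep decoding the mp3 paths with torchaudio is to read the decoded waveform from the dataset's `audio` column; a sketch assuming a `datasets` version that ships the `Audio` feature (>= 1.18):
```python
from datasets import load_dataset, Audio

test_dataset = load_dataset("common_voice", "it", split="test")
# let the Audio feature decode and resample the mp3 instead of torchaudio.load(batch["path"])
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))

def speech_file_to_array_fn(batch):
    batch["speech"] = batch["audio"]["array"]
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```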
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | I _might_ have a similar issue. I'm trying to use the librispeech_asr dataset and read it with soundfile.
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
import soundfile as sf
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load_metric("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(predictions=result["transcription"], references=result["text"]))
```
The code is taken directly from "https://huggingface.co/facebook/s2t-small-librispeech-asr".
The short error code is "RuntimeError: Error opening '6930-75918-0000.flac': System error." (it can't find the first file), and I agree, I can't find the file either. The dataset has downloaded correctly (it says), but on the location, there are only ".arrow" files, no ".flac" files.
**Error message:**
```python
RuntimeError Traceback (most recent call last)
Input In [15], in <cell line: 16>()
13 batch["speech"] = speech
14 return batch
---> 16 librispeech_eval = librispeech_eval.map(map_to_array)
18 def map_to_pred(batch):
19 features = processor(batch["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:1953, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1950 disable_tqdm = not logging.is_progress_bar_enabled()
1952 if num_proc is None or num_proc == 1:
-> 1953 return self._map_single(
1954 function=function,
1955 with_indices=with_indices,
1956 with_rank=with_rank,
1957 input_columns=input_columns,
1958 batched=batched,
1959 batch_size=batch_size,
1960 drop_last_batch=drop_last_batch,
1961 remove_columns=remove_columns,
1962 keep_in_memory=keep_in_memory,
1963 load_from_cache_file=load_from_cache_file,
1964 cache_file_name=cache_file_name,
1965 writer_batch_size=writer_batch_size,
1966 features=features,
1967 disable_nullable=disable_nullable,
1968 fn_kwargs=fn_kwargs,
1969 new_fingerprint=new_fingerprint,
1970 disable_tqdm=disable_tqdm,
1971 desc=desc,
1972 )
1973 else:
1975 def format_cache_file_name(cache_file_name, rank):
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:519, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
517 self: "Dataset" = kwargs.pop("self")
518 # apply actual function
--> 519 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
520 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
521 for dataset in datasets:
522 # Remove task templates if a column mapping of the template is no longer valid
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:486, in transmit_format.<locals>.wrapper(*args, **kwargs)
479 self_format = {
480 "type": self._format_type,
481 "format_kwargs": self._format_kwargs,
482 "columns": self._format_columns,
483 "output_all_columns": self._output_all_columns,
484 }
485 # apply actual function
--> 486 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
487 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
488 # re-apply format to the output
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\fingerprint.py:458, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
452 kwargs[fingerprint_name] = update_fingerprint(
453 self._fingerprint, transform, kwargs_for_fingerprint
454 )
456 # Call actual function
--> 458 out = func(self, *args, **kwargs)
460 # Update fingerprint of in-place transforms + update in-place history of transforms
462 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:2318, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2316 if not batched:
2317 for i, example in enumerate(pbar):
-> 2318 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2319 if update_data:
2320 if i == 0:
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:2218, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2216 if with_rank:
2217 additional_args += (rank,)
-> 2218 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2219 if update_data is None:
2220 # Check if the function returns updated examples
2221 update_data = isinstance(processed_inputs, (Mapping, pa.Table))
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:1913, in Dataset.map.<locals>.decorate.<locals>.decorated(item, *args, **kwargs)
1909 decorated_item = (
1910 Example(item, features=self.features) if not batched else Batch(item, features=self.features)
1911 )
1912 # Use the LazyDict internally, while mapping the function
-> 1913 result = f(decorated_item, *args, **kwargs)
1914 # Return a standard dict
1915 return result.data if isinstance(result, LazyDict) else result
Input In [15], in map_to_array(batch)
11 def map_to_array(batch):
---> 12 speech, _ = sf.read(batch["file"])
13 batch["speech"] = speech
14 return batch
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\soundfile.py:256, in read(file, frames, start, stop, dtype, always_2d, fill_value, out, samplerate, channels, format, subtype, endian, closefd)
170 def read(file, frames=-1, start=0, stop=None, dtype='float64', always_2d=False,
171 fill_value=None, out=None, samplerate=None, channels=None,
172 format=None, subtype=None, endian=None, closefd=True):
173 """Provide audio data from a sound file as NumPy array.
174
175 By default, the whole file is read from the beginning, but the
(...)
254
255 """
--> 256 with SoundFile(file, 'r', samplerate, channels,
257 subtype, endian, format, closefd) as f:
258 frames = f._prepare_read(start, stop, frames)
259 data = f.read(frames, dtype, always_2d, fill_value, out)
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
626 self._mode = mode
627 self._info = _create_info_struct(file, mode, samplerate, channels,
628 format, subtype, endian)
--> 629 self._file = self._open(file, mode_int, closefd)
630 if set(mode).issuperset('r+') and self.seekable():
631 # Move write position to 0 (like in Python file objects)
632 self.seek(0)
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\soundfile.py:1183, in SoundFile._open(self, file, mode_int, closefd)
1181 else:
1182 raise TypeError("Invalid file: {0!r}".format(self.name))
-> 1183 _error_check(_snd.sf_error(file_ptr),
1184 "Error opening {0!r}: ".format(self.name))
1185 if mode_int == _snd.SFM_WRITE:
1186 # Due to a bug in libsndfile version <= 1.0.25, frames != 0
1187 # when opening a named pipe in SFM_WRITE mode.
1188 # See http://github.com/erikd/libsndfile/issues/77.
1189 self._info.frames = 0
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\soundfile.py:1357, in _error_check(err, prefix)
1355 if err != 0:
1356 err_str = _snd.sf_error_number(err)
-> 1357 raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening '6930-75918-0000.flac': System error.
```
**Package versions:**
```python
python: 3.9
transformers: 4.17.0
datasets: 2.0.0
SoundFile: 0.10.3.post1
```
| ## Describe the bug
When loading the Common Voice dataset by downloading it directly from the Hugging Face Hub, some files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0 | 854 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common Voice dataset by downloading it directly from the Hugging Face Hub, some files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0
I _might_ have a similar issue. I'm trying to use the librispeech_asr dataset and read it with soundfile.
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor
import soundfile as sf
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load_metric("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
librispeech_eval = librispeech_eval.map(map_to_array)
def map_to_pred(batch):
features = processor(batch["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"])
print("WER:", wer(predictions=result["transcription"], references=result["text"]))
```
The code is taken directly from "https://huggingface.co/facebook/s2t-small-librispeech-asr".
The short error is "RuntimeError: Error opening '6930-75918-0000.flac': System error." (it can't find the first file), and I agree, I can't find the file either. The dataset has downloaded correctly (it says), but at that location there are only ".arrow" files, no ".flac" files.
**Error message:**
```python
RuntimeError Traceback (most recent call last)
Input In [15], in <cell line: 16>()
13 batch["speech"] = speech
14 return batch
---> 16 librispeech_eval = librispeech_eval.map(map_to_array)
18 def map_to_pred(batch):
19 features = processor(batch["speech"], sampling_rate=16000, padding=True, return_tensors="pt")
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:1953, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1950 disable_tqdm = not logging.is_progress_bar_enabled()
1952 if num_proc is None or num_proc == 1:
-> 1953 return self._map_single(
1954 function=function,
1955 with_indices=with_indices,
1956 with_rank=with_rank,
1957 input_columns=input_columns,
1958 batched=batched,
1959 batch_size=batch_size,
1960 drop_last_batch=drop_last_batch,
1961 remove_columns=remove_columns,
1962 keep_in_memory=keep_in_memory,
1963 load_from_cache_file=load_from_cache_file,
1964 cache_file_name=cache_file_name,
1965 writer_batch_size=writer_batch_size,
1966 features=features,
1967 disable_nullable=disable_nullable,
1968 fn_kwargs=fn_kwargs,
1969 new_fingerprint=new_fingerprint,
1970 disable_tqdm=disable_tqdm,
1971 desc=desc,
1972 )
1973 else:
1975 def format_cache_file_name(cache_file_name, rank):
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:519, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
517 self: "Dataset" = kwargs.pop("self")
518 # apply actual function
--> 519 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
520 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
521 for dataset in datasets:
522 # Remove task templates if a column mapping of the template is no longer valid
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:486, in transmit_format.<locals>.wrapper(*args, **kwargs)
479 self_format = {
480 "type": self._format_type,
481 "format_kwargs": self._format_kwargs,
482 "columns": self._format_columns,
483 "output_all_columns": self._output_all_columns,
484 }
485 # apply actual function
--> 486 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
487 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
488 # re-apply format to the output
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\fingerprint.py:458, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
452 kwargs[fingerprint_name] = update_fingerprint(
453 self._fingerprint, transform, kwargs_for_fingerprint
454 )
456 # Call actual function
--> 458 out = func(self, *args, **kwargs)
460 # Update fingerprint of in-place transforms + update in-place history of transforms
462 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:2318, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2316 if not batched:
2317 for i, example in enumerate(pbar):
-> 2318 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2319 if update_data:
2320 if i == 0:
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:2218, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2216 if with_rank:
2217 additional_args += (rank,)
-> 2218 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2219 if update_data is None:
2220 # Check if the function returns updated examples
2221 update_data = isinstance(processed_inputs, (Mapping, pa.Table))
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\datasets\arrow_dataset.py:1913, in Dataset.map.<locals>.decorate.<locals>.decorated(item, *args, **kwargs)
1909 decorated_item = (
1910 Example(item, features=self.features) if not batched else Batch(item, features=self.features)
1911 )
1912 # Use the LazyDict internally, while mapping the function
-> 1913 result = f(decorated_item, *args, **kwargs)
1914 # Return a standard dict
1915 return result.data if isinstance(result, LazyDict) else result
Input In [15], in map_to_array(batch)
11 def map_to_array(batch):
---> 12 speech, _ = sf.read(batch["file"])
13 batch["speech"] = speech
14 return batch
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\soundfile.py:256, in read(file, frames, start, stop, dtype, always_2d, fill_value, out, samplerate, channels, format, subtype, endian, closefd)
170 def read(file, frames=-1, start=0, stop=None, dtype='float64', always_2d=False,
171 fill_value=None, out=None, samplerate=None, channels=None,
172 format=None, subtype=None, endian=None, closefd=True):
173 """Provide audio data from a sound file as NumPy array.
174
175 By default, the whole file is read from the beginning, but the
(...)
254
255 """
--> 256 with SoundFile(file, 'r', samplerate, channels,
257 subtype, endian, format, closefd) as f:
258 frames = f._prepare_read(start, stop, frames)
259 data = f.read(frames, dtype, always_2d, fill_value, out)
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
626 self._mode = mode
627 self._info = _create_info_struct(file, mode, samplerate, channels,
628 format, subtype, endian)
--> 629 self._file = self._open(file, mode_int, closefd)
630 if set(mode).issuperset('r+') and self.seekable():
631 # Move write position to 0 (like in Python file objects)
632 self.seek(0)
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\soundfile.py:1183, in SoundFile._open(self, file, mode_int, closefd)
1181 else:
1182 raise TypeError("Invalid file: {0!r}".format(self.name))
-> 1183 _error_check(_snd.sf_error(file_ptr),
1184 "Error opening {0!r}: ".format(self.name))
1185 if mode_int == _snd.SFM_WRITE:
1186 # Due to a bug in libsndfile version <= 1.0.25, frames != 0
1187 # when opening a named pipe in SFM_WRITE mode.
1188 # See http://github.com/erikd/libsndfile/issues/77.
1189 self._info.frames = 0
File C:\ProgramData\Miniconda3\envs\noise_cancel\lib\site-packages\soundfile.py:1357, in _error_check(err, prefix)
1355 if err != 0:
1356 err_str = _snd.sf_error_number(err)
-> 1357 raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening '6930-75918-0000.flac': System error.
```
**Package versions:**
```python
python: 3.9
transformers: 4.17.0
datasets: 2.0.0
SoundFile: 0.10.3.post1
```
|
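For context on the missing `.flac` files mentioned above: recent `datasets` releases store the decoded audio inside the Arrow files and expose it through the `audio` column, so there is nothing to open with soundfile. A minimal sketch, assuming `datasets` 2.0 and the same `librispeech_asr` split:

```python
# Sketch: read a decoded LibriSpeech sample from the "audio" column instead of
# opening batch["file"] with soundfile; the waveform is already embedded in
# the Arrow files, which is why no ".flac" files are on disk.
from datasets import load_dataset

librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

sample = librispeech_eval[0]["audio"]
print(sample["sampling_rate"])        # 16000 for LibriSpeech
print(sample["array"].shape)          # 1-D float waveform
print(librispeech_eval[0]["text"])    # reference transcription
```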
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | Hi ! In `datasets` 2.0 you can access the audio array with `librispeech_eval[0]["audio"]["array"]` already, no need to use `map_to_array`. See our documentation on [how to process audio data](https://huggingface.co/docs/datasets/audio_process) :)
cc @patrickvonplaten we will need to update the readme at [facebook/s2t-small-librispeech-asr](https://huggingface.co/facebook/s2t-small-librispeech-asr) as well as https://huggingface.co/docs/transformers/model_doc/speech_to_text | ## Describe the bug
When loading the Common Voice dataset by downloading it directly from the Hugging Face Hub, some files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0 | 43 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common Voice dataset by downloading it directly from the Hugging Face Hub, some files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The common voice dataset downloaded and correctly loaded whit the use of the hugging face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0
Hi ! In `datasets` 2.0 you can access the audio array with `librispeech_eval[0]["audio"]["array"]` already, no need to use `map_to_array`. See our documentation on [how to process audio data](https://huggingface.co/docs/datasets/audio_process) :)
cc @patrickvonplaten we will need to update the readme at [facebook/s2t-small-librispeech-asr](https://huggingface.co/facebook/s2t-small-librispeech-asr) as well as https://huggingface.co/docs/transformers/model_doc/speech_to_text |
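A short sketch of the pattern the comment above points to (the linked audio-processing docs): letting the `Audio` feature decode and resample on access instead of calling torchaudio by hand. The 16 kHz target is only an example value, not something the thread specifies:

```python
# Sketch: let the Audio feature decode and resample on access.
from datasets import load_dataset, Audio

ds = load_dataset("common_voice", "it", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))  # resample lazily on read

example = ds[0]["audio"]
print(example["sampling_rate"])  # 16000
print(example["array"][:10])     # decoded waveform samples
```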
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | Thanks!
And sorry for posting this problem in what turned on to be an unrelated thread.
I rewrote the code, and the model works. The WER is 0.137 however, so I'm not sure if I have missed a step. I will look further into that at a later point. The transcriptions look good through manual inspection.
The rewritten code:
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor, Wav2Vec2Processor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load_metric("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)
def map_to_pred(batch):
audio = batch["audio"]
features = processor(audio["array"], sampling_rate=audio["sampling_rate"], padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred)#, batched=True, batch_size=8)
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
``` | ## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0 | 130 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common Voice dataset by downloading it directly from the Hugging Face Hub, some files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0
Thanks!
And sorry for posting this problem in what turned out to be an unrelated thread.
I rewrote the code, and the model works. The WER is 0.137 however, so I'm not sure if I have missed a step. I will look further into that at a later point. The transcriptions look good through manual inspection.
The rewritten code:
```python
from datasets import load_dataset, load_metric
from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor, Wav2Vec2Processor
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") # change to "other" for other test dataset
wer = load_metric("wer")
model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr").to("cuda")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr", do_upper_case=True)
def map_to_pred(batch):
audio = batch["audio"]
features = processor(audio["array"], sampling_rate=audio["sampling_rate"], padding=True, return_tensors="pt")
input_features = features.input_features.to("cuda")
attention_mask = features.attention_mask.to("cuda")
gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
return batch
result = librispeech_eval.map(map_to_pred)#, batched=True, batch_size=8)
print("WER:", wer.compute(predictions=result["transcription"], references=result["text"]))
``` |
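If the batched path that is commented out in the code above is wanted, the map function has to handle a list of audio dicts. A rough, untested sketch under that assumption, reusing the `processor`, `model` and `librispeech_eval` objects defined above; `map_to_pred_batched` is a hypothetical helper name:

```python
# Rough sketch of a batched variant: batch["audio"] is a list of audio dicts,
# so the arrays are collected and padded together by the processor.
def map_to_pred_batched(batch):
    arrays = [audio["array"] for audio in batch["audio"]]
    features = processor(arrays, sampling_rate=16_000, padding=True, return_tensors="pt")
    gen_tokens = model.generate(
        input_features=features.input_features.to("cuda"),
        attention_mask=features.attention_mask.to("cuda"),
    )
    batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)
    return batch

result = librispeech_eval.map(map_to_pred_batched, batched=True, batch_size=8)
```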
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | I think the issue comes from the fact that you set `batched=False` while `map_to_pred` still returns a list of strings for "transcription". You can fix it by adding `[0]` at the end of this line to get the string:
```python
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]
``` | ## Describe the bug
When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0 | 45 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common Voice dataset by downloading it directly from the Hugging Face Hub, some files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0
I think the issue comes from the fact that you set `batched=False` while `map_to_pred` still returns a list of strings for "transcription". You can fix it by adding `[0]` at the end of this line to get the string:
```python
batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]
``` |
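As a complementary illustration — a minimal sketch, assuming `map_to_pred` is the batch-level function discussed in this thread (it is not shown in the issue body) — the list output of `processor.batch_decode` can also be kept as-is by letting `map` pass whole batches to the function:
```python
# Hypothetical alternative to appending [0]: let `map` pass whole batches,
# so the list returned by processor.batch_decode lines up with the batch.
result = test_dataset.map(map_to_pred, batched=True, batch_size=8)
```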
https://github.com/huggingface/datasets/issues/3909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | We no longer use `torchaudio` for decoding MP3 files, and the problem with model cards has been addressed, so I'm closing this issue. | ## Describe the bug
When loading the Common Voice dataset by downloading it directly from the Hugging Face Hub, some files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\”\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face `datasets` library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0 | 23 | Error loading file audio when downloading the Common Voice dataset directly from the Hub
## Describe the bug
When loading the Common Voice dataset by downloading it directly from the Hugging Face Hub, some files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\”\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face `datasets` library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0
We no longer use `torchaudio` for decoding MP3 files, and the problem with model cards has been addressed, so I'm closing this issue. |
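For context, a minimal sketch of how audio decoding works without `torchaudio` in recent versions of `datasets`, assuming the Common Voice configuration exposes an `audio` column of type `Audio` (the exact column layout may differ between dataset versions):
```python
from datasets import load_dataset, Audio

test_dataset = load_dataset("common_voice", "it", split="test")
# Resample on the fly to 16 kHz instead of using torchaudio.transforms.Resample
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))

def speech_file_to_array_fn(batch):
    # The Audio feature decodes the MP3 file lazily; no torchaudio call is needed
    batch["speech"] = batch["audio"]["array"]
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```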
https://github.com/huggingface/datasets/issues/3906 | NonMatchingChecksumError on Spider dataset | Hi @kolk, thanks for reporting.
Indeed, the Google Drive service recently changed, and we had to add a fix to our library to cope with that change:
- #3787
We just made a patch release last week: 1.18.4 https://github.com/huggingface/datasets/releases/tag/1.18.4
Please feel free to update your local `datasets` version so that you get the fix:
```shell
pip install -U datasets
``` | ## Describe the bug
Failure to generate the ```spider``` dataset because of a checksum error for the dataset source files.
## Steps to reproduce the bug
```
from datasets import load_dataset
spider = load_dataset("spider")
```
## Expected results
Checksums should match for files from url ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
## Actual results
```
>>> load_dataset("spider")
load_dataset("spider")
Downloading and preparing dataset spider/spider (download: 95.12 MiB, generated: 5.17 MiB, post-processed: Unknown size, total: 100.29 MiB) to /home/user/.cache/huggingface/datasets/spider/spider/1.0.0/79778ebea87c59b19411f1eb3eda317e9dd5f7788a556d837ef25c3ae6e5e8b7...
Traceback (most recent call last):
File "/home/user/py3_env/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-d4cb54197348>", line 1, in <module>
load_dataset("spider")
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 665, in _download_and_prepare
verify_checksums(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
```
## Environment info
datasets version: 1.18.3
Platform: Ubuntu 20 LTS
Python version: 3.8.10
PyArrow version: 6.0.1
| 60 | NonMatchingChecksumError on Spider dataset
## Describe the bug
Failure to generate the ```spider``` dataset because of a checksum error for the dataset source files.
## Steps to reproduce the bug
```
from datasets import load_dataset
spider = load_dataset("spider")
```
## Expected results
Checksums should match for files from url ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
## Actual results
```
>>> load_dataset("spider")
load_dataset("spider")
Downloading and preparing dataset spider/spider (download: 95.12 MiB, generated: 5.17 MiB, post-processed: Unknown size, total: 100.29 MiB) to /home/user/.cache/huggingface/datasets/spider/spider/1.0.0/79778ebea87c59b19411f1eb3eda317e9dd5f7788a556d837ef25c3ae6e5e8b7...
Traceback (most recent call last):
File "/home/user/py3_env/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-d4cb54197348>", line 1, in <module>
load_dataset("spider")
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 665, in _download_and_prepare
verify_checksums(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
```
## Environment info
datasets version: 1.18.3
Platform: Ubuntu 20 LTS
Python version: 3.8.10
PyArrow version: 6.0.1
Hi @kolk, thanks for reporting.
Indeed, the Google Drive service recently changed, and we had to add a fix to our library to cope with that change:
- #3787
We just made a patch release last week: 1.18.4 https://github.com/huggingface/datasets/releases/tag/1.18.4
Please feel free to update your local `datasets` version so that you get the fix:
```shell
pip install -U datasets
``` |
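If the Spider download had already been attempted with the older version, the corrupted cache entry may also need to be discarded after upgrading — a sketch, assuming the default cache location:
```python
from datasets import load_dataset

# force_redownload discards the previously cached, failing download
spider = load_dataset("spider", download_mode="force_redownload")
```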
https://github.com/huggingface/datasets/issues/3904 | CONLL2003 Dataset not available | Thanks for reporting, @omarespejel.
I'm sorry but I can't reproduce the issue: the loading of the dataset works perfectly for me and I can reach the data URL: https://data.deepai.org/conll2003.zip
Might it be due to a temporary problem on the data owner's site (https://data.deepai.org/) that is now fixed?
Could you please try loading the dataset again and tell if the problem persists? | ## Describe the bug
[CONLL2003](https://huggingface.co/datasets/conll2003) Dataset can no longer reach 'https://data.deepai.org/conll2003.zip'
![image](https://user-images.githubusercontent.com/4755430/158084483-ff83631c-5154-4823-892d-577bf1166db0.png)
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("conll2003")
```
## Expected results
Download the conll2003 dataset.
## Actual results
Error: `ConnectionError: Couldn't reach https://data.deepai.org/conll2003.zip (error 502)`
| 61 | CONLL2003 Dataset not available
## Describe the bug
[CONLL2003](https://huggingface.co/datasets/conll2003) Dataset can no longer reach 'https://data.deepai.org/conll2003.zip'
![image](https://user-images.githubusercontent.com/4755430/158084483-ff83631c-5154-4823-892d-577bf1166db0.png)
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("conll2003")
```
## Expected results
Download the conll2003 dataset.
## Actual results
Error: `ConnectionError: Couldn't reach https://data.deepai.org/conll2003.zip (error 502)`
Thanks for reporting, @omarespejel.
I'm sorry but I can't reproduce the issue: the loading of the dataset works perfectly for me and I can reach the data URL: https://data.deepai.org/conll2003.zip
Might it be due to a temporary problem on the data owner's site (https://data.deepai.org/) that is now fixed?
Could you please try loading the dataset again and tell if the problem persists? |
https://github.com/huggingface/datasets/issues/3904 | CONLL2003 Dataset not available | I am getting the same issue. I use google colab with CPU.
The code I used is exactly the same as described above.
```
from datasets import load_dataset
dataset = load_dataset("conll2003")
```
The produced error:
![image](https://github.com/huggingface/datasets/assets/9371628/d87f7fb0-ef58-4755-abb5-f8f92c51fe02)
Note: This error is different from what was initially described in this thread. This is because I use CPU. When I use GPU I reproduce the same initial error of the thread.
Moreover, I receive the following warning:
```
WARNING:urllib3.connection:Certificate did not match expected hostname: data.deepai.org. Certificate: {'subject': ((('commonName', '*.b-cdn.net'),),), 'issuer': ((('countryName', 'GB'),), (('stateOrProvinceName', 'Greater Manchester'),), (('localityName', 'Salford'),), (('organizationName', 'Sectigo Limited'),), (('commonName', 'Sectigo RSA Domain Validation Secure Server CA'),)), 'version': 3, 'serialNumber': 'DDED48B13E1EA03983E833AB2C35EF07', 'notBefore': 'Nov 7 00:00:00 2022 GMT', 'notAfter': 'Nov 11 23:59:59 2023 GMT', 'subjectAltName': (('DNS', '*.b-cdn.net'), ('DNS', 'b-cdn.net')), 'OCSP': ('http://ocsp.sectigo.com/',), 'caIssuers': ('http://crt.sectigo.com/SectigoRSADomainValidationSecureServerCA.crt',)}
Downloading and preparing dataset conll2003/conll2003 to /root/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98...
WARNING:urllib3.connection:Certificate did not match expected hostname: data.deepai.org. Certificate: {'subject': ((('commonName', '*.b-cdn.net'),),), 'issuer': ((('countryName', 'GB'),), (('stateOrProvinceName', 'Greater Manchester'),), (('localityName', 'Salford'),), (('organizationName', 'Sectigo Limited'),), (('commonName', 'Sectigo RSA Domain Validation Secure Server CA'),)), 'version': 3, 'serialNumber': 'DDED48B13E1EA03983E833AB2C35EF07', 'notBefore': 'Nov 7 00:00:00 2022 GMT', 'notAfter': 'Nov 11 23:59:59 2023 GMT', 'subjectAltName': (('DNS', '*.b-cdn.net'), ('DNS', 'b-cdn.net')), 'OCSP': ('http://ocsp.sectigo.com/',), 'caIssuers': ('http://crt.sectigo.com/SectigoRSADomainValidationSecureServerCA.crt',)}
```
| ## Describe the bug
[CONLL2003](https://huggingface.co/datasets/conll2003) Dataset can no longer reach 'https://data.deepai.org/conll2003.zip'
![image](https://user-images.githubusercontent.com/4755430/158084483-ff83631c-5154-4823-892d-577bf1166db0.png)
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("conll2003")
```
## Expected results
Download the conll2003 dataset.
## Actual results
Error: `ConnectionError: Couldn't reach https://data.deepai.org/conll2003.zip (error 502)`
| 193 | CONLL2003 Dataset not available
## Describe the bug
[CONLL2003](https://huggingface.co/datasets/conll2003) Dataset can no longer reach 'https://data.deepai.org/conll2003.zip'
![image](https://user-images.githubusercontent.com/4755430/158084483-ff83631c-5154-4823-892d-577bf1166db0.png)
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("conll2003")
```
## Expected results
Download the conll2003 dataset.
## Actual results
Error: `ConnectionError: Couldn't reach https://data.deepai.org/conll2003.zip (error 502)`
I am getting the same issue. I use google colab with CPU.
The code I used is exactly the same as described above.
```
from datasets import load_dataset
dataset = load_dataset("conll2003")
```
The produced error:
![image](https://github.com/huggingface/datasets/assets/9371628/d87f7fb0-ef58-4755-abb5-f8f92c51fe02)
Note: This error is different from what was initially described in this thread. This is because I use CPU. When I use GPU I reproduce the same initial error of the thread.
Moreover, I receive the following warning:
```
WARNING:urllib3.connection:Certificate did not match expected hostname: data.deepai.org. Certificate: {'subject': ((('commonName', '*.b-cdn.net'),),), 'issuer': ((('countryName', 'GB'),), (('stateOrProvinceName', 'Greater Manchester'),), (('localityName', 'Salford'),), (('organizationName', 'Sectigo Limited'),), (('commonName', 'Sectigo RSA Domain Validation Secure Server CA'),)), 'version': 3, 'serialNumber': 'DDED48B13E1EA03983E833AB2C35EF07', 'notBefore': 'Nov 7 00:00:00 2022 GMT', 'notAfter': 'Nov 11 23:59:59 2023 GMT', 'subjectAltName': (('DNS', '*.b-cdn.net'), ('DNS', 'b-cdn.net')), 'OCSP': ('http://ocsp.sectigo.com/',), 'caIssuers': ('http://crt.sectigo.com/SectigoRSADomainValidationSecureServerCA.crt',)}
Downloading and preparing dataset conll2003/conll2003 to /root/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98...
WARNING:urllib3.connection:Certificate did not match expected hostname: data.deepai.org. Certificate: {'subject': ((('commonName', '*.b-cdn.net'),),), 'issuer': ((('countryName', 'GB'),), (('stateOrProvinceName', 'Greater Manchester'),), (('localityName', 'Salford'),), (('organizationName', 'Sectigo Limited'),), (('commonName', 'Sectigo RSA Domain Validation Secure Server CA'),)), 'version': 3, 'serialNumber': 'DDED48B13E1EA03983E833AB2C35EF07', 'notBefore': 'Nov 7 00:00:00 2022 GMT', 'notAfter': 'Nov 11 23:59:59 2023 GMT', 'subjectAltName': (('DNS', '*.b-cdn.net'), ('DNS', 'b-cdn.net')), 'OCSP': ('http://ocsp.sectigo.com/',), 'caIssuers': ('http://crt.sectigo.com/SectigoRSADomainValidationSecureServerCA.crt',)}
```
|
https://github.com/huggingface/datasets/issues/3902 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils' | Update: `"python3 -c "from from datasets import Dataset, DatasetDict"` works, but not if I import without the `python3 -c` | ## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
| 19 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
Update: `"python3 -c "from from datasets import Dataset, DatasetDict"` works, but not if I import without the `python3 -c` |
https://github.com/huggingface/datasets/issues/3902 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils' | Hi @arunasank, thanks for reporting.
It seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be that `python3` runs in a Python virtual env (with an updated version of `fsspec`); whereas the error arises when you run the import from other Python virtual env (with an old version of `fsspec`).
In order to fix this, you should update `fsspec` from within the "problematic" Python virtual env:
```
pip install -U "fsspec[http]>=2021.05.0" | ## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
| 88 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
Hi @arunasank, thanks for reporting.
It seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be that `python3` runs in a Python virtual env (with an updated version of `fsspec`); whereas the error arises when you run the import from other Python virtual env (with an old version of `fsspec`).
In order to fix this, you should update `fsspec` from within the "problematic" Python virtual env:
```
pip install -U "fsspec[http]>=2021.05.0" |
https://github.com/huggingface/datasets/issues/3902 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils' | from lightgbm import LGBMModel,LGBMClassifier, plot_importance
after importing lib getting (partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import) error, can help me | ## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
| 28 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
from lightgbm import LGBMModel,LGBMClassifier, plot_importance
after importing lib getting (partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import) error, can help me |
https://github.com/huggingface/datasets/issues/3902 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils' | @deepakmahtha I think you are not using `datasets`: this is the GitHub repository of Hugging Face Datasets.
If you are using `lightgbm`, you should report the issue to their repository instead.
Anyway, we have proposed a possible fix just in a comment above: to update fsspec.
https://github.com/huggingface/datasets/issues/3902#issuecomment-1066517824 | ## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
| 47 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
@deepakmahtha I think you are not using `datasets`: this is the GitHub repository of Hugging Face Datasets.
If you are using `lightgbm`, you should report the issue to their repository instead.
Anyway, we have proposed a possible fix just in a comment above: to update fsspec.
https://github.com/huggingface/datasets/issues/3902#issuecomment-1066517824 |
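A quick way to confirm that the environment the notebook kernel actually uses has picked up the `fsspec` upgrade — a minimal check, assuming the kernel was restarted after running the `pip install` command from the linked comment:
```python
import fsspec
import datasets

# If an outdated fsspec caused the circular-import error, both imports
# now succeed and the fsspec version is >= 2021.05.0
print(fsspec.__version__)
print(datasets.__version__)
```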
https://github.com/huggingface/datasets/issues/3901 | Dataset viewer issue for IndicParaphrase- the preview doesn't show | It seems to have been fixed:
<img width="1534" alt="Capture dβeΜcran 2022-04-12 aΜ 14 10 07" src="https://user-images.githubusercontent.com/1676121/162959599-6b7fef7c-8411-4e03-8f00-90040a658079.png">
| ## Dataset viewer issue for '*IndicParaphrase*'
**Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)*
*The preview of the dataset doesn't come up.
The error on the console is:
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/hi_IndicParaphrase_v1.0.tar'*
Am I the one who added this dataset ? Yes
| 16 | Dataset viewer issue for IndicParaphrase- the preview doesn't show
## Dataset viewer issue for '*IndicParaphrase*'
**Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)*
*The preview of the dataset doesn't come up.
The error on the console is:
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/hi_IndicParaphrase_v1.0.tar'*
Am I the one who added this dataset ? Yes
It seems to have been fixed:
<img width="1534" alt="Capture dβeΜcran 2022-04-12 aΜ 14 10 07" src="https://user-images.githubusercontent.com/1676121/162959599-6b7fef7c-8411-4e03-8f00-90040a658079.png">
|
https://github.com/huggingface/datasets/issues/3896 | Missing google file for `multi_news` dataset | `datasets` 1.18.4 fixes the issue when you load the dataset with `load_dataset`.
When loading in streaming mode, the fix is indeed on https://github.com/huggingface/datasets/pull/3843 which will be merged soon :) | ## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
Am I the one who added this dataset ? No
| 29 | Missing google file for `multi_news` dataset
## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
Am I the one who added this dataset ? No
`datasets` 1.18.4 fixes the issue when you load the dataset with `load_dataset`.
When loading in streaming mode, the fix is indeed on https://github.com/huggingface/datasets/pull/3843 which will be merged soon :) |
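For reference, a minimal sketch of the streaming call that the pending fix targets — assuming the fix from the linked pull request is available in the installed version:
```python
from datasets import load_dataset

# Streaming avoids downloading the full Google Drive archive up front
multi_news = load_dataset("multi_news", split="train", streaming=True)
print(next(iter(multi_news)))
```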
https://github.com/huggingface/datasets/issues/3896 | Missing google file for `multi_news` dataset | That is. The PR #3843 was just opened a bit later we had made our 1.18.4 patch release...
Once merged, that will fix this issue. | ## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
Am I the one who added this dataset ? No
| 25 | Missing google file for `multi_news` dataset
## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
Am I the one who added this dataset ? No
That's right. PR #3843 was opened a bit after we made our 1.18.4 patch release...
Once merged, that will fix this issue. |
https://github.com/huggingface/datasets/issues/3896 | Missing google file for `multi_news` dataset | OK. Should fix the viewer for 50 datasets
<img width="148" alt="Capture dβeΜcran 2022-03-14 aΜ 11 51 02" src="https://user-images.githubusercontent.com/1676121/158157853-6c544a47-2d6d-4ac4-964a-6f10951ec36b.png">
| ## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
Am I the one who added this dataset ? No
| 18 | Missing google file for `multi_news` dataset
## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
Am I the one who added this dataset ? No
OK. Should fix the viewer for 50 datasets
<img width="148" alt="Capture dβeΜcran 2022-03-14 aΜ 11 51 02" src="https://user-images.githubusercontent.com/1676121/158157853-6c544a47-2d6d-4ac4-964a-6f10951ec36b.png">
|
https://github.com/huggingface/datasets/issues/3889 | Cannot load beans dataset (Couldn't reach the dataset) | Hi ! A pull request is open to fix the dataset, we'll release a patch soon with a new release of `datasets` :) | ## Describe the bug
The beans dataset is unavailable to download.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('beans')
```
## Expected results
The dataset would be downloaded with no issue.
## Actual results
```
ConnectionError: Couldn't reach https://storage.googleapis.com/ibeans/train.zip (error 403)
```
[It looks like the billing of this project has been disabled because it is associated with a delinquent account.](https://storage.googleapis.com/ibeans/train.zip )
## Environment info
Google Colab
| 23 | Cannot load beans dataset (Couldn't reach the dataset)
## Describe the bug
The beans dataset is unavailable to download.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('beans')
```
## Expected results
The dataset would be downloaded with no issue.
## Actual results
```
ConnectionError: Couldn't reach https://storage.googleapis.com/ibeans/train.zip (error 403)
```
[It looks like the billing of this project has been disabled because it is associated with a delinquent account.](https://storage.googleapis.com/ibeans/train.zip )
## Environment info
Google Colab
Hi ! A pull request is open to fix the dataset, we'll release a patch soon with a new release of `datasets` :) |
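While waiting for the patch release, a possible workaround — assuming the fix is already merged on the `master` branch of this repository — is to load the dataset script from that branch explicitly:
```python
from datasets import load_dataset

# `revision` selects which version of the dataset loading script to use
ds = load_dataset("beans", revision="master")
```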
https://github.com/huggingface/datasets/issues/3888 | IterableDataset columns and feature types | @lhoestq so in order to address whatβs not completed in this issue, do you think it makes sense to add a param `features` to `IterableDataset.map` so that the output features right after the `map` are defined there? | Right now, an IterableDataset (e.g. when streaming a dataset) doesn't require to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`
However it's often interesting to know the column names and types. This helps you know what's inside your dataset without having to manually check a few examples, and it is useful to prepare a processing pipeline or to train models.
Here are a few cases that lead to `features` being `None`:
1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset
2. when calling `map`, because we don't know in advance what's the output of the user's function passed to `map`
3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map`
Things we can consider, for each point above:
1.a infer the type automatically from the first samples on the dataset using prefetching, when the dataset builder doesn't provide the `features`
2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API)
2.b prefetch the first output value to infer the type
3.a don't rely on `map` directly and reuse the previous `features` and rename/remove the corresponding ones
The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data are downloaded. Therefore I'm not sure whether this solution may be worth it. Maybe prefetching could also be done when explicitly asked by the user
cc @mariosasko @albertvillanova | 37 | IterableDataset columns and feature types
Right now, an IterableDataset (e.g. when streaming a dataset) doesn't require to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`
However it's often interesting to know the column names and types. This helps you know what's inside your dataset without having to manually check a few examples, and it is useful to prepare a processing pipeline or to train models.
Here are a few cases that lead to `features` being `None`:
1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset
2. when calling `map`, because we don't know in advance what's the output of the user's function passed to `map`
3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map`
Things we can consider, for each point above:
1.a infer the type automatically from the first samples on the dataset using prefetching, when the dataset builder doesn't provide the `features`
2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API)
2.b prefetch the first output value to infer the type
3.a don't rely on `map` directly and reuse the previous `features` and rename/remove the corresponding ones
The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data are downloaded. Therefore I'm not sure whether this solution may be worth it. Maybe prefetching could also be done when explicitly asked by the user
cc @mariosasko @albertvillanova
@lhoestq so in order to address whatβs not completed in this issue, do you think it makes sense to add a param `features` to `IterableDataset.map` so that the output features right after the `map` are defined there? |
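For concreteness, a hypothetical sketch of what the proposed option 2.a could look like from the user side — the `features=` argument on `IterableDataset.map` is the API being discussed here, not something that exists at the time of writing:
```python
from datasets import load_dataset, Features, Value

ds = load_dataset("csv", data_files="data.csv", split="train", streaming=True)

# Hypothetical: the caller declares the output schema up front,
# so no prefetching is needed to populate ds.features after map.
ds = ds.map(
    lambda x: {"text": x["text"].lower(), "label": int(x["label"])},
    features=Features({"text": Value("string"), "label": Value("int64")}),
)
print(ds.features)
```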
https://github.com/huggingface/datasets/issues/3888 | IterableDataset columns and feature types | @lhoestq cool then if you agree I can work on that! I'll also update the docs accordingly once done, thanks! | Right now, an IterableDataset (e.g. when streaming a dataset) doesn't require to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`
However, it's often helpful to know the column names and types. This lets you see what's inside your dataset without having to manually check a few examples, and it is useful for preparing a processing pipeline or training models.
Here are a few cases that lead to `features` being `None`:
1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset
2. when calling `map`, because we don't know in advance what's the output of the user's function passed to `map`
3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map`
Things we can consider, for each point above:
1.a infer the type automatically from the first samples on the dataset using prefetching, when the dataset builder doesn't provide the `features`
2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API)
2.b prefetch the first output value to infer the type
3.a don't rely on `map` directly and reuse the previous `features` and rename/remove the corresponding ones
The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data are downloaded. Therefore I'm not sure whether this solution may be worth it. Maybe prefetching could also be done when explicitly asked by the user
cc @mariosasko @albertvillanova | 20 | IterableDataset columns and feature types
Right now, an IterableDataset (e.g. when streaming a dataset) doesn't require to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`
However, it's often helpful to know the column names and types. This lets you see what's inside your dataset without having to manually check a few examples, and it is useful for preparing a processing pipeline or training models.
Here are a few cases that lead to `features` being `None`:
1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset
2. when calling `map`, because we don't know in advance what's the output of the user's function passed to `map`
3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map`
Things we can consider, for each point above:
1.a infer the type automatically from the first samples on the dataset using prefetching, when the dataset builder doesn't provide the `features`
2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API)
2.b prefetch the first output value to infer the type
3.a don't rely on `map` directly and reuse the previous `features` and rename/remove the corresponding ones
The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data are downloaded. Therefore I'm not sure whether this solution may be worth it. Maybe prefetching could also be done when explicitly asked by the user
cc @mariosasko @albertvillanova
@lhoestq cool then if you agree I can work on that! I'll also update the docs accordingly once done, thanks!
https://github.com/huggingface/datasets/issues/3888 | IterableDataset columns and feature types | I've already started with a PR as a draft @lhoestq, should we also try to look for a way to explicitly request pre-fetching right after a map operation is applied, so that the features are inferred if the user says explicitly so? Thanks! | Right now, an IterableDataset (e.g. when streaming a dataset) doesn't require to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`
However, it's often helpful to know the column names and types. This lets you see what's inside your dataset without having to manually check a few examples, and it is useful for preparing a processing pipeline or training models.
Here are a few cases that lead to `features` being `None`:
1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset
2. when calling `map`, because we don't know in advance what's the output of the user's function passed to `map`
3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map`
Things we can consider, for each point above:
1.a infer the type automatically from the first samples on the dataset using prefetching, when the dataset builder doesn't provide the `features`
2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API)
2.b prefetch the first output value to infer the type
3.a don't rely on `map` directly and reuse the previous `features` and rename/remove the corresponding ones
The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data are downloaded. Therefore I'm not sure whether this solution may be worth it. Maybe prefetching could also be done when explicitly asked by the user
cc @mariosasko @albertvillanova | 43 | IterableDataset columns and feature types
Right now, an IterableDataset (e.g. when streaming a dataset) doesn't require to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`
However, it's often helpful to know the column names and types. This lets you see what's inside your dataset without having to manually check a few examples, and it is useful for preparing a processing pipeline or training models.
Here are a few cases that lead to `features` being `None`:
1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset
2. when calling `map`, because we don't know in advance what's the output of the user's function passed to `map`
3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map`
Things we can consider, for each point above:
1.a infer the type automatically from the first samples on the dataset using prefetching, when the dataset builder doesn't provide the `features`
2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API)
2.b prefetch the first output value to infer the type
3.a don't rely on `map` directly and reuse the previous `features` and rename/remove the corresponding ones
The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data are downloaded. Therefore I'm not sure whether this solution may be worth it. Maybe prefetching could also be done when explicitly asked by the user
cc @mariosasko @albertvillanova
I've already started with a PR as a draft @lhoestq, should we also try to look for a way to explicitly request pre-fetching right after a map operation is applied, so that the features are inferred if the user says explicitly so? Thanks! |
https://github.com/huggingface/datasets/issues/3888 | IterableDataset columns and feature types | > should we also try to look for a way to explicitly request pre-fetching right after a map operation is applied, so that the features are inferred if the user says explicitly so?
Right now one can use `ds = ds._resolve_features()` to do so. It can be used after `map` or `load_dataset` if the features are not known. Maybe we can make this method public? | Right now, an IterableDataset (e.g. when streaming a dataset) doesn't require to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`
However, it's often helpful to know the column names and types. This lets you see what's inside your dataset without having to manually check a few examples, and it is useful for preparing a processing pipeline or training models.
Here are a few cases that lead to `features` being `None`:
1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset
2. when calling `map`, because we don't know in advance what's the output of the user's function passed to `map`
3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map`
Things we can consider, for each point above:
1.a infer the type automatically from the first samples on the dataset using prefetching, when the dataset builder doesn't provide the `features`
2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API)
2.b prefetch the first output value to infer the type
3.a don't rely on `map` directly and reuse the previous `features` and rename/remove the corresponding ones
The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data are downloaded. Therefore I'm not sure whether this solution may be worth it. Maybe prefetching could also be done when explicitly asked by the user
cc @mariosasko @albertvillanova | 66 | IterableDataset columns and feature types
Right now, an IterableDataset (e.g. when streaming a dataset) doesn't require to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`
However, it's often helpful to know the column names and types. This lets you see what's inside your dataset without having to manually check a few examples, and it is useful for preparing a processing pipeline or training models.
Here are a few cases that lead to `features` being `None`:
1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset
2. when calling `map`, because we don't know in advance what's the output of the user's function passed to `map`
3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map`
Things we can consider, for each point above:
1.a infer the type automatically from the first samples on the dataset using prefetching, when the dataset builder doesn't provide the `features`
2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API)
2.b prefetch the first output value to infer the type
3.a don't rely on `map` directly and reuse the previous `features` and rename/remove the corresponding ones
The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data are downloaded. Therefore I'm not sure whether this solution may be worth it. Maybe prefetching could also be done when explicitly asked by the user
cc @mariosasko @albertvillanova
> should we also try to look for a way to explicitly request pre-fetching right after a map operation is applied, so that the features are inferred if the user says explicitly so?
Right now one can use `ds = ds._resolve_features()` to do so. It can be used after `map` or `load_dataset` if the features are not known. Maybe we can make this method public?
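As a minimal sketch of the prefetch-based option mentioned above, the (currently private) `_resolve_features` method can be used to infer the schema of a streamed dataset; the CSV file name below is a placeholder.

```python
from datasets import load_dataset

ds = load_dataset("csv", data_files="data.csv", split="train", streaming=True)
print(ds.features)  # None: nothing has been read yet, so the schema is unknown

# Prefetches data to infer the column names and types (per the comment above).
ds = ds._resolve_features()
print(ds.features)  # now a Features object describing the columns
```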
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | Hi @INF800,
Please note that the `imagefolder` feature enhancement was just recently merged to our master branch (https://github.com/huggingface/datasets/commit/207be676bffe9d164740a41a883af6125edef135), but has not yet been released.
We are planning to make the 2.0 release of our library in the coming days and then that feature will be available by updating your `datasets` library from PyPI.
In the meantime, you can incorporate that feature if you install our library from our GitHub master branch:
```shell
pip install git+https://github.com/huggingface/datasets#egg=datasets
```
Then:
```python
In [1]: from datasets import load_dataset
ds = load_dataset("imagefolder", data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", split="train")
Using custom data configuration default-7eb4e80d960deb18
Downloading and preparing dataset image_folder/default to .../.cache/huggingface/datasets/image_folder/default-7eb4e80d960deb18/0.0.0/8de8dc6d68ce3c81cc102b93cc82ede27162b5d30cd003094f935942c8294f60...
Downloading data files: 100%|██████████████████████████████████████| 1/1 [00:00<00:00, 690.19it/s]
Extracting data files: 100%|██████████████████████████████████████| 1/1 [00:00<00:00, 852.85it/s]
Dataset image_folder downloaded and prepared to .../.cache/huggingface/datasets/image_folder/default-7eb4e80d960deb18/0.0.0/8de8dc6d68ce3c81cc102b93cc82ede27162b5d30cd003094f935942c8294f60. Subsequent calls will reuse this data.
In [2]: ds
Out[2]:
Dataset({
features: ['image', 'label'],
num_rows: 25000
})
``` | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
``` | 140 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
```
Hi @INF800,
Please note that the `imagefolder` feature enhancement was just recently merged to our master branch (https://github.com/huggingface/datasets/commit/207be676bffe9d164740a41a883af6125edef135), but has not yet been released.
We are planning to make the 2.0 release of our library in the coming days and then that feature will be available by updating your `datasets` library from PyPI.
In the meantime, you can incorporate that feature if you install our library from our GitHub master branch:
```shell
pip install git+https://github.com/huggingface/datasets#egg=datasets
```
Then:
```python
In [1]: from datasets import load_dataset
ds = load_dataset("imagefolder", data_files="https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip", split="train")
Using custom data configuration default-7eb4e80d960deb18
Downloading and preparing dataset image_folder/default to .../.cache/huggingface/datasets/image_folder/default-7eb4e80d960deb18/0.0.0/8de8dc6d68ce3c81cc102b93cc82ede27162b5d30cd003094f935942c8294f60...
Downloading data files: 100%|██████████████████████████████████████| 1/1 [00:00<00:00, 690.19it/s]
Extracting data files: 100%|██████████████████████████████████████| 1/1 [00:00<00:00, 852.85it/s]
Dataset image_folder downloaded and prepared to .../.cache/huggingface/datasets/image_folder/default-7eb4e80d960deb18/0.0.0/8de8dc6d68ce3c81cc102b93cc82ede27162b5d30cd003094f935942c8294f60. Subsequent calls will reuse this data.
In [2]: ds
Out[2]:
Dataset({
features: ['image', 'label'],
num_rows: 25000
})
``` |
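For a local directory, as in the original question, the loader can also be pointed at the folder with `data_dir`. The sketch below assumes the conventional layout where each class has its own sub-directory, which is how `imagefolder` infers the `label` column; the paths and class names are placeholders.

```python
from datasets import load_dataset

# Assumed layout (placeholder paths):
# my-dataset/
#     cat/1.jpg, cat/2.jpg, ...
#     dog/1.jpg, dog/2.jpg, ...
ds = load_dataset("imagefolder", data_dir="./my-dataset", split="train")
print(ds.features)  # expected: {'image': Image(...), 'label': ClassLabel(names=['cat', 'dog'])}
```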
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | Hey @albertvillanova. Does this load the entire dataset in memory? Because I am facing huge trouble loading very big datasets (OOM errors) | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
``` | 22 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
```
Hey @albertvillanova. Does this load the entire dataset in memory? Because I am facing huge trouble loading very big datasets (OOM errors)
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | Can you provide the error stack trace? The loader only stores the `data_files` dict, which can get big after globbing. Then, the OOM error would mean you don't have enough memory to keep all the paths to the image files. You can circumvent this by generating an archive and loading the dataset from there. Maybe we can optimize the globbing part in our data files resolution at some point, cc @lhoestq for visibility. | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
``` | 73 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
```
Can you provide the error stack trace? The loader only stores the `data_files` dict, which can get big after globbing. Then, the OOM error would mean you don't have enough memory to keep all the paths to the image files. You can circumvent this by generating an archive and loading the dataset from there. Maybe we can optimize the globbing part in our data files resolution at some point, cc @lhoestq for visibility. |
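Here is a sketch of the archive workaround suggested above: packing a split into a single zip means the data files resolution only has to track one path instead of globbing every image. The paths are placeholders.

```python
import shutil

from datasets import load_dataset

# Pack the image folder (with its class sub-directories) into one archive.
shutil.make_archive("my-dataset-train", "zip", root_dir="./my-dataset/train")

# Load from the single archive instead of from many individual image files.
ds = load_dataset("imagefolder", data_files={"train": "my-dataset-train.zip"}, split="train")
```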
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | Hey, the memory error is resolved. It was a fluke.
But there is another issue. Currently `load_dataset("imagefolder", data_dir="./path/to/train",)` only accepts `train` as the `split` argument.
I am creating the validation dataset using
```
ds_valid = datasets.DatasetDict(valid=load_dataset("imagefolder", data_dir="./path/to/valid",)['train'])
``` | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
``` | 36 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
```
Hey, the memory error is resolved. It was a fluke.
But there is another issue. Currently `load_dataset("imagefolder", data_dir="./path/to/train",)` only accepts `train` as the `split` argument.
I am creating the validation dataset using
```
ds_valid = datasets.DatasetDict(valid=load_dataset("imagefolder", data_dir="./path/to/valid",)['train'])
``` |
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | `data_dir="path/to/folder"` is a shorthand syntax for `data_files={"train": "path/to/folder/**"}`, so use `data_files` in that case instead:
```python
ds = load_dataset("imagefolder", data_files={"train": "path/to/train/**", "test": "path/to/test/**", "valid": "path/to/valid/**"})
``` | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
``` | 26 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
```
`data_dir="path/to/folder"` is a shorthand syntax fox `data_files={"train": "path/to/folder/**"}`, so use `data_files` in that case instead:
```python
ds = load_dataset("imagefolder", data_files={"train": "path/to/train/**", "test": "path/to/test/**", "valid": "path/to/valid/**"})
``` |
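For completeness, loading all three splits this way returns a `DatasetDict`, so the manual `DatasetDict` construction above is no longer needed (same placeholder paths):

```python
from datasets import load_dataset

ds = load_dataset(
    "imagefolder",
    data_files={"train": "path/to/train/**", "test": "path/to/test/**", "valid": "path/to/valid/**"},
)
print(ds)              # DatasetDict with 'train', 'test' and 'valid' splits
print(ds["valid"][0])  # {'image': <PIL.Image ...>, 'label': ...}
```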
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | And there was another issue: I loaded black-and-white images (JPEG files) using `load_dataset`. It reads them in the PIL JPEG format, but instead of being converted into a 3-channel tensor, the input to the collator function comes as a single-channel tensor. | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
``` | 44 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
```
And there was another issue: I loaded black-and-white images (JPEG files) using `load_dataset`. It reads them in the PIL JPEG format, but instead of being converted into a 3-channel tensor, the input to the collator function comes as a single-channel tensor.
https://github.com/huggingface/datasets/issues/3881 | How to use Image folder | We don't apply any additional preprocessing on top of `PIL.Image.open(image_file)`, so you need to do the conversion yourself:
```python
def to_rgb(batch):
    batch["image"] = [img.convert("RGB") for img in batch["image"]]
    return batch

ds_rgb = ds.map(to_rgb, batched=True)
```
Please use our Forum for questions of this kind in the future. | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
``` | 47 | How to use Image folder
Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
```
We don't apply any additional preprocessing on top of `PIL.Image.open(image_file)`, so you need to do the conversion yourself:
```python
def to_rgb(batch):
    batch["image"] = [img.convert("RGB") for img in batch["image"]]
    return batch

ds_rgb = ds.map(to_rgb, batched=True)
```
Please use our Forum for questions of this kind in the future. |
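A possible variant of the conversion above, applied lazily with `with_transform` so images are converted only when examples are accessed instead of materializing a new cached dataset. This is just a sketch and assumes the same `image` column produced by `imagefolder`.

```python
def to_rgb(batch):
    batch["image"] = [img.convert("RGB") for img in batch["image"]]
    return batch

# The transform runs on the fly at access time; no new Arrow files are written.
ds_rgb = ds.with_transform(to_rgb)
print(ds_rgb[0]["image"].mode)  # expected: 'RGB'
```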
https://github.com/huggingface/datasets/issues/3872 | HTTP error 504 Server Error: Gateway Time-out | yes but is there any way you could try pushing with `git` command line directly instead of `push_to_hub`? | I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
api.upload_file(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
raise err
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
r.raise_for_status()
File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```
Can anyone help me to resolve this issue.
| 18 | HTTP error 504 Server Error: Gateway Time-out
I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
api.upload_file(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
raise err
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
r.raise_for_status()
File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```
Can anyone help me to resolve this issue.
yes but is there any way you could try pushing with `git` command line directly instead of `push_to_hub`? |
https://github.com/huggingface/datasets/issues/3872 | HTTP error 504 Server Error: Gateway Time-out | Okay. I didn't save the dataset to my local machine. So, I processed the dataset and pushed it directly to the Hub. I think I should try saving the dataset to my local machine with `save_to_disk` and then push it with the git command line | I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
api.upload_file(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
raise err
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
r.raise_for_status()
File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```
Can anyone help me to resolve this issue.
| 44 | HTTP error 504 Server Error: Gateway Time-out
I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
api.upload_file(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
raise err
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
r.raise_for_status()
File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```
Can anyone help me to resolve this issue.
Okay. I didn't save the dataset to my local machine. So, I processed the dataset and pushed it directly to the Hub. I think I should try saving the dataset to my local machine with `save_to_disk` and then push it with the git command line
https://github.com/huggingface/datasets/issues/3872 | HTTP error 504 Server Error: Gateway Time-out | `push_to_hub` is the preferred way of uploading a dataset to the Hub, which can then be reloaded with `load_dataset`. Feel free to try again and see if the server is working as expected now. Maybe we can add a retry mechanism in the meantime to work around 504 errors.
Regarding `save_to_disk`, this must only be used for local serialization (because it's uncompressed and compatible with memory-mapping). If you upload a dataset saved with `save_to_disk` to the Hub, then to reload it you will have to download/clone the repository locally by yourself and use `load_from_disk`. | I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
api.upload_file(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
raise err
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
r.raise_for_status()
File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```
Can anyone help me to resolve this issue.
| 93 | HTTP error 504 Server Error: Gateway Time-out
I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
api.upload_file(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
raise err
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
r.raise_for_status()
File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```
Can anyone help me to resolve this issue.
`push_to_hub` is the preferred way of uploading a dataset to the Hub, which can then be reloaded with `load_dataset`. Feel free to try again and see if the server is working as expected now. Maybe we can add a retry mechanism in the meantime to work around 504 errors.
Regarding `save_to_disk`, this must only be used for local serialization (because it's uncompressed and compatible with memory-mapping). If you upload a dataset saved with `save_to_disk` to the Hub, then to reload it you will have to download/clone the repository locally by yourself and use `load_from_disk`. |
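As a rough sketch of the retry idea mentioned in the comment above, a plain Python retry loop around `push_to_hub` can ride out transient 504s; the helper name, retry counts, and the repo id `user-name/dataset-name` (the placeholder from the traceback) are assumptions for illustration only:
```python
import time

from requests.exceptions import HTTPError


# Hypothetical helper (not part of the datasets API): retry push_to_hub a few
# times so a transient gateway time-out does not abort the whole upload.
def push_with_retries(dataset_dict, repo_id, max_attempts=3, wait_seconds=60):
    for attempt in range(1, max_attempts + 1):
        try:
            dataset_dict.push_to_hub(repo_id, private=True)
            return
        except HTTPError as err:
            if attempt == max_attempts:
                raise
            print(f"Upload failed ({err}); retrying in {wait_seconds}s...")
            time.sleep(wait_seconds)


# Usage sketch, reusing the placeholder names from the issue:
# push_with_retries(data_new_2, "user-name/dataset-name")
# reloaded = load_dataset("user-name/dataset-name", use_auth_token=True)
```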
https://github.com/huggingface/datasets/issues/3869 | Making the Hub the place for datasets in Portuguese | Hi @omarespejel! I think the philosophy for `datasets` issues is to create concrete issues with proposals to add a specific, individual dataset rather than umbrella issues for things such as datasets for a language, since we could end up with hundreds of issues (one per language). I see NILC - USP has many datasets; I would suggest either creating an issue for their datasets or, even better, uploading them as community datasets (which is what we are encouraging now) instead of adding them to the core library, as guided in https://huggingface.co/docs/datasets/share. That would have the additional benefit that the datasets would live under the NILC organization.
@lhoestq correct me if I'm wrong please | Let's make Hugging Face Datasets the central hub for datasets in Portuguese :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese-speaking community.
What are some datasets in Portuguese worth integrating into the Hugging Face hub?
Special thanks to @augusnunes for his collaboration on identifying the first ones:
- [NILC - USP](http://www.nilc.icmc.usp.br/nilc/index.php/tools-and-resources).
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
cc @osanseviero
| 114 | Making the Hub the place for datasets in Portuguese
Let's make Hugging Face Datasets the central hub for datasets in Portuguese :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese-speaking community.
What are some datasets in Portuguese worth integrating into the Hugging Face hub?
Special thanks to @augusnunes for his collaboration on identifying the first ones:
- [NILC - USP](http://www.nilc.icmc.usp.br/nilc/index.php/tools-and-resources).
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
cc @osanseviero
Hi @omarespejel! I think the philosophy for `datasets` issues is to create concrete issues with proposals to add a specific, individual dataset rather than umbrella issues for things such as datasets for a language, since we could end up with hundreds of issues (one per language). I see NILC - USP has many datasets; I would suggest either creating an issue for their datasets or, even better, uploading them as community datasets (which is what we are encouraging now) instead of adding them to the core library, as guided in https://huggingface.co/docs/datasets/share. That would have the additional benefit that the datasets would live under the NILC organization.
@lhoestq correct me if I'm wrong please |
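To illustrate the community-dataset route suggested above, here is a minimal sketch; the repo id `nilc-usp/example-corpus` and the local CSV file are made-up placeholders, and the push assumes you are logged in to the Hub:
```python
from datasets import load_dataset

# Load some local Portuguese data (any supported format works; CSV is just an example).
ds = load_dataset("csv", data_files={"train": "train.csv"})

# Push it to the Hub under the organization's namespace (hypothetical repo id).
ds.push_to_hub("nilc-usp/example-corpus")

# Anyone with access can then reload it directly from the Hub.
reloaded = load_dataset("nilc-usp/example-corpus")
```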
https://github.com/huggingface/datasets/issues/3861 | big_patent cased version | To follow up on this: the cased and uncased versions actually contain different content, and the cased one is easier since it contains a Summary of the Invention in the input.
See the paper describing the issue here:
https://aclanthology.org/2022.gem-1.34/ | Hi! I am interested in working with the big_patent dataset.
In Tensorflow, there are a number of versions of the dataset:
- 1.0.0 : lower cased tokenized words
- 2.0.0 : Update to use cased raw strings
- 2.1.2 (default): Fix update to cased raw strings.
The version in the huggingface `datasets` library is 1.0.0. I would be very interested in using the 2.1.2 cased version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`). Is there already a way to load it, or would it be possible to add that version? | 39 | big_patent cased version
Hi! I am interested in working with the big_patent dataset.
In Tensorflow, there are a number of versions of the dataset:
- 1.0.0 : lower cased tokenized words
- 2.0.0 : Update to use cased raw strings
- 2.1.2 (default): Fix update to cased raw strings.
The version in the huggingface `datasets` library is 1.0.0. I would be very interested in using the 2.1.2 cased version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`). Is there already a way to load it, or would it be possible to add that version?
To follow up on this: the cased and uncased versions actually contain different content, and the cased one is easier since it contains a Summary of the Invention in the input.
See the paper describing the issue here:
https://aclanthology.org/2022.gem-1.34/ |
https://github.com/huggingface/datasets/issues/3861 | big_patent cased version | Thanks for proposing the addition of the cased version of this dataset and for pinging again recently.
I have just merged a PR that adds the cased version: https://huggingface.co/datasets/big_patent/discussions/3
The cased version (2.1.2) is the default one:
```python
ds = load_dataset("big_patent", "all")
```
To use the 1.0.0 version (lower cased tokenized words), pass both parameters `codes` and `version`:
```python
ds = load_dataset("big_patent", codes="all", version="1.0.0")
```
Closed by: https://huggingface.co/datasets/big_patent/discussions/3 | Hi! I am interested in working with the big_patent dataset.
In Tensorflow, there are a number of versions of the dataset:
- 1.0.0 : lower cased tokenized words
- 2.0.0 : Update to use cased raw strings
- 2.1.2 (default): Fix update to cased raw strings.
The version in the huggingface `datasets` library is 1.0.0. I would be very interested in using the 2.1.2 cased version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`). Is there already a way to load it, or would it be possible to add that version? | 68 | big_patent cased version
Hi! I am interested in working with the big_patent dataset.
In Tensorflow, there are a number of versions of the dataset:
- 1.0.0 : lower cased tokenized words
- 2.0.0 : Update to use cased raw strings
- 2.1.2 (default): Fix update to cased raw strings.
The version in the huggingface `datasets` library is 1.0.0. I would be very interested in using the 2.1.2 cased version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`). Is there already a way to load it, or would it be possible to add that version?
Thanks for proposing the addition of the cased version of this dataset and for pinging again recently.
I have just merged a PR that adds the cased version: https://huggingface.co/datasets/big_patent/discussions/3
The cased version (2.1.2) is the default one:
```python
ds = load_dataset("big_patent", "all")
```
To use the 1.0.0 version (lower cased tokenized words), pass both parameters `codes` and `version`:
```python
ds = load_dataset("big_patent", codes="all", version="1.0.0")
```
Closed by: https://huggingface.co/datasets/big_patent/discussions/3 |
https://github.com/huggingface/datasets/issues/3859 | Unable to dowload big_patent (FileNotFoundError) | Hi @slvcsl, thanks for reporting.
Yesterday we just made a patch release of our `datasets` library that fixes this issue: version 1.18.4.
https://pypi.org/project/datasets/#history
Please, feel free to update `datasets` library to the latest version:
```shell
pip install -U datasets
```
And then you should force redownload of the data file to update your local cache:
```python
ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
```
- Note that before the fix, you just downloaded and cached the Google Drive virus scan warning page, instead of the data file
This issue was already reported
- #3784
and its root cause is a change in the Google Drive service. See:
- #3786
We already fixed it. See:
- #3787
| ## Describe the bug
I am trying to download some splits of the big_patent dataset, using the following code:
`ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
`
However, this leads to a FileNotFoundError.
FileNotFoundError Traceback (most recent call last)
[<ipython-input-3-8d8a745706a9>](https://localhost:8080/#) in <module>()
1 from datasets import load_dataset
----> 2 ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
8 frames
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1705 ignore_verifications=ignore_verifications,
1706 try_from_hf_gcs=try_from_hf_gcs,
-> 1707 use_auth_token=use_auth_token,
1708 )
1709
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
593 if not downloaded_from_gcs:
594 self._download_and_prepare(
--> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
596 )
597 # Sync info
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
659 split_dict = SplitDict(dataset_name=self.name)
660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
662
663 # Checksums verification
[/root/.cache/huggingface/modules/datasets_modules/datasets/big_patent/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c/big_patent.py](https://localhost:8080/#) in _split_generators(self, dl_manager)
123 split_types = ["train", "val", "test"]
124 extract_paths = dl_manager.extract(
--> 125 {k: os.path.join(dl_path, "bigPatentData", k + ".tar.gz") for k in split_types}
126 )
127 extract_paths = {k: os.path.join(extract_paths[k], k) for k in split_types}
[/usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py](https://localhost:8080/#) in extract(self, path_or_paths, num_proc)
282 download_config.extract_compressed_file = True
283 extracted_paths = map_nested(
--> 284 partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
285 )
286 path_or_paths = NestedDataStructure(path_or_paths)
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm)
260 mapped = [
261 _single_map_nested((function, obj, types, None, True))
--> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm)
263 ]
264 else:
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <listcomp>(.0)
260 mapped = [
261 _single_map_nested((function, obj, types, None, True))
--> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm)
263 ]
264 else:
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _single_map_nested(args)
194 # Singleton first to spare some computation
195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 196 return function(data_struct)
197
198 # Reduce logging to keep things readable in multiprocessing with tqdm
[/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in cached_path(url_or_filename, download_config, **download_kwargs)
314 elif is_local_path(url_or_filename):
315 # File, but it doesn't exist.
--> 316 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
317 else:
318 # Something unknown
FileNotFoundError: Local file /root/.cache/huggingface/datasets/downloads/extracted/ad068abb3e11f9f2f5440b62e37eb2b03ee515df9de1637c55cd1793b68668b2/bigPatentData/train.tar.gz doesn't exist
I have tried this on a number of machines, including Colab, so I think this is not environment-dependent.
How do I load the bigPatent dataset? | 115 | Unable to dowload big_patent (FileNotFoundError)
## Describe the bug
I am trying to download some splits of the big_patent dataset, using the following code:
`ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
`
However, this leads to a FileNotFoundError.
FileNotFoundError Traceback (most recent call last)
[<ipython-input-3-8d8a745706a9>](https://localhost:8080/#) in <module>()
1 from datasets import load_dataset
----> 2 ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
8 frames
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1705 ignore_verifications=ignore_verifications,
1706 try_from_hf_gcs=try_from_hf_gcs,
-> 1707 use_auth_token=use_auth_token,
1708 )
1709
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
593 if not downloaded_from_gcs:
594 self._download_and_prepare(
--> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
596 )
597 # Sync info
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
659 split_dict = SplitDict(dataset_name=self.name)
660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
662
663 # Checksums verification
[/root/.cache/huggingface/modules/datasets_modules/datasets/big_patent/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c/big_patent.py](https://localhost:8080/#) in _split_generators(self, dl_manager)
123 split_types = ["train", "val", "test"]
124 extract_paths = dl_manager.extract(
--> 125 {k: os.path.join(dl_path, "bigPatentData", k + ".tar.gz") for k in split_types}
126 )
127 extract_paths = {k: os.path.join(extract_paths[k], k) for k in split_types}
[/usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py](https://localhost:8080/#) in extract(self, path_or_paths, num_proc)
282 download_config.extract_compressed_file = True
283 extracted_paths = map_nested(
--> 284 partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
285 )
286 path_or_paths = NestedDataStructure(path_or_paths)
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm)
260 mapped = [
261 _single_map_nested((function, obj, types, None, True))
--> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm)
263 ]
264 else:
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <listcomp>(.0)
260 mapped = [
261 _single_map_nested((function, obj, types, None, True))
--> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm)
263 ]
264 else:
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _single_map_nested(args)
194 # Singleton first to spare some computation
195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 196 return function(data_struct)
197
198 # Reduce logging to keep things readable in multiprocessing with tqdm
[/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in cached_path(url_or_filename, download_config, **download_kwargs)
314 elif is_local_path(url_or_filename):
315 # File, but it doesn't exist.
--> 316 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
317 else:
318 # Something unknown
FileNotFoundError: Local file /root/.cache/huggingface/datasets/downloads/extracted/ad068abb3e11f9f2f5440b62e37eb2b03ee515df9de1637c55cd1793b68668b2/bigPatentData/train.tar.gz doesn't exist
I have tried this on a number of machines, including Colab, so I think this is not environment-dependent.
How do I load the bigPatent dataset?
Hi @slvcsl, thanks for reporting.
Yesterday we just made a patch release of our `datasets` library that fixes this issue: version 1.18.4.
https://pypi.org/project/datasets/#history
Please, feel free to update `datasets` library to the latest version:
```shell
pip install -U datasets
```
And then you should force redownload of the data file to update your local cache:
```python
ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
```
- Note that before the fix, you just downloaded and cached the Google Drive virus scan warning page, instead of the data file
This issue was already reported
- #3784
and its root cause is a change in the Google Drive service. See:
- #3786
We already fixed it. See:
- #3787
|
https://github.com/huggingface/datasets/issues/3857 | Order of dataset changes due to glob.glob. | I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.
Note that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()` | ## Describe the bug
After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the operating system.
There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)`, even the streaming download manager (if I'm not mistaken):
https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483 | 48 | Order of dataset changes due to glob.glob.
## Describe the bug
After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the operating system.
There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)`, even the streaming download manager (if I'm not mistaken):
https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483
I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.
Note that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()` |
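A small sketch of the fix being discussed (the file pattern here is arbitrary):
```python
import glob

# glob.glob alone returns files in an OS-dependent, non-deterministic order...
files = glob.glob("data/*.txt")

# ...so wrapping it in sorted() makes the file order reproducible across systems.
files = sorted(glob.glob("data/*.txt"))
```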
https://github.com/huggingface/datasets/issues/3855 | Bad error message when loading private dataset | We raise the error "FileNotFoundError: can't find the dataset" mainly to follow best practice in security (otherwise users could be able to guess what private repositories users/orgs may have)
We can indeed reformulate this and add the "If this is a private repository,..." part ! | ## Describe the bug
A pretty common behavior of an interaction between the Hub and datasets is the following.
An organization adds a dataset in private mode and wants to load it afterward.
```python
from datasets import load_dataset
ds = load_dataset("NewT5/dummy_data", "dummy")
```
This command then fails with:
```bash
FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub
```
**even though** the user has access to the website `NewT5/dummy_data` since she/he is part of the org.
We need to improve the error message here similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO.
## Steps to reproduce the bug
E.g. execute the following code to see the different error messages between `transformes` and `datasets`.
1. Transformers
```python
from transformers import BertModel
BertModel.from_pretrained("NewT5/dummy_model")
```
The error message is clearer here - it gives:
```
OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
Let's maybe do the same for datasets? The PR was introduced to `transformers` here:
https://github.com/huggingface/transformers/pull/15261
## Expected results
Better error message
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
| 46 | Bad error message when loading private dataset
## Describe the bug
A pretty common behavior of an interaction between the Hub and datasets is the following.
An organization adds a dataset in private mode and wants to load it afterward.
```python
from datasets import load_dataset
ds = load_dataset("NewT5/dummy_data", "dummy")
```
This command then fails with:
```bash
FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub
```
**even though** the user has access to the website `NewT5/dummy_data` since she/he is part of the org.
We need to improve the error message here similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO.
## Steps to reproduce the bug
E.g. execute the following code to see the different error messages between `transformes` and `datasets`.
1. Transformers
```python
from transformers import BertModel
BertModel.from_pretrained("NewT5/dummy_model")
```
The error message is clearer here - it gives:
```
OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
Let's maybe do the same for datasets? The PR was introduced to `transformers` here:
https://github.com/huggingface/transformers/pull/15261
## Expected results
Better error message
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
We raise the error "FileNotFoundError: can't find the dataset" mainly to follow best practice in security (otherwise users could be able to guess what private repositories users/orgs may have)
We can indeed reformulate this and add the "If this is a private repository,..." part ! |
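For reference, a minimal sketch of how an org member can load such a private dataset today, assuming they have a valid token; the repo id is the placeholder from the issue:
```python
from datasets import load_dataset

# Log in once with `huggingface-cli login`, then pass use_auth_token=True ...
ds = load_dataset("NewT5/dummy_data", "dummy", use_auth_token=True)

# ... or pass a token string explicitly (e.g. read from an environment variable).
# ds = load_dataset("NewT5/dummy_data", "dummy", use_auth_token="hf_xxx")
```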
https://github.com/huggingface/datasets/issues/3854 | load only England English dataset from common voice english dataset | Hi @amanjaiswal777,
First note that the dataset you are trying to load is deprecated: it was the Common Voice dataset release as of Dec 2020.
Currently, Common Voice dataset releases are directly hosted on the Hub, under the Mozilla Foundation organization: https://huggingface.co/mozilla-foundation
For example, to get their latest Common Voice release (8.0):
- Go to the dataset page and request access permission (Mozilla Foundation requires this for people willing to use their datasets): https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
- Looking at the dataset card, you can check that data instances have, among other fields, the ones you are interested in: "accent", "age",...
- Then you can load their "en" language dataset as usual, besides passing your authentication token (more info on auth token here: https://huggingface.co/docs/hub/security)
```python
from datasets import load_dataset
ds_en = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token=True)
```
- Finally, you can filter only the data instances you are interested in (more info on `filter` here: https://huggingface.co/docs/datasets/process#select-and-filter):
```python
ds_england_en = ds_en.filter(lambda item: item["accent"] == "England English")
```
Feel free to reopen this issue if you need further assistance. | training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]')
testing_data = load_dataset("common_voice", "en", split="test[:200]")
I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this?
**Typical Voice Accent Proportions:**
- 24% United States English
- 8% England English
- 5% India and South Asia (India, Pakistan, Sri Lanka)
- 3% Australian English
- 3% Canadian English
- 2% Scottish English
- 1% Irish English
- 1% Southern African (South Africa, Zimbabwe, Namibia)
- 1% New Zealand English
Can we replicate this for Age as well?
**Age proportions of the common voice:-**
- 24% 19 - 29
- 14% 30 - 39
- 10% 40 - 49
- 6% < 19
- 4% 50 - 59
- 4% 60 - 69
- 1% 70 - 79 | 172 | load only England English dataset from common voice english dataset
training_data = load_dataset("common_voice", "en",split='train[:250]+validation[:250]')
testing_data = load_dataset("common_voice", "en", split="test[:200]")
I'm trying to load only 8% of the English common voice data with accent == "England English." Can somebody assist me with this?
**Typical Voice Accent Proportions:**
- 24% United States English
- 8% England English
- 5% India and South Asia (India, Pakistan, Sri Lanka)
- 3% Australian English
- 3% Canadian English
- 2% Scottish English
- 1% Irish English
- 1% Southern African (South Africa, Zimbabwe, Namibia)
- 1% New Zealand English
Can we replicate this for Age as well?
**Age proportions of the common voice:-**
- 24% 19 - 29
- 14% 30 - 39
- 10% 40 - 49
- 6% < 19
- 4% 50 - 59
- 4% 60 - 69
- 1% 70 - 79
Hi @amanjaiswal777,
First note that the dataset you are trying to load is deprecated: it was the Common Voice dataset release as of Dec 2020.
Currently, Common Voice dataset releases are directly hosted on the Hub, under the Mozilla Foundation organization: https://huggingface.co/mozilla-foundation
For example, to get their latest Common Voice release (8.0):
- Go to the dataset page and request access permission (Mozilla Foundation requires this for people willing to use their datasets): https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
- Looking at the dataset card, you can check that data instances have, among other fields, the ones you are interested in: "accent", "age",...
- Then you can load their "en" language dataset as usual, besides passing your authentication token (more info on auth token here: https://huggingface.co/docs/hub/security)
```python
from datasets import load_dataset
ds_en = load_dataset("mozilla-foundation/common_voice_8_0", "en", use_auth_token=True)
```
- Finally, you can filter only the data instances you are interested in (more info on `filter` here: https://huggingface.co/docs/datasets/process#select-and-filter):
```python
ds_england_en = ds_en.filter(lambda item: item["accent"] == "England English")
```
Feel free to reopen this issue if you need further assistance. |
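Following the same `filter` pattern, a sketch of how the age breakdown asked about above could be handled; the age label strings ("teens", "twenties", ...) are assumed from the Common Voice metadata and should be checked against the dataset card:
```python
from datasets import load_dataset

# Assumes access was granted on the Hub and the user is authenticated.
ds_en = load_dataset("mozilla-foundation/common_voice_8_0", "en", split="train", use_auth_token=True)

# Same idea as the accent filter above, applied to the "age" field.
# The label values are an assumption and should be verified against the dataset card.
ds_young_en = ds_en.filter(lambda item: item["age"] in {"teens", "twenties"})
```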
https://github.com/huggingface/datasets/issues/3851 | Load audio dataset error | Hi @lemoner20, thanks for reporting.
I'm sorry but I cannot reproduce your problem:
```python
In [1]: from datasets import load_dataset, load_metric, Audio
...: raw_datasets = load_dataset("superb", "ks", split="train")
...: print(raw_datasets[0]["audio"])
Downloading builder script: 30.2kB [00:00, 13.0MB/s]
Downloading metadata: 38.0kB [00:00, 16.6MB/s]
Downloading and preparing dataset superb/ks (download: 1.45 GiB, generated: 9.64 MiB, post-processed: Unknown size, total: 1.46 GiB) to .../.cache/huggingface/datasets/superb/ks/1.9.0/fc1f59e1fa54262dfb42de99c326a806ef7de1263ece177b59359a1a3354a9c9...
Downloading data: 100%|██████████| 1.49G/1.49G [00:37<00:00, 39.3MB/s]
Downloading data: 100%|██████████| 71.3M/71.3M [00:01<00:00, 36.1MB/s]
Downloading data files: 100%|██████████| 2/2 [00:41<00:00, 20.67s/it]
Extracting data files: 100%|██████████| 2/2 [00:28<00:00, 14.24s/it]
Dataset superb downloaded and prepared to .../.cache/huggingface/datasets/superb/ks/1.9.0/fc1f59e1fa54262dfb42de99c326a806ef7de1263ece177b59359a1a3354a9c9. Subsequent calls will reuse this data.
{'path': '.../.cache/huggingface/datasets/downloads/extracted/8571921d3088b48f58f75b2e514815033e1ffbd06aa63fd4603691ac9f1c119f/_background_noise_/doing_the_dishes.wav', 'array': array([ 0. , 0. , 0. , ..., -0.00592041,
-0.00405884, -0.00253296], dtype=float32), 'sampling_rate': 16000}
```
Which version of `datasets` are you using? Could you please fill in the environment info requested in the bug report template? You can run the command `datasets-cli env` and copy-and-paste its output below
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version: | ## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
following errors occur
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
``` | 178 | Load audio dataset error
## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
following errors occur
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
```
Hi @lemoner20, thanks for reporting.
I'm sorry but I cannot reproduce your problem:
```python
In [1]: from datasets import load_dataset, load_metric, Audio
...: raw_datasets = load_dataset("superb", "ks", split="train")
...: print(raw_datasets[0]["audio"])
Downloading builder script: 30.2kB [00:00, 13.0MB/s]
Downloading metadata: 38.0kB [00:00, 16.6MB/s]
Downloading and preparing dataset superb/ks (download: 1.45 GiB, generated: 9.64 MiB, post-processed: Unknown size, total: 1.46 GiB) to .../.cache/huggingface/datasets/superb/ks/1.9.0/fc1f59e1fa54262dfb42de99c326a806ef7de1263ece177b59359a1a3354a9c9...
Downloading data: 100%|██████████| 1.49G/1.49G [00:37<00:00, 39.3MB/s]
Downloading data: 100%|██████████| 71.3M/71.3M [00:01<00:00, 36.1MB/s]
Downloading data files: 100%|██████████| 2/2 [00:41<00:00, 20.67s/it]
Extracting data files: 100%|██████████| 2/2 [00:28<00:00, 14.24s/it]
Dataset superb downloaded and prepared to .../.cache/huggingface/datasets/superb/ks/1.9.0/fc1f59e1fa54262dfb42de99c326a806ef7de1263ece177b59359a1a3354a9c9. Subsequent calls will reuse this data.
{'path': '.../.cache/huggingface/datasets/downloads/extracted/8571921d3088b48f58f75b2e514815033e1ffbd06aa63fd4603691ac9f1c119f/_background_noise_/doing_the_dishes.wav', 'array': array([ 0. , 0. , 0. , ..., -0.00592041,
-0.00405884, -0.00253296], dtype=float32), 'sampling_rate': 16000}
```
Which version of `datasets` are you using? Could you please fill in the environment info requested in the bug report template? You can run the command `datasets-cli env` and copy-and-paste its output below
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version: |
https://github.com/huggingface/datasets/issues/3851 | Load audio dataset error | @albertvillanova Thanks for your reply. The environment info below
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-4.19.91-007.ali4000.alios7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.12
- PyArrow version: 6.0.1 | ## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
following errors occur
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
``` | 27 | Load audio dataset error
## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
following errors occur
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
```
@albertvillanova Thanks for your reply. The environment info below
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-4.19.91-007.ali4000.alios7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.12
- PyArrow version: 6.0.1 |
https://github.com/huggingface/datasets/issues/3851 | Load audio dataset error | Thanks @lemoner20,
I cannot reproduce your issue in datasets version 1.18.3 either.
Redownloading the data file may work if you had already cached this dataset previously. Could you please try passing "force_redownload"?
```python
raw_datasets = load_dataset("superb", "ks", split="train", download_mode="force_redownload") | ## Load audio dataset error
Hi, when I load audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
following errors occur
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
``` | 40 | Load audio dataset error
## Load audio dataset error
Hi, when I load an audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
the following errors occur:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
```
Thanks @lemoner20,
I cannot reproduce your issue in datasets version 1.18.3 either.
Redownloading the data file may help if you had already cached this dataset previously. Could you please try passing "force_redownload"?
```python
raw_datasets = load_dataset("superb", "ks", split="train", download_mode="force_redownload")
``` |
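As a hedged sketch of the suggestion above (assuming the `superb` "ks" config is reachable and an audio backend such as `librosa`/`soundfile` is installed), one can force a fresh download so a possibly corrupted cache is not reused, then check that a single example decodes:

```python
from datasets import load_dataset

# Re-download instead of reusing a possibly corrupted cache, then decode one example.
raw_datasets = load_dataset(
    "superb", "ks", split="train", download_mode="force_redownload"
)

sample = raw_datasets[0]["audio"]   # decoding happens on access
print(sample["sampling_rate"])      # expected: 16000
print(sample["array"][:5])          # first few amplitude values
```

If this still raises the `TypeError` above, the cache was likely not the problem and the library versions are the next thing to check.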
https://github.com/huggingface/datasets/issues/3851 | Load audio dataset error | @albertvillanova, you can actually reproduce the error if you reach the cell `common_voice_train[0]["path"]` of this [notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb#scrollTo=_0kRndSvqaKk). The error is resolved after updating the versions of the libraries used there. | ## Load audio dataset error
Hi, when I load an audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
the following errors occur:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
``` | 29 | Load audio dataset error
## Load audio dataset error
Hi, when I load an audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
the following errors occur:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
```
@albertvillanova, you can actually reproduce the error if you reach the cell `common_voice_train[0]["path"]` of this [notebook](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb#scrollTo=_0kRndSvqaKk). The error is resolved after updating the versions of the libraries used there. |
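Since the comment above attributes the fix to updating library versions, a small, hedged check (the exact versions that resolve the issue are not stated in the thread, so the upgrade command is an assumption) can confirm what is currently installed before re-running the notebook:

```python
# Print the versions of the libraries involved in the failing decode path.
import datasets
import librosa
import soundfile

print("datasets :", datasets.__version__)
print("librosa  :", librosa.__version__)
print("soundfile:", soundfile.__version__)

# After upgrading (e.g. `pip install --upgrade datasets librosa soundfile`),
# restart the runtime and re-run the loading cell.
```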
https://github.com/huggingface/datasets/issues/3851 | Load audio dataset error | @jvel07, thanks for reporting and finding a solution.
Maybe we could tell @patrickvonplaten about the version pinning issue in his notebook. | ## Load audio dataset error
Hi, when I load an audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
the following errors occur:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
``` | 21 | Load audio dataset error
## Load audio dataset error
Hi, when I load an audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
the following errors occur:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
```
@jvel07, thanks for reporting and finding a solution.
Maybe we could tell @patrickvonplaten about the version pinning issue in his notebook. |