column                    dtype     range / classes
url                       string    length 61–61
repository_url            string    1 class
labels_url                string    length 75–75
comments_url              string    length 70–70
events_url                string    length 68–68
html_url                  string    length 51–51
id                        int64     1.29B–1.57B
node_id                   string    length 18–18
number                    int64     4.59k–5.51k
title                     string    length 10–165
user                      dict
labels                    list
state                     string    2 classes
locked                    bool      1 class
assignee                  dict
assignees                 list
milestone                 null
comments                  int64     0–48
created_at                unknown
updated_at                unknown
closed_at                 unknown
author_association        string    3 classes
active_lock_reason        null
body                      string    length 51–33.9k, nullable
reactions                 dict
timeline_url              string    length 70–70
performed_via_github_app  null
state_reason              string    3 classes
draft                     bool      0 classes
pull_request              dict
is_pull_request           bool      1 class
https://api.github.com/repos/huggingface/datasets/issues/5224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5224/comments
https://api.github.com/repos/huggingface/datasets/issues/5224/events
https://github.com/huggingface/datasets/issues/5224
1,443,640,867
I_kwDODunzps5WDDYj
5,224
Seems to freeze when loading audio dataset with wav files from local folder
{ "login": "uriii3", "id": 45894267, "node_id": "MDQ6VXNlcjQ1ODk0MjY3", "avatar_url": "https://avatars.githubusercontent.com/u/45894267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/uriii3", "html_url": "https://github.com/uriii3", "followers_url": "https://api.github.com/users/uriii3/followers", "following_url": "https://api.github.com/users/uriii3/following{/other_user}", "gists_url": "https://api.github.com/users/uriii3/gists{/gist_id}", "starred_url": "https://api.github.com/users/uriii3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/uriii3/subscriptions", "organizations_url": "https://api.github.com/users/uriii3/orgs", "repos_url": "https://api.github.com/users/uriii3/repos", "events_url": "https://api.github.com/users/uriii3/events{/privacy}", "received_events_url": "https://api.github.com/users/uriii3/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-11-10T10:29:31"
"2022-11-22T11:24:19"
"2022-11-22T11:24:19"
NONE
null
### Describe the bug

I'm following the instructions at https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata to load a dataset from a local folder. I have everything in one folder, with a train folder containing the audio files and the CSV. When I try to load the dataset and run it from the terminal, it seems to work but then freezes for no apparent reason. The metadata.csv file contains a few columns, but the important ones, `file_name` (the filename) and `transcription` (the transcription), are okay. The audio files are `.wav`; I don't know if that might be the problem (I will try converting them all to `.mp3` and try again).

### Steps to reproduce the bug

The code I'm using:

```python
from datasets import load_dataset

dataset = load_dataset("audiofolder", data_dir="../archive/Dataset")
dataset[0]["audio"]
```

The output I obtain:

```
Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 439/439 [00:00<00:00, 311135.43it/s]
Using custom data configuration default-38d4546ffd010f3e
Downloading and preparing dataset audiofolder/default to /Users/mine/.cache/huggingface/datasets/audiofolder/default-38d4546ffd010f3e/0.0.0/6cbdd16f8688354c63b4e2a36e1585d05de285023ee6443ffd71c4182055c0fc...
[the "Resolving data files: 100%| ... | 439/439" and "Using custom data configuration default-38d4546ffd010f3e" lines repeat over a dozen more times]
```

And then here it just freezes and nothing more happens.

### Expected behavior

Load the dataset.

### Environment info

Datasets version: datasets 2.6.1 pypi_0 pypi
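For reference, a minimal sketch of the AudioFolder layout and load call this report assumes; the folder and file names below are illustrative, and note that a loaded `DatasetDict` is indexed by split name before row index, so the `dataset[0]["audio"]` line in the snippet above would fail even without the freeze:

```python
# Illustrative layout (not the reporter's actual files):
#   archive/Dataset/
#     train/
#       metadata.csv    # must contain a `file_name` column, plus e.g. `transcription`
#       audio_0001.wav
#       audio_0002.wav
from datasets import load_dataset

# `data_dir` points at the folder that contains the train/ split directory.
dataset = load_dataset("audiofolder", data_dir="archive/Dataset")
print(dataset["train"][0]["audio"])  # index the split first, then the row
```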
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5224/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5222/comments
https://api.github.com/repos/huggingface/datasets/issues/5222/events
https://github.com/huggingface/datasets/issues/5222
1,442,412,507
I_kwDODunzps5V-Xfb
5,222
HuggingFace website is incorrectly reporting that my datasets are pickled
{ "login": "ProGamerGov", "id": 10626398, "node_id": "MDQ6VXNlcjEwNjI2Mzk4", "avatar_url": "https://avatars.githubusercontent.com/u/10626398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ProGamerGov", "html_url": "https://github.com/ProGamerGov", "followers_url": "https://api.github.com/users/ProGamerGov/followers", "following_url": "https://api.github.com/users/ProGamerGov/following{/other_user}", "gists_url": "https://api.github.com/users/ProGamerGov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ProGamerGov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ProGamerGov/subscriptions", "organizations_url": "https://api.github.com/users/ProGamerGov/orgs", "repos_url": "https://api.github.com/users/ProGamerGov/repos", "events_url": "https://api.github.com/users/ProGamerGov/events{/privacy}", "received_events_url": "https://api.github.com/users/ProGamerGov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
"2022-11-09T16:41:16"
"2022-11-09T18:10:46"
"2022-11-09T18:06:57"
NONE
null
### Describe the bug

Hugging Face is incorrectly reporting that my datasets are pickled. They are not pickled; they are simple ZIP files containing PNG images. Hopefully this is the right place to report this bug.

### Steps to reproduce the bug

Inspect my dataset repository here: https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images

### Expected behavior

They should not be reported as being pickled.

### Environment info

N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5222/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5221/comments
https://api.github.com/repos/huggingface/datasets/issues/5221/events
https://github.com/huggingface/datasets/issues/5221
1,442,309,094
I_kwDODunzps5V9-Pm
5,221
Cannot push
{ "login": "bayartsogt-ya", "id": 43239645, "node_id": "MDQ6VXNlcjQzMjM5NjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/43239645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bayartsogt-ya", "html_url": "https://github.com/bayartsogt-ya", "followers_url": "https://api.github.com/users/bayartsogt-ya/followers", "following_url": "https://api.github.com/users/bayartsogt-ya/following{/other_user}", "gists_url": "https://api.github.com/users/bayartsogt-ya/gists{/gist_id}", "starred_url": "https://api.github.com/users/bayartsogt-ya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bayartsogt-ya/subscriptions", "organizations_url": "https://api.github.com/users/bayartsogt-ya/orgs", "repos_url": "https://api.github.com/users/bayartsogt-ya/repos", "events_url": "https://api.github.com/users/bayartsogt-ya/events{/privacy}", "received_events_url": "https://api.github.com/users/bayartsogt-ya/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-11-09T15:32:05"
"2022-11-10T18:11:21"
"2022-11-10T18:11:11"
NONE
null
### Describe the bug

I am facing this issue when I try to push a tar.gz file of around 11 GB to the Hub.

```
(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 β€Ήmain●›
╰─$ du -sh *
4.0K	README.md
 13G	data
516K	test.jsonl
 18M	train.jsonl
4.0K	ulaanbal_v0.py
 11G	ulaanbal_v0.tar.gz
452K	validation.jsonl

(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 β€Ήmain●›
╰─$ git add ulaanbal_v0.tar.gz && git commit -m 'large version'

(venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 β€Ήmain●›
╰─$ git push
EOFoading LFS objects:   0% (0/1), 0 B | 0 B/s
Uploading LFS objects:   0% (0/1), 0 B | 0 B/s, done.
error: failed to push some refs to 'https://huggingface.co/datasets/bayartsogt/ulaanbal_v0'
```

I have already tried pushing a small version of this and it worked fine, so my guess is that the problem is the big file.

I ran the following before the commit:

```
╰─$ git lfs install
╰─$ huggingface-cli lfs-enable-largefiles .
```

### Steps to reproduce the bug

Create a private dataset on Hugging Face and push a 12 GB tar.gz file.

### Expected behavior

To be pushed with no issue.

### Environment info

- `datasets` version: 2.6.1
- Platform: Darwin-21.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 10.0.0
- Pandas version: 1.3.5
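Not from the report, but as a possible workaround: very large files can be uploaded through the `huggingface_hub` client instead of `git push`. A sketch, assuming a `huggingface_hub` version that provides `HfApi.upload_file`:

```python
from huggingface_hub import HfApi

api = HfApi()
# Uploads a single large file over HTTP instead of going through git-lfs.
api.upload_file(
    path_or_fileobj="ulaanbal_v0.tar.gz",
    path_in_repo="ulaanbal_v0.tar.gz",
    repo_id="bayartsogt/ulaanbal_v0",
    repo_type="dataset",
)
```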
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5221/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5220/comments
https://api.github.com/repos/huggingface/datasets/issues/5220/events
https://github.com/huggingface/datasets/issues/5220
1,441,664,377
I_kwDODunzps5V7g15
5,220
Implicit type conversion of lists in to_pandas
{ "login": "sanderland", "id": 48946947, "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanderland", "html_url": "https://github.com/sanderland", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "organizations_url": "https://api.github.com/users/sanderland/orgs", "repos_url": "https://api.github.com/users/sanderland/repos", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "received_events_url": "https://api.github.com/users/sanderland/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-11-09T08:40:18"
"2022-11-10T16:12:26"
"2022-11-10T16:12:26"
CONTRIBUTOR
null
### Describe the bug

```python
ds = Dataset.from_list([{'a': [1, 2, 3]}])
ds.to_pandas().a.values[0]
```

Results in `array([1, 2, 3])`: a rather unexpected type conversion, which makes downstream tools that expect lists unhappy.

### Steps to reproduce the bug

See snippet.

### Expected behavior

Keep the original type.

### Environment info

datasets 2.6.1
python 3.8.10
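A sketch of one downstream workaround, assuming plain pandas: map the NumPy arrays back to Python lists right after the conversion.

```python
from datasets import Dataset

ds = Dataset.from_list([{"a": [1, 2, 3]}])
df = ds.to_pandas()
# to_pandas() materializes list columns as numpy arrays; convert back explicitly.
df["a"] = df["a"].map(list)
assert df.a.values[0] == [1, 2, 3]
```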
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5220/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5219
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5219/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5219/comments
https://api.github.com/repos/huggingface/datasets/issues/5219/events
https://github.com/huggingface/datasets/issues/5219
1,441,255,910
I_kwDODunzps5V59Hm
5,219
Delta Tables usage using Datasets Library
{ "login": "reichenbch", "id": 23002137, "node_id": "MDQ6VXNlcjIzMDAyMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/23002137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reichenbch", "html_url": "https://github.com/reichenbch", "followers_url": "https://api.github.com/users/reichenbch/followers", "following_url": "https://api.github.com/users/reichenbch/following{/other_user}", "gists_url": "https://api.github.com/users/reichenbch/gists{/gist_id}", "starred_url": "https://api.github.com/users/reichenbch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reichenbch/subscriptions", "organizations_url": "https://api.github.com/users/reichenbch/orgs", "repos_url": "https://api.github.com/users/reichenbch/repos", "events_url": "https://api.github.com/users/reichenbch/events{/privacy}", "received_events_url": "https://api.github.com/users/reichenbch/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
3
"2022-11-09T02:43:56"
"2022-11-18T10:22:25"
null
NONE
null
### Feature request

Add compatibility between the Datasets library and the Delta format, elevating the utilities of the Datasets library from machine-learning scope to data-engineering scope as well.

### Motivation

The datasets library can already absorb CSV, JSON, Parquet, etc. file formats, but it would be great if it could also work with Delta Tables (the Delta format), which offer features such as time travel, layout optimization, and query performance that aid data engineering. This would enhance the Datasets library from a machine-learning utility to a data-engineering one and expand its horizons thereafter. I use the Datasets library in all my use cases, and as my role expands so does the work; compatibility with the Datasets library is something I don't want to lose.

### Your contribution

I would love to work on this feature, even if it has to be picked up from scratch, including design paradigms and patterns. I have a basic idea of Delta Live Tables and could brush up on it easily for this feature.
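Until such native support exists, a hedged sketch of a possible bridge, assuming the third-party `deltalake` (delta-rs) package and its `DeltaTable.to_pandas()` API:

```python
from deltalake import DeltaTable  # assumption: the delta-rs Python bindings are installed
from datasets import Dataset

# Read a snapshot of a Delta table and wrap it as a datasets.Dataset.
dt = DeltaTable("/path/to/delta_table")  # path is illustrative
ds = Dataset.from_pandas(dt.to_pandas())
```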
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5219/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5218/comments
https://api.github.com/repos/huggingface/datasets/issues/5218/events
https://github.com/huggingface/datasets/issues/5218
1,441,254,194
I_kwDODunzps5V58sy
5,218
Delta Tables usage using Datasets Library
{ "login": "rcv-koo", "id": 103188035, "node_id": "U_kgDOBiaGQw", "avatar_url": "https://avatars.githubusercontent.com/u/103188035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcv-koo", "html_url": "https://github.com/rcv-koo", "followers_url": "https://api.github.com/users/rcv-koo/followers", "following_url": "https://api.github.com/users/rcv-koo/following{/other_user}", "gists_url": "https://api.github.com/users/rcv-koo/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcv-koo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcv-koo/subscriptions", "organizations_url": "https://api.github.com/users/rcv-koo/orgs", "repos_url": "https://api.github.com/users/rcv-koo/repos", "events_url": "https://api.github.com/users/rcv-koo/events{/privacy}", "received_events_url": "https://api.github.com/users/rcv-koo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
0
"2022-11-09T02:42:18"
"2022-11-09T02:42:36"
"2022-11-09T02:42:36"
NONE
null
### Feature request

Add compatibility between the Datasets library and the Delta format, elevating the utilities of the Datasets library from machine-learning scope to data-engineering scope as well.

### Motivation

The datasets library can already absorb CSV, JSON, Parquet, etc. file formats, but it would be great if it could also work with Delta Tables (the Delta format), which offer features such as time travel, layout optimization, and query performance that aid data engineering. This would enhance the Datasets library from a machine-learning utility to a data-engineering one and expand its horizons thereafter. I use the Datasets library in all my use cases, and as my role expands so does the work; compatibility with the Datasets library is something I don't want to lose.

### Your contribution

I would love to work on this feature, even if it has to be picked up from scratch, including design paradigms and patterns. I have a basic idea of Delta Live Tables and could brush up on it easily for this feature.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5218/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5216/comments
https://api.github.com/repos/huggingface/datasets/issues/5216/events
https://github.com/huggingface/datasets/issues/5216
1,441,041,947
I_kwDODunzps5V5I4b
5,216
save_elasticsearch_index
{ "login": "amobash2", "id": 12739718, "node_id": "MDQ6VXNlcjEyNzM5NzE4", "avatar_url": "https://avatars.githubusercontent.com/u/12739718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amobash2", "html_url": "https://github.com/amobash2", "followers_url": "https://api.github.com/users/amobash2/followers", "following_url": "https://api.github.com/users/amobash2/following{/other_user}", "gists_url": "https://api.github.com/users/amobash2/gists{/gist_id}", "starred_url": "https://api.github.com/users/amobash2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amobash2/subscriptions", "organizations_url": "https://api.github.com/users/amobash2/orgs", "repos_url": "https://api.github.com/users/amobash2/repos", "events_url": "https://api.github.com/users/amobash2/events{/privacy}", "received_events_url": "https://api.github.com/users/amobash2/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2022-11-08T23:06:52"
"2022-11-09T13:16:45"
null
NONE
null
Hi, I am new to Datasets and Elasticsearch. I was wondering: is there any equivalent of `save_faiss_index` for saving an Elasticsearch index locally for later use, to remove the need to re-index a dataset?
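Since Elasticsearch keeps the index on the server rather than in the client process, re-indexing can usually be avoided by re-attaching the existing index. A sketch assuming `Dataset.load_elasticsearch_index`, a running Elasticsearch server, and an index built earlier under the (illustrative) name below:

```python
from datasets import load_dataset

ds = load_dataset("crime_and_punish", split="train[:100]")  # illustrative dataset
# Re-attach a server-side index that already exists, instead of rebuilding
# it with add_elasticsearch_index().
ds.load_elasticsearch_index(
    "context",                    # name used for ds.get_nearest_examples(...)
    es_index_name="my_es_index",  # existing Elasticsearch index (illustrative)
    host="localhost",
    port="9200",
)
```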
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5216/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5209/comments
https://api.github.com/repos/huggingface/datasets/issues/5209/events
https://github.com/huggingface/datasets/issues/5209
1,438,367,678
I_kwDODunzps5Vu7--
5,209
Implement ability to define splits in metadata section of dataset card
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
8
"2022-11-07T13:27:16"
"2022-12-21T13:22:29"
null
CONTRIBUTOR
null
### Feature request

If you go here: https://huggingface.co/datasets/inria-soda/tabular-benchmark/tree/main, you will see a bunch of folders holding various CSV files. I'd like the dataset viewer to show these files instead of only one dataset like it currently does (and also for people to be able to load them as splits instead of loading through `data_files`; see the sketch below). E.g. GLUE has various splits on the viewer, but it's overkill to ask people to implement a loading script, so it would be better to let them define these in the README file instead. Also pinging @polinaeterna @lhoestq @adrinjalali
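For context, a sketch of the `data_files` workaround the request alludes to; the CSV paths are illustrative, not the repository's actual layout:

```python
from datasets import load_dataset

# Workaround today: name the splits by hand through data_files.
data_files = {
    "train": "some_folder/some_table_train.csv",  # illustrative paths
    "test": "some_folder/some_table_test.csv",
}
ds = load_dataset("inria-soda/tabular-benchmark", data_files=data_files)
```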
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5209/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5209/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5207/comments
https://api.github.com/repos/huggingface/datasets/issues/5207/events
https://github.com/huggingface/datasets/issues/5207
1,437,858,506
I_kwDODunzps5Vs_rK
5,207
Connection error of the HuggingFace's dataset Hub due to SSLError with proxy
{ "login": "leemgs", "id": 82404, "node_id": "MDQ6VXNlcjgyNDA0", "avatar_url": "https://avatars.githubusercontent.com/u/82404?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leemgs", "html_url": "https://github.com/leemgs", "followers_url": "https://api.github.com/users/leemgs/followers", "following_url": "https://api.github.com/users/leemgs/following{/other_user}", "gists_url": "https://api.github.com/users/leemgs/gists{/gist_id}", "starred_url": "https://api.github.com/users/leemgs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leemgs/subscriptions", "organizations_url": "https://api.github.com/users/leemgs/orgs", "repos_url": "https://api.github.com/users/leemgs/repos", "events_url": "https://api.github.com/users/leemgs/events{/privacy}", "received_events_url": "https://api.github.com/users/leemgs/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
"2022-11-07T06:56:23"
"2022-11-12T15:31:58"
null
NONE
null
### Describe the bug

It's weird: I cannot connect to the Hugging Face dataset Hub due to an SSLError in my office. Even when I try to connect using my company's proxy address (e.g., http_proxy and https_proxy), I'm getting the SSLError issue. What should I do to download the dataset stored on Hugging Face normally? I welcome any comments; I think they will be helpful to me.

* Dataset address - https://huggingface.co/datasets/moyix/debian_csrc/viewer/moyix--debian_csrc
* Log message

```
............ OMISSION ..............
Traceback (most recent call last):
  File "/data/home/geunsik-lim/qtlab/./transformers/examples/pytorch/language-modeling/run_clm.py", line 587, in <module>
    main()
  File "/data/home/geunsik-lim/qtlab/./transformers/examples/pytorch/language-modeling/run_clm.py", line 278, in main
    raw_datasets = load_dataset(
  File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1719, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1497, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1222, in dataset_module_factory
    raise e1 from None
  File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1179, in dataset_module_factory
    raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})")
ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError)
[2022-11-07 15:23:38,476] [INFO] [launch.py:318:sigkill_handler] Killing subprocess 6760
[2022-11-07 15:23:38,476] [ERROR] [launch.py:324:sigkill_handler] ['/home/geunsik-lim/anaconda3/envs/deepspeed/bin/python', '-u', './transformers/examples/pytorch/language-modeling/run_clm.py', '--local_rank=0', '--model_name_or_path=Salesforce/codegen-350M-multi', '--per_device_train_batch_size=1', '--learning_rate', '2e-5', '--num_train_epochs', '1', '--output_dir=./codegen-350M-finetuned', '--overwrite_output_dir', '--dataset_name', 'moyix/debian_csrc', '--cache_dir', '/data/home/geunsik-lim/.cache', '--tokenizer_name', 'Salesforce/codegen-350M-multi', '--block_size', '2048', '--gradient_accumulation_steps', '32', '--do_train', '--fp16', '--deepspeed', 'ds_config_zero2.json'] exits with return code = 1

real	0m7.742s
user	0m4.930s
```

### Steps to reproduce the bug

Steps to reproduce this behavior:

```
(deepspeed) geunsik-lim@ai02:~/qtlab$ ./test_debian_csrc_dataset.py
Traceback (most recent call last):
  File "/data/home/geunsik-lim/qtlab/./test_debian_csrc_dataset.py", line 6, in <module>
    dataset = load_dataset("moyix/debian_csrc")
  File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1719, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1497, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1222, in dataset_module_factory
    raise e1 from None
  File "/home/geunsik-lim/anaconda3/envs/deepspeed/lib/python3.10/site-packages/datasets/load.py", line 1179, in dataset_module_factory
    raise ConnectionError(f"Couldn't reach '{path}' on the Hub ({type(e).__name__})")
ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError)
(deepspeed) geunsik-lim@ai02:~/qtlab$ cat ./test_debian_csrc_dataset.py
#!/usr/bin/env python
from datasets import load_dataset
dataset = load_dataset("moyix/debian_csrc")
```

1. Added the company's proxy address in /etc/profile.
2. Downloaded the dataset with the `load_dataset()` function of the `datasets` package provided by Hugging Face.
3. In this case, the address would be "moyix--debian_csrc".
4. I get the "`ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError)`" error message.

### Expected behavior

* error message: ConnectionError: Couldn't reach 'moyix/debian_csrc' on the Hub (SSLError)

### Environment info

* software version information:

```
(deepspeed) geunsik-lim@ai02:~$ conda list -f pytorch
# packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed:
#
# Name                    Version                   Build  Channel
pytorch                   1.13.0          py3.10_cuda11.7_cudnn8.5.0_0    pytorch
(deepspeed) geunsik-lim@ai02:~$ conda list -f python
# packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed:
#
# Name                    Version                   Build  Channel
python                    3.10.6               haa1d7c7_1
(deepspeed) geunsik-lim@ai02:~$ conda list -f datasets
# packages in environment at /home/geunsik-lim/anaconda3/envs/deepspeed:
#
# Name                    Version                   Build  Channel
datasets                  2.6.1                      py_0    huggingface
(deepspeed) geunsik-lim@ai02:~$ uname -a
Linux ai02 5.4.0-131-generic #147-Ubuntu SMP Fri Oct 14 17:07:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
(deepspeed) geunsik-lim@ai02:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.5 LTS"
```
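Not part of the report, but a sketch of pinning the proxy and corporate CA bundle for the underlying `requests` stack before calling `load_dataset`; the proxy address and certificate path are placeholders:

```python
import os

# Placeholders: substitute the company proxy and its CA certificate bundle.
os.environ["HTTP_PROXY"] = "http://proxy.example.com:8080"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:8080"
# SSL interception by a corporate proxy typically needs the proxy's CA trusted:
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/company-ca.pem"

from datasets import load_dataset

dataset = load_dataset("moyix/debian_csrc")
```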
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5207/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5206/comments
https://api.github.com/repos/huggingface/datasets/issues/5206/events
https://github.com/huggingface/datasets/issues/5206
1,437,223,894
I_kwDODunzps5VqkvW
5,206
Use logging instead of printing to console
{ "login": "bilelomrani1", "id": 16692099, "node_id": "MDQ6VXNlcjE2NjkyMDk5", "avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilelomrani1", "html_url": "https://github.com/bilelomrani1", "followers_url": "https://api.github.com/users/bilelomrani1/followers", "following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}", "gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions", "organizations_url": "https://api.github.com/users/bilelomrani1/orgs", "repos_url": "https://api.github.com/users/bilelomrani1/repos", "events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}", "received_events_url": "https://api.github.com/users/bilelomrani1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-11-05T23:48:02"
"2022-11-06T00:06:00"
"2022-11-06T00:05:59"
NONE
null
### Describe the bug

Some logs ([here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L778), [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L786), and [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L830)) generated by the `DatasetBuilder` are printed to the console instead of being passed to the `datasets` logger.

### Steps to reproduce the bug

```python
>>> import datasets
>>> datasets.load_dataset("some-dataset")
Downloading and preparing dataset csv/data to <path>...
Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 7729.06it/s]
Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 527.23it/s]
Dataset csv downloaded and prepared to <path>. Subsequent calls will reuse this data.
```

### Expected behavior

The logs should not be printed to the console directly, but passed to the logger so that the user can redirect them wherever they want.

### Environment info

- `datasets` version: 2.6.1
- Platform: macOS-13.0-x86_64-i386-64bit
- Python version: 3.9.15
- PyArrow version: 10.0.0
- Pandas version: 1.5.1
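Until those prints go through the logger, the surrounding output can at least be quieted; a sketch using the `datasets` logging utilities (the CSV path is illustrative):

```python
import datasets

# Silence datasets' own log records below ERROR.
datasets.logging.set_verbosity_error()
# Progress bars are emitted separately from the logger and need their own switch.
datasets.disable_progress_bar()

ds = datasets.load_dataset("csv", data_files="some.csv")  # illustrative
```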
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5206/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5204/comments
https://api.github.com/repos/huggingface/datasets/issues/5204/events
https://github.com/huggingface/datasets/issues/5204
1,437,221,259
I_kwDODunzps5VqkGL
5,204
`push_to_hub` not propagating `token` through `DownloadConfig`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
3
"2022-11-05T23:32:20"
"2022-11-08T10:12:09"
"2022-11-08T10:12:08"
CONTRIBUTOR
null
### Describe the bug

When trying to upload a new πŸ€— Dataset to the Hub via Python, providing the `token` as a parameter to the `Dataset.push_to_hub` function, it works only the first time, assuming that the dataset didn't exist before. When trying to run `Dataset.push_to_hub` again over the same dataset, instead of updating it, it throws a `ConnectionError` while retrieving the `README.md` (which may contain metadata about the dataset that should also be updated). Since the `token` is not propagated, the `DownloadConfig` provided to the `datasets.utils.file_utils.get_from_cache` function doesn't have its `use_auth_token` value set to `token`; it just uses the default, which is None/False.

So when uploading a dataset via Python with `push_to_hub` and the Hugging Face API token as the `token` parameter, the upload only succeeds when the dataset is new; otherwise it fails with a `ConnectionError` because the `token` is not propagated as `use_auth_token`.

### Steps to reproduce the bug

Let's create a new dataset in our HF account via Python as:

```python
from datasets import Dataset

data = {"a": [1, 2, 3], "b": [4, 5, 6]}
ds = Dataset.from_dict(data)
ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>)
```

When we create the `Dataset` for the first time it works and there are no issues, but when trying to actually upload a new version of the same dataset (same name under the same username), we encounter the following issue:

```python
from datasets import Dataset

data = {"a": [1, 2, 3], "b": [4, 5, 6]}
ds = Dataset.from_dict(data)
ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>)
>>> ConnectionError: Couldn't reach https://huggingface.co/datasets/alvarobartt/demo/resolve/main/README.md (ConnectionError('Unauthorized for URL https://huggingface.co/datasets/<HF_USERNAME>/<HF_DATASET>/resolve/main/README.md. Please use the parameter `use_auth_token=True` after logging in with `huggingface-cli login`'))
```

### Expected behavior

Ideally, the `token` parameter provided to `push_to_hub` should be propagated and used to download the `README.md` when trying to update a `Dataset`, instead of throwing that exception, so that the authentication can be done directly through code without running `huggingface-cli login`, as mentioned at https://huggingface.co/docs/datasets/upload_dataset#upload-with-python.

### Environment info

- `datasets` version: 2.6.1
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 10.0.0
- Pandas version: 1.5.1
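Until the fix lands, one workaround is to persist the token globally so the default credential resolution picks it up everywhere; a sketch assuming `huggingface_hub.HfFolder` (the token and repo names are placeholders):

```python
from huggingface_hub import HfFolder
from datasets import Dataset

# Equivalent to `huggingface-cli login`: stores the token so every downstream
# call (including the README.md fetch) is authenticated.
HfFolder.save_token("<HF_TOKEN_HERE>")

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [4, 5, 6]})
ds.push_to_hub(repo_id="<HF_USERNAME>/<HF_DATASET>")
```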
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5204/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5202/comments
https://api.github.com/repos/huggingface/datasets/issues/5202/events
https://github.com/huggingface/datasets/issues/5202
1,435,886,090
I_kwDODunzps5VleIK
5,202
CI fails after bulk edit of canonical datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2022-11-04T10:51:20"
"2022-11-04T10:51:37"
null
MEMBER
null
```
______ test_get_dataset_config_info[paws-labeled_final-expected_splits2] _______
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python

path = 'paws', config_name = 'labeled_final'
expected_splits = ['train', 'test', 'validation']

    @pytest.mark.parametrize(
        "path, config_name, expected_splits",
        [
            ("squad", "plain_text", ["train", "validation"]),
            ("dalle-mini/wit", "dalle-mini--wit", ["train"]),
            ("paws", "labeled_final", ["train", "test", "validation"]),
        ],
    )
    def test_get_dataset_config_info(path, config_name, expected_splits):
        info = get_dataset_config_info(path, config_name=config_name)
        assert info.config_name == config_name
>       assert list(info.splits.keys()) == expected_splits
E       AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E         At index 0 diff: 'test' != 'train'
E         Full diff:
E         - ['train', 'test', 'validation']
E         + ['test', 'train', 'validation']

tests/test_inspect.py:45: AssertionError
_ test_get_dataset_info[paws-expected_configs2-expected_splits_in_first_config2] _
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python

path = 'paws'
expected_configs = ['labeled_final', 'labeled_swap', 'unlabeled_final']
expected_splits_in_first_config = ['train', 'test', 'validation']

    @pytest.mark.parametrize(
        "path, expected_configs, expected_splits_in_first_config",
        [
            ("squad", ["plain_text"], ["train", "validation"]),
            ("dalle-mini/wit", ["dalle-mini--wit"], ["train"]),
            ("paws", ["labeled_final", "labeled_swap", "unlabeled_final"], ["train", "test", "validation"]),
        ],
    )
    def test_get_dataset_info(path, expected_configs, expected_splits_in_first_config):
        infos = get_dataset_infos(path)
        assert list(infos.keys()) == expected_configs
        expected_config = expected_configs[0]
        assert expected_config in infos
        info = infos[expected_config]
        assert info.config_name == expected_config
>       assert list(info.splits.keys()) == expected_splits_in_first_config
E       AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E         At index 0 diff: 'test' != 'train'
E         Full diff:
E         - ['train', 'test', 'validation']
E         + ['test', 'train', 'validation']

tests/test_inspect.py:90: AssertionError
______ test_get_dataset_split_names[paws-labeled_final-expected_splits2] _______
[gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python

path = 'paws', expected_config = 'labeled_final'
expected_splits = ['train', 'test', 'validation']

    @pytest.mark.parametrize(
        "path, expected_config, expected_splits",
        [
            ("squad", "plain_text", ["train", "validation"]),
            ("dalle-mini/wit", "dalle-mini--wit", ["train"]),
            ("paws", "labeled_final", ["train", "test", "validation"]),
        ],
    )
    def test_get_dataset_split_names(path, expected_config, expected_splits):
        infos = get_dataset_infos(path)
        assert expected_config in infos
        info = infos[expected_config]
        assert info.config_name == expected_config
>       assert list(info.splits.keys()) == expected_splits
E       AssertionError: assert ['test', 'tra... 'validation'] == ['train', 'te... 'validation']
E         At index 0 diff: 'test' != 'train'
E         Full diff:
E         - ['train', 'test', 'validation']
E         + ['test', 'train', 'validation']
```
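A sketch of one order-insensitive variant of the failing assertion, as a possible fix for the flaky split ordering:

```python
# Compare split names without depending on the order in which the bulk edit
# serialized the splits in the dataset metadata.
assert sorted(info.splits.keys()) == sorted(expected_splits)
```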
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5202/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5200/comments
https://api.github.com/repos/huggingface/datasets/issues/5200/events
https://github.com/huggingface/datasets/issues/5200
1,435,831,559
I_kwDODunzps5VlQ0H
5,200
Some links to canonical datasets in the docs are outdated
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
1
"2022-11-04T10:06:21"
"2022-11-07T18:40:20"
"2022-11-07T18:40:20"
CONTRIBUTOR
null
As we don't have canonical datasets in the github repo anymore, some old links to them don't work. I don't know how many of them there are; I found a link to SuperGlue here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, and there are probably more. These links should be replaced by links to the corresponding datasets on the Hub.
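For reference, a rough sketch of how such links could be located across the docs; the `docs/` path, the `.mdx` extension, and the URL pattern are illustrative assumptions, not the exact layout:

```python
import re
from pathlib import Path

# Hypothetical pattern for links into the removed in-repo datasets/ folder.
OLD_LINK = re.compile(r"github\.com/huggingface/datasets/(?:blob|tree)/(?:master|main)/datasets/\S+")

for path in Path("docs").rglob("*.mdx"):
    for match in OLD_LINK.finditer(path.read_text(encoding="utf-8")):
        print(f"{path}: {match.group(0)}")
```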
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5200/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5193/comments
https://api.github.com/repos/huggingface/datasets/issues/5193/events
https://github.com/huggingface/datasets/issues/5193
1,433,883,780
I_kwDODunzps5Vd1SE
5,193
"One or several metadata. were found, but not in the same directory or in a parent directory"
{ "login": "lambda-science", "id": 20109584, "node_id": "MDQ6VXNlcjIwMTA5NTg0", "avatar_url": "https://avatars.githubusercontent.com/u/20109584?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lambda-science", "html_url": "https://github.com/lambda-science", "followers_url": "https://api.github.com/users/lambda-science/followers", "following_url": "https://api.github.com/users/lambda-science/following{/other_user}", "gists_url": "https://api.github.com/users/lambda-science/gists{/gist_id}", "starred_url": "https://api.github.com/users/lambda-science/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lambda-science/subscriptions", "organizations_url": "https://api.github.com/users/lambda-science/orgs", "repos_url": "https://api.github.com/users/lambda-science/repos", "events_url": "https://api.github.com/users/lambda-science/events{/privacy}", "received_events_url": "https://api.github.com/users/lambda-science/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
"2022-11-02T22:46:25"
"2022-11-03T13:39:16"
"2022-11-03T13:35:44"
NONE
null
### Describe the bug

When loading my own dataset, I get an error. Here is my dataset link: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data

And the error after loading with:

```python
from datasets import load_dataset
load_dataset("corentinm7/MyoQuant-SDH-Data")
```

```python
Downloading readme: 100%|██████████| 3.34k/3.34k [00:00<00:00, 4.45MB/s]
Using custom data configuration SDH_16k-53e7301a92ab0025
Downloading and preparing dataset None/SDH_16k to /home/corentin/.cache/huggingface/datasets/corentinm7___imagefolder/SDH_16k-53e7301a92ab0025/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f...
Downloading data: 100%|██████████| 3.28M/3.28M [00:00<00:00, 4.31MB/s]
Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.75s/it]
Downloading data: 100%|██████████| 1.13G/1.13G [00:15<00:00, 74.3MB/s]
Downloading data files: 100%|██████████| 1/1 [00:16<00:00, 16.09s/it]
Extracting data files: 100%|██████████| 1/1 [00:13<00:00, 13.16s/it]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/load.py", line 1742, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 814, in download_and_prepare
    self._download_and_prepare(
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1423, in _download_and_prepare
    super()._download_and_prepare(
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 905, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1374, in _prepare_split
    for key, record in logging.tqdm(
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/home/corentin/code-project/hugging_face_play/.venv/lib/python3.10/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 394, in _generate_examples
    raise ValueError(
ValueError: One or several metadata. were found, but not in the same directory or in a parent directory of /home/corentin/.cache/huggingface/datasets/downloads/extracted/60c4aa8d4da3065bb3d310de4373dffd73bd4dc331aedcb4ee867febe4fdb7cd/validation/sick/2_CG_SDH_TAM_Bin1cKO_ko_pla_4_1640.tif.
```

However the test command works fine:

```
datasets-cli test hugging_face_play/ds_test/SDH_16k.py --save_info --all_configs --force_redownload
```

```
Using custom data configuration SDH_16k
Testing builder 'SDH_16k' (1/1)
Downloading and preparing dataset sdh_16k/SDH_16k to /home/corentin/.cache/huggingface/datasets/sdh_16k/SDH_16k/1.0.0/21b584239a638aeeda33cba1ac2ca4869d48e4b4f20fb22274d5a5ddc487659d...
Downloading data: 100%|██████████| 1.13G/1.13G [00:14<00:00, 76.5MB/s]
Downloading data files: 100%|██████████| 1/1 [00:15<00:00, 15.66s/it]
Downloading data: 100%|██████████| 3.28M/3.28M [00:02<00:00, 1.44MB/s]
Downloading data files: 100%|██████████| 1/1 [00:03<00:00, 3.21s/it]
Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 11586.48it/s]
Extracting data files: 100%|██████████| 1/1 [00:13<00:00, 13.42s/it]
Dataset sdh_16k downloaded and prepared to /home/corentin/.cache/huggingface/datasets/sdh_16k/SDH_16k/1.0.0/21b584239a638aeeda33cba1ac2ca4869d48e4b4f20fb22274d5a5ddc487659d. Subsequent calls will reuse this data.
100%|██████████| 3/3 [00:00<00:00, 605.27it/s]
Dataset card saved at hugging_face_play/ds_test/README.md
Test successful.
```

### Steps to reproduce the bug

Simply run in Python:

```python
from datasets import load_dataset
load_dataset("corentinm7/MyoQuant-SDH-Data")
```

### Expected behavior

As the test command worked, this error should not appear.

### Environment info

- `datasets` version: 2.6.1
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyArrow version: 10.0.0
- Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5193/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5190/comments
https://api.github.com/repos/huggingface/datasets/issues/5190/events
https://github.com/huggingface/datasets/issues/5190
1,433,014,626
I_kwDODunzps5VahFi
5,190
`path` is `None` when downloading a custom audio dataset from the Hub
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-11-02T11:51:25"
"2022-11-02T12:55:02"
"2022-11-02T12:55:02"
MEMBER
null
### Describe the bug

I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature described in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub. Locally, I can see the `audio.path` feature is of the expected form `path/to/data_dir`, but when I download the dataset from the Hub, I see `audio.path` is `None`.

Here's an example:

```python
from datasets import load_dataset

ds = load_dataset("lewtun/audio-test-push")
ds["train"][0]
# {
#     "audio": {
#         "path": None, <-- Is this expected?
#         "array": array(
#             [
#                 3.97140226e-07,
#                 7.30310290e-07,
#                 7.56406735e-07,
#                 ...,
#                 -1.19636677e-01,
#                 -1.16811886e-01,
#                 -1.12441722e-01,
#             ]
#         ),
#         "sampling_rate": 44100,
#     },
#     "song_id": 0,
#     "genre_id": 0,
#     "genre": "Electronic",
# }
```

Is this expected behaviour? If yes, feel free to close this issue as it's not a true bug then :)

### Steps to reproduce the bug

1. Create an audio dataset with the `audiofolder` feature
2. Push the dataset to the Hub with `push_to_hub()`
3. Download the Hub dataset and inspect the `audio.path` feature

### Expected behavior

`audio.path` points to the file associated with the audio data.

### Environment info

- `datasets` version: 2.6.2.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
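One quick way to see what is actually stored for the `audio` column is to disable decoding with `Audio(decode=False)`; this is only a diagnostic sketch (whether `path` being `None` is intended is exactly what this issue asks):

```python
from datasets import Audio, load_dataset

ds = load_dataset("lewtun/audio-test-push", split="train")

# With decoding disabled, the raw storage is returned as a dict with
# "path" and "bytes" keys, showing whether the audio was embedded as
# bytes on push (in which case "path" can legitimately be None).
raw = ds.cast_column("audio", Audio(decode=False))
print(raw[0]["audio"]["path"], type(raw[0]["audio"]["bytes"]))
```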
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5190/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5189/comments
https://api.github.com/repos/huggingface/datasets/issues/5189/events
https://github.com/huggingface/datasets/issues/5189
1,432,769,143
I_kwDODunzps5VZlJ3
5,189
Reduce friction in tabular dataset workflows by eliminating splits when a dataset is loaded
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
33
"2022-11-02T09:15:02"
"2022-12-06T12:13:17"
null
CONTRIBUTOR
null
### Feature request

Sorry for the cryptic name, but I'd like to explain using code. When I want to load a specific dataset from a repository (for instance, this one: https://huggingface.co/datasets/inria-soda/tabular-benchmark):

```python
from datasets import load_dataset

dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
print(next(iter(dataset["train"])))
```

The `datasets` library is essentially designed for people who'd like to use benchmark datasets of various modalities to fine-tune their models, and these benchmark datasets usually have pre-defined train and test splits. However, in tabular workflows, fixed train and test splits often lead to the model overfitting to the validation split, so users prefer validation techniques like `StratifiedKFoldCrossValidation`, or `GridSearchCrossValidation` when they tune hyperparameters; in practice they create their own splits. Even [in this paper](https://hal.archives-ouvertes.fr/hal-03723551) a benchmark is introduced, but the splitting is done by the authors. It's a bit confusing for the average tabular user to load a dataset and see `"train"`, so it would be nice if we did not load the dataset into a split called `train` by default.

```diff
from datasets import load_dataset

dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
-print(next(iter(dataset["train"])))
+print(next(iter(dataset)))
```

### Motivation

I explained it above 😅

### Your contribution

I think this is quite a big change that seems small (e.g. how do we determine which datasets should not be loaded into a `train` split?), so it's best if we discuss it first!
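To make the workflow above concrete, here is a minimal sketch of the kind of custom splitting tabular users do after loading; it assumes scikit-learn and a categorical label column named `target`, both illustrative assumptions:

```python
from datasets import load_dataset
from sklearn.model_selection import StratifiedKFold

# Load the single CSV and move to pandas for a tabular workflow.
ds = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"])["train"]
df = ds.to_pandas()

# Define folds instead of relying on a fixed train/test split.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, valid_idx in skf.split(df, df["target"]):
    train_df, valid_df = df.iloc[train_idx], df.iloc[valid_idx]
    ...  # fit and evaluate per fold
```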
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5189/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5186/comments
https://api.github.com/repos/huggingface/datasets/issues/5186/events
https://github.com/huggingface/datasets/issues/5186
1,432,045,011
I_kwDODunzps5VW0XT
5,186
Incorrect error message when Dataset.from_sql fails and sqlalchemy not installed
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
3
"2022-11-01T20:25:51"
"2022-11-15T18:24:39"
"2022-11-15T18:24:39"
CONTRIBUTOR
null
### Describe the bug

When calling `Dataset.from_sql` (in my case, with sqlite3), it fails with the message `ValueError: Please pass `features` or at least one example when writing data` when I don't have `sqlalchemy` installed.

### Steps to reproduce the bug

Make a new sqlite db with `sqlite3` and `pandas` from a remote [URL](https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv).

```python
import sqlite3

import pandas as pd
from datasets import Dataset

conn = sqlite3.connect('us_covid_data.db')
df = pd.read_csv('https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv')
df.to_sql('states', conn, if_exists='replace')
```

Then if you try to query this DB like this:

```python
ds = Dataset.from_sql('''SELECT * from states WHERE state=="New York";''', "sqlite:///us_covid_data.db")
```

You run into the error I described above:

```
ValueError: Please pass `features` or at least one example when writing data
```

However, if you try to pass features, as the error suggests, then you get an error that tells you the underlying problem...

```python
from datasets import Dataset, Features, Value

features = Features({
    'date': Value('date32'),
    'label': Value('string'),
    'fips': Value('int32'),
    'cases': Value('int32'),
    'deaths': Value('int32')
})

ds = Dataset.from_sql(
    '''SELECT * from states WHERE state=="New York";''',
    "sqlite:///us_covid_data.db",
    features=features
)
```

Which results in the actual underlying error: `ImportError: Using URI string without sqlalchemy installed.`

### Expected behavior

Instead of the `ValueError` about needing to pass features, we should provide the actual underlying error about not having SQLAlchemy installed when it isn't found in the environment.

### Environment info

- `datasets` version: 2.6.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.10
- PyArrow version: 10.0.0
- Pandas version: 1.2.5
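A sketch of the kind of early check this report is asking for, surfacing the missing dependency up front instead of the misleading `ValueError` (illustrative only, not the actual patch):

```python
import importlib.util

def require_sqlalchemy_for_uri(con):
    # Hypothetical helper: if `con` is a URI string, pandas needs
    # sqlalchemy to open it, so fail fast with the real cause.
    if isinstance(con, str) and importlib.util.find_spec("sqlalchemy") is None:
        raise ImportError("Using a URI string requires sqlalchemy: pip install sqlalchemy")
```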
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5186/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5185/comments
https://api.github.com/repos/huggingface/datasets/issues/5185/events
https://github.com/huggingface/datasets/issues/5185
1,432,021,611
I_kwDODunzps5VWupr
5,185
Allow passing a subset of output features to Dataset.map
{ "login": "sanderland", "id": 48946947, "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanderland", "html_url": "https://github.com/sanderland", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "organizations_url": "https://api.github.com/users/sanderland/orgs", "repos_url": "https://api.github.com/users/sanderland/repos", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "received_events_url": "https://api.github.com/users/sanderland/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
"2022-11-01T20:07:20"
"2022-11-01T20:07:34"
null
CONTRIBUTOR
null
### Feature request

Currently, `map` does one of two things to the features (if I'm not mistaken):

* when you do not pass `features`, output types are assumed to be equal to the input types if they can be cast, and are inferred otherwise
* when you pass a full specification of `features`, the output features are set to this

However, sometimes you want to pass just some of the output types, particularly when the first of these modes produces an incorrect type. This currently crashes.

### Motivation

To give a little background: this problem appears when converting labels to ids, where the labels happen to be floats rather than strings.

Consider the following use of `map` to convert from float to int:

```python
data = Dataset.from_dict({'y': [1.0, 2.0, 3.0]})
mapped = data.map(lambda r: {'y': int(r['y'])})
mapped['y']  # is floats, not ints
```

The result is a float again, since after the mapping operation the old datatypes are forced back onto the data. Passing `features=Features({"y": Value(dtype="int64")})` to `map` works in principle, but extending it a little, e.g.

```python
def format_data(r):
    return {**tokenizer(r["text"]), "y": int(r["y"])}

data = Dataset.from_dict({"y": [1.0, 2.0, 3.0], "text": ["one", "two", "three"]})
mapped = data.map(
    format_data,
    features=Features({'y': Value(dtype="int64")}),
    remove_columns=["text"],
)
```

results in a crash in dataset internals, as it expects either all or no output features to be specified. Of course one can pass a full feature specification, but this becomes tokenizer-specific and very awkward.

### Your contribution

I've looked at `write_batch`, and particularly at `col_type = features[col] if features else None`; checking for `col in features` here makes it fail elsewhere, but the structure makes it hard to understand how and why. I do not think I would have the time myself to get to the bottom of this anytime soon.
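A workaround that exists today is to build the full `Features` spec from the dataset's inferred features and override only the columns that need it; a sketch (columns dropped via `remove_columns` must be removed from the spec by hand):

```python
from datasets import Dataset, Features, Value

data = Dataset.from_dict({"y": [1.0, 2.0, 3.0], "text": ["one", "two", "three"]})

# Start from the inferred features and override only "y".
features = dict(data.features)
features["y"] = Value("int64")
features.pop("text")  # map() below drops "text", so the spec must too

mapped = data.map(
    lambda r: {"y": int(r["y"])},
    features=Features(features),
    remove_columns=["text"],
)
print(mapped.features["y"])  # Value(dtype='int64')
```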
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5185/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5183/comments
https://api.github.com/repos/huggingface/datasets/issues/5183/events
https://github.com/huggingface/datasets/issues/5183
1,431,418,066
I_kwDODunzps5VUbTS
5,183
Loading an external dataset in a format similar to conll2003
{ "login": "Taghreed7878", "id": 112555442, "node_id": "U_kgDOBrV1sg", "avatar_url": "https://avatars.githubusercontent.com/u/112555442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Taghreed7878", "html_url": "https://github.com/Taghreed7878", "followers_url": "https://api.github.com/users/Taghreed7878/followers", "following_url": "https://api.github.com/users/Taghreed7878/following{/other_user}", "gists_url": "https://api.github.com/users/Taghreed7878/gists{/gist_id}", "starred_url": "https://api.github.com/users/Taghreed7878/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Taghreed7878/subscriptions", "organizations_url": "https://api.github.com/users/Taghreed7878/orgs", "repos_url": "https://api.github.com/users/Taghreed7878/repos", "events_url": "https://api.github.com/users/Taghreed7878/events{/privacy}", "received_events_url": "https://api.github.com/users/Taghreed7878/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-11-01T13:18:29"
"2022-11-02T11:57:50"
"2022-11-02T11:57:50"
NONE
null
I'm trying to load a custom dataset into a `Dataset` object. It's similar to conll2003 but with 2 columns only (word, entity). I used the following script:

```python
features = datasets.Features(
    {"tokens": datasets.Sequence(datasets.Value("string")),
     "ner_tags": datasets.Sequence(
         datasets.features.ClassLabel(
             names=["B-PER", .... etc.]))}
)

from datasets import Dataset

INPUT_COLUMNS = "tokens ner_tags".split(" ")

def read_conll(file):
    #all_labels = []
    example = {col: [] for col in INPUT_COLUMNS}
    idx = 0
    with open(file) as f:
        for line in f:
            if line:
                if line.startswith("-DOCSTART-") and example["tokens"] != []:
                    print(idx, example)
                    yield idx, example
                    idx += 1
                    example = {col: [] for col in INPUT_COLUMNS}
                elif line == "\n" or (line.startswith("-DOCSTART-") and example["tokens"] == []):
                    continue
                else:
                    row_cols = line.split(" ")
                    for i, col in enumerate(example):
                        example[col] = row_cols[i].rstrip()

dset = Dataset.from_generator(read_conll, gen_kwargs={"file": "/content/new_train.txt"}, features=features)
```

The following error happened:

```
/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in <genexpr>(.0)
    285     for key in unique_values(itertools.chain(*dicts)):  # set merge all keys
    286         # Will raise KeyError if the dict don't have the same keys
--> 287         yield key, tuple(d[key] for d in dicts)
    288
TypeError: tuple indices must be integers or slices, not str
```

What does this mean and what should I modify?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5183/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5182
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5182/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5182/comments
https://api.github.com/repos/huggingface/datasets/issues/5182/events
https://github.com/huggingface/datasets/issues/5182
1,431,029,547
I_kwDODunzps5VS8cr
5,182
Add notebook / other resource links to the task-specific data loading guides
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-11-01T07:57:26"
"2022-11-03T01:49:57"
"2022-11-03T01:49:57"
MEMBER
null
Does it make sense to include links to notebooks / scripts that show how to use a dataset for training / fine-tuning a model? For example, in https://huggingface.co/docs/datasets/image_classification we could include a mention of https://github.com/huggingface/notebooks/blob/main/examples/image_classification.ipynb. Applies to https://huggingface.co/docs/datasets/object_detection as well. Cc: @osanseviero @nateraw
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5182/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5181/comments
https://api.github.com/repos/huggingface/datasets/issues/5181/events
https://github.com/huggingface/datasets/issues/5181
1,431,027,102
I_kwDODunzps5VS72e
5,181
Add a guide for semantic segmentation
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-11-01T07:54:50"
"2022-11-04T18:23:36"
"2022-11-04T18:23:36"
MEMBER
null
Currently, we have these guides for object detection and image classification: * https://huggingface.co/docs/datasets/object_detection * https://huggingface.co/docs/datasets/image_classification I am proposing adding a similar guide for semantic segmentation. I am happy to contribute a PR for it. Cc: @osanseviero @nateraw
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5181/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5180
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5180/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5180/comments
https://api.github.com/repos/huggingface/datasets/issues/5180/events
https://github.com/huggingface/datasets/issues/5180
1,431,012,438
I_kwDODunzps5VS4RW
5,180
An example or recommendations for creating large image datasets?
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2022-11-01T07:38:38"
"2022-11-02T10:17:11"
null
MEMBER
null
I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do? As a user, I was wondering if we have this support for creating large image datasets. If so, we should mention that [here](https://huggingface.co/docs/datasets/image_dataset). Cc @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5180/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5179/comments
https://api.github.com/repos/huggingface/datasets/issues/5179/events
https://github.com/huggingface/datasets/issues/5179
1,430,826,100
I_kwDODunzps5VSKx0
5,179
`map()` fails midway due to format incompatibility
{ "login": "sayakpaul", "id": 22957388, "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sayakpaul", "html_url": "https://github.com/sayakpaul", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "repos_url": "https://api.github.com/users/sayakpaul/repos", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
"2022-11-01T03:57:59"
"2022-11-08T11:35:26"
"2022-11-08T11:35:26"
MEMBER
null
### Describe the bug

I am using the `emotion` dataset from the Hub for sequence classification. After training the model, I am using it to generate predictions for all the entries present in the `validation` split of the dataset.

```py
def get_test_accuracy(model):
    def fn(batch):
        inputs = {k: v.to(device) for k, v in batch.items()
                  if k in tokenizer.model_input_names}
        with torch.no_grad():
            output = model(**inputs)
        pred_label = torch.argmax(output.logits, axis=-1)
        return {"predicted_label": pred_label.cpu().numpy()}
    return fn
```

This is how `get_test_accuracy()` is being used:

```py
emotions = load_dataset("emotion")

def tokenize(batch):
    return tokenizer(batch["text"], padding=True, truncation=True)

emotions_encoded = emotions.map(tokenize, batched=True)
emotions_encoded.set_format("torch", columns=["input_ids", "attention_mask", "label"])

new_dataset = emotions_encoded["validation"].map(
    accuracy_fn, batched=True, batch_size=128
)
```

Complete code is available in the Colab Notebook provided below.

The `map()` process fails midway giving:

```shell
AttributeError                            Traceback (most recent call last)
<ipython-input-8-ad24ac288eb4> in <module>
      2
      3 new_dataset = emotions_encoded["validation"].map(
----> 4     accuracy_fn, batched=True, batch_size=128
      5 )

7 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
   2588                 new_fingerprint=new_fingerprint,
   2589                 disable_tqdm=disable_tqdm,
-> 2590                 desc=desc,
   2591             )
   2592         else:

/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
    582             self: "Dataset" = kwargs.pop("self")
    583         # apply actual function
--> 584         out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    585         datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    586         for dataset in datasets:

/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
    549         }
    550         # apply actual function
--> 551         out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    552         datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    553         # re-apply format to the output

/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
    478         # Call actual function
    479
--> 480         out = func(self, *args, **kwargs)
    481
    482         # Update fingerprint of in-place transforms + update in-place history of transforms

/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
   2970                         indices,
   2971                         check_same_num_examples=len(input_dataset.list_indexes()) > 0,
-> 2972                         offset=offset,
   2973                     )
   2974                 except NumExamplesMismatchError:

/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
   2850             if with_rank:
   2851                 additional_args += (rank,)
-> 2852             processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
   2853             if update_data is None:
   2854                 # Check if the function returns updated examples

<ipython-input-6-4e0d280426f6> in fn(batch)
      1 def get_test_accuracy(model):
      2     def fn(batch):
----> 3         inputs = {k:v.to(device) for k,v in batch.items()
      4                   if k in tokenizer.model_input_names}
      5         with torch.no_grad():

<ipython-input-6-4e0d280426f6> in <dictcomp>(.0)
      2     def fn(batch):
      3         inputs = {k:v.to(device) for k,v in batch.items()
----> 4             if k in tokenizer.model_input_names}
      5         with torch.no_grad():
      6         output = model(**inputs)

AttributeError: 'list' object has no attribute 'to'
```

As you'd notice in the notebook, the process fails _midway_ and not at the beginning. Is this expected?

### Steps to reproduce the bug

Colab Notebook: https://colab.research.google.com/gist/sayakpaul/d1570d537faf39040d02d77b1ed7de07/scratchpad.ipynb

### Expected behavior

The mapping process should complete as is. If you switch the `split` to `test` it works as expected.

### Environment info

Colab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5179/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5178
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5178/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5178/comments
https://api.github.com/repos/huggingface/datasets/issues/5178/events
https://github.com/huggingface/datasets/issues/5178
1,430,800,810
I_kwDODunzps5VSEmq
5,178
Unable to download the Chinese `wikipedia`: dumpstatus.json not found!
{ "login": "beyondguo", "id": 37113676, "node_id": "MDQ6VXNlcjM3MTEzNjc2", "avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/beyondguo", "html_url": "https://github.com/beyondguo", "followers_url": "https://api.github.com/users/beyondguo/followers", "following_url": "https://api.github.com/users/beyondguo/following{/other_user}", "gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}", "starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions", "organizations_url": "https://api.github.com/users/beyondguo/orgs", "repos_url": "https://api.github.com/users/beyondguo/repos", "events_url": "https://api.github.com/users/beyondguo/events{/privacy}", "received_events_url": "https://api.github.com/users/beyondguo/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
"2022-11-01T03:17:55"
"2022-11-02T08:27:15"
"2022-11-02T08:24:29"
NONE
null
### Describe the bug

I tried:

```python
data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')
```

and

```python
data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')
```

but both raised:

`FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json`

The full report is:

```
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-13-d07c5021090c> in <module>
      1 from datasets import load_dataset
      2
----> 3 data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')

/opt/conda/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
   1740
   1741     # Download and prepare data
-> 1742     builder_instance.download_and_prepare(
   1743         download_config=download_config,
   1744         download_mode=download_mode,

/opt/conda/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
    812             **download_and_prepare_kwargs,
    813         }
--> 814         self._download_and_prepare(
    815             dl_manager=dl_manager,
    816             verify_infos=verify_infos,

/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs)
   1645             options=beam_options,
   1646         )
-> 1647         super()._download_and_prepare(
   1648             dl_manager, verify_infos=False, pipeline=pipeline, **prepare_splits_kwargs
   1649         )  # TODO handle verify_infos in beam datasets

/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    881         split_dict = SplitDict(dataset_name=self.name)
    882         split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 883         split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
    884
    885         # Checksums verification

~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py in _split_generators(self, dl_manager, pipeline)
    943         info_url = _base_url(lang) + _INFO_FILE
    944         # Use dictionary since testing mock always returns the same result.
--> 945         downloaded_files = dl_manager.download_and_extract({"info": info_url})
    946
    947         xml_urls = []

/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls)
    431             extracted_path(s): `str`, extracted paths of given URL(s).
    432         """
--> 433         return self.extract(self.download(url_or_urls))
    434
    435     def get_recorded_sizes_checksums(self):

/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download(self, url_or_urls)
    308
    309         start_time = datetime.now()
--> 310         downloaded_path_or_paths = map_nested(
    311             download_func,
    312             url_or_urls,

/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)
    427         num_proc = 1
    428     if num_proc <= 1 or len(iterable) < parallel_min_length:
--> 429         mapped = [
    430             _single_map_nested((function, obj, types, None, True, None))
    431             for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)

/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
    428     if num_proc <= 1 or len(iterable) < parallel_min_length:
    429         mapped = [
--> 430             _single_map_nested((function, obj, types, None, True, None))
    431             for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
    432         ]

/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
    329     # Singleton first to spare some computation
    330     if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 331         return function(data_struct)
    332
    333     # Reduce logging to keep things readable in multiprocessing with tqdm

/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config)
    335             # append the relative path to the base_path
    336             url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 337         return cached_path(url_or_filename, download_config=download_config)
    338
    339     def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):

/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
    186     if is_remote_url(url_or_filename):
    187         # URL, so get it from the cache (downloading if necessary)
--> 188         output_path = get_from_cache(
    189             url_or_filename,
    190             cache_dir=cache_dir,

/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
    533             )
    534         elif response is not None and response.status_code == 404:
--> 535             raise FileNotFoundError(f"Couldn't find file at {url}")
    536         _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
    537         if head_error is not None:

FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json
```

### Steps to reproduce the bug

```python
data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')
```

### Expected behavior

Download the data.

### Environment info

Python 3.6, latest `datasets`/`transformers` versions
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5178/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5176/comments
https://api.github.com/repos/huggingface/datasets/issues/5176/events
https://github.com/huggingface/datasets/issues/5176
1,430,214,539
I_kwDODunzps5VP1eL
5,176
prepare dataset for cloud storage doesn't work
{ "login": "largenn", "id": 27285078, "node_id": "MDQ6VXNlcjI3Mjg1MDc4", "avatar_url": "https://avatars.githubusercontent.com/u/27285078?v=4", "gravatar_id": "", "url": "https://api.github.com/users/largenn", "html_url": "https://github.com/largenn", "followers_url": "https://api.github.com/users/largenn/followers", "following_url": "https://api.github.com/users/largenn/following{/other_user}", "gists_url": "https://api.github.com/users/largenn/gists{/gist_id}", "starred_url": "https://api.github.com/users/largenn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/largenn/subscriptions", "organizations_url": "https://api.github.com/users/largenn/orgs", "repos_url": "https://api.github.com/users/largenn/repos", "events_url": "https://api.github.com/users/largenn/events{/privacy}", "received_events_url": "https://api.github.com/users/largenn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2022-10-31T17:28:57"
"2022-11-09T13:45:16"
null
NONE
null
### Describe the bug

Following the [documentation](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) and [this PR](https://github.com/huggingface/datasets/pull/4724), I was downloading and storing a huggingface dataset to cloud storage:

```python
from datasets import load_dataset, load_dataset_builder

dataset = load_dataset_builder("wikipedia", "20220301.en", cache_dir='LOCAL_PATH')
dataset.download_and_prepare("gs://Bucket_NAME", file_format="parquet")
```

The code above successfully downloads the dataset, but `download_and_prepare` then raises:

> Traceback (most recent call last):
>   File "/shared/zhuiai/research/wiki/wiki/gcsfs.py", line 12, in <module>
>     dataset.download_and_prepare("gs://upgen/dataset/wiki", file_format="parquet")
>   File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/datasets/builder.py", line 671, in download_and_prepare
>     fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options)
>   File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/core.py", line 635, in get_fs_token_paths
>     cls = get_filesystem_class(protocol)
>   File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 234, in get_filesystem_class
>     register_implementation(protocol, _import_class(bit["class"]))
>   File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 257, in _import_class
>     mod = importlib.import_module(mod)
>   File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/importlib/__init__.py", line 127, in import_module
>     return _bootstrap._gcd_import(name[level:], package, level)
>   File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
>   File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
>   File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
>   File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
>   File "<frozen importlib._bootstrap_external>", line 850, in exec_module
>   File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
>   File "/shared/zhuiai/research/wiki/wiki/gcsfs.py", line 12, in <module>
>     dataset.download_and_prepare("gs://upgen/dataset/wiki", file_format="parquet")
>   File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/datasets/builder.py", line 671, in download_and_prepare
>     fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options)
>   File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/core.py", line 635, in get_fs_token_paths
>     cls = get_filesystem_class(protocol)
>   File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 234, in get_filesystem_class
>     register_implementation(protocol, _import_class(bit["class"]))
>   File "/shared/zhuiai/.conda/envs/wiki/lib/python3.9/site-packages/fsspec/registry.py", line 258, in _import_class
>     return getattr(mod, name)
> AttributeError: partially initialized module 'gcsfs' has no attribute 'GCSFileSystem' (most likely due to a circular import)

### Steps to reproduce the bug

1. `pip install datasets==2.6.1 gcsfs==2022.8.2`
2. Run the following code to reproduce the issue (change `LOCAL_PATH` and `Bucket_NAME` accordingly):

```python
from datasets import load_dataset, load_dataset_builder

dataset = load_dataset_builder("wikipedia", "20220301.en", cache_dir='LOCAL_PATH')
dataset.download_and_prepare("gs://Bucket_NAME", file_format="parquet")
```

### Expected behavior

The dataset downloads successfully and is uploaded to cloud storage.

### Environment info

- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-25-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.5.1
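Worth noting: the traceback shows the reproducer script itself is `/shared/zhuiai/research/wiki/wiki/gcsfs.py`. A local module named `gcsfs.py` shadows the installed `gcsfs` package and can produce exactly this circular-import `AttributeError`. A minimal sketch of the same steps from a differently named script (the file name is just an example):

```python
# Save as e.g. prepare_wiki.py, NOT gcsfs.py, which would shadow the gcsfs package.
from datasets import load_dataset_builder

builder = load_dataset_builder("wikipedia", "20220301.en", cache_dir="LOCAL_PATH")
builder.download_and_prepare("gs://Bucket_NAME", file_format="parquet")
```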
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5176/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5176/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5175/comments
https://api.github.com/repos/huggingface/datasets/issues/5175/events
https://github.com/huggingface/datasets/issues/5175
1,428,696,231
I_kwDODunzps5VKCyn
5,175
Loading an external NER dataset
{ "login": "Taghreed7878", "id": 112555442, "node_id": "U_kgDOBrV1sg", "avatar_url": "https://avatars.githubusercontent.com/u/112555442?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Taghreed7878", "html_url": "https://github.com/Taghreed7878", "followers_url": "https://api.github.com/users/Taghreed7878/followers", "following_url": "https://api.github.com/users/Taghreed7878/following{/other_user}", "gists_url": "https://api.github.com/users/Taghreed7878/gists{/gist_id}", "starred_url": "https://api.github.com/users/Taghreed7878/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Taghreed7878/subscriptions", "organizations_url": "https://api.github.com/users/Taghreed7878/orgs", "repos_url": "https://api.github.com/users/Taghreed7878/repos", "events_url": "https://api.github.com/users/Taghreed7878/events{/privacy}", "received_events_url": "https://api.github.com/users/Taghreed7878/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-10-30T09:31:55"
"2022-11-01T13:15:49"
"2022-11-01T13:15:49"
NONE
null
I need to use huggingface datasets to load a custom dataset similar to conll2003, but with more entities, where each file contains only two columns: word and NER tag. I tried this code snippet that I found here as an answer to a similar issue:

```python
from datasets import Dataset

INPUT_COLUMNS = "ID Text NER".split()

def read_conll(file):
    example = {col: [] for col in INPUT_COLUMNS}
    idx = 0
    with open(file) as f:
        for line in f:
            if line.startswith("-DOCSTART-") or line == "\n" or not line:
                if example[next(iter(example))]:
                    yield idx, example
                    idx += 1
                    example = {col: [] for col in INPUT_COLUMNS}
            else:
                row_cols = line.split()
                for i, col in enumerate(example):
                    example[col] = row_cols[i].rstrip()

train = Dataset.from_generator(read_conll, gen_kwargs={"file": "some_path"})
```

But the following error happened:

```
ValueError: Please pass `features` or at least one example when writing data
```
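For what it's worth, a hedged sketch of a generator that works with `Dataset.from_generator`: the generator should yield plain example dicts rather than `(idx, example)` tuples, and the per-column lists need to be appended to rather than overwritten. The `tokens`/`ner_tags` column names below are assumptions for a two-column word/tag file.

```python
from datasets import Dataset

def read_conll(file):
    tokens, ner_tags = [], []
    with open(file, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line.startswith("-DOCSTART-") or not line:
                # Blank line or document marker ends the current sentence.
                if tokens:
                    yield {"tokens": tokens, "ner_tags": ner_tags}
                    tokens, ner_tags = [], []
            else:
                word, tag = line.split()[:2]
                tokens.append(word)
                ner_tags.append(tag)
    if tokens:  # flush the last sentence if the file doesn't end with a blank line
        yield {"tokens": tokens, "ner_tags": ner_tags}

train = Dataset.from_generator(read_conll, gen_kwargs={"file": "some_path"})
```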
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5175/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5172
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5172/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5172/comments
https://api.github.com/repos/huggingface/datasets/issues/5172/events
https://github.com/huggingface/datasets/issues/5172
1,425,523,114
I_kwDODunzps5U98Gq
5,172
Inconsistency behavior between handling local file protocol and other FS protocols
{ "login": "leoleoasd", "id": 37735580, "node_id": "MDQ6VXNlcjM3NzM1NTgw", "avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leoleoasd", "html_url": "https://github.com/leoleoasd", "followers_url": "https://api.github.com/users/leoleoasd/followers", "following_url": "https://api.github.com/users/leoleoasd/following{/other_user}", "gists_url": "https://api.github.com/users/leoleoasd/gists{/gist_id}", "starred_url": "https://api.github.com/users/leoleoasd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leoleoasd/subscriptions", "organizations_url": "https://api.github.com/users/leoleoasd/orgs", "repos_url": "https://api.github.com/users/leoleoasd/repos", "events_url": "https://api.github.com/users/leoleoasd/events{/privacy}", "received_events_url": "https://api.github.com/users/leoleoasd/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2022-10-27T12:03:20"
"2022-10-27T12:05:19"
null
NONE
null
### Describe the bug

These lines are used during `load_from_disk`:

```python
if is_remote_filesystem(fs):
    dest_dataset_dict_path = extract_path_from_uri(dataset_dict_path)
else:
    fs = fsspec.filesystem("file")
    dest_dataset_dict_path = dataset_dict_path
```

If a local FS is given, the URL itself is used as the path name. If a remote FS is given, the path component of the URL is used. This is inconsistent behavior when handling a file: with a remote FS you must write a URL, but with a local FS, even if you pass `LocalFileSystem` as `fs`, you still can't use a `file://` URL; it is treated as a directory named `file:`.

### Steps to reproduce the bug

```python
import fsspec.core

url = "hdfs:///somewhere/MNIST"
# url = "file:///somewhere/MNIST"
fs, path = fsspec.core.url_to_fs(url)
fs.ls(path)               # this will always work
load_from_disk(path, fs)  # only works for local FS
load_from_disk(url, fs)   # only works for remote FS
```

### Expected behavior

One of `url` or `path` should always work.

I think extracting the path from the given URL with `fsspec.core.url_to_fs`, instead of using `is_remote_filesystem` and `extract_path_from_uri`, would fix this, since:

```python
fsspec.core.url_to_fs("/somewhere/MNIST")         # -> LocalFs, '/somewhere/MNIST'
fsspec.core.url_to_fs("file:///somewhere/MNIST")  # -> LocalFs, '/somewhere/MNIST'
fsspec.core.url_to_fs("hdfs:///somewhere/MNIST")  # -> HDFS, '/somewhere/MNIST'
```

and

```python
fsspec.core.url_to_fs("file:///somewhere/MNIST") == fsspec.core.url_to_fs("/somewhere/MNIST")
```

In theory, this wouldn't break anything, since giving a local path or a remote URI still works. It would only affect local URIs (making them work too).

### Environment info

- `datasets` version: 2.5.1
- Platform: Linux-5.4.205.1**HIDDEN**
- Python version: 3.7.10
- PyArrow version: 8.0.0
- Pandas version: 1.2.4
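A minimal sketch of the proposed normalization (an illustration of the idea only, not the actual `datasets` code):

```python
import fsspec.core

def resolve(path_or_uri, storage_options=None):
    # Let fsspec resolve both the filesystem and the in-filesystem path,
    # so plain paths and "file://" URIs behave the same as remote URIs.
    fs, path = fsspec.core.url_to_fs(path_or_uri, **(storage_options or {}))
    return fs, path
```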
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5172/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5170
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5170/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5170/comments
https://api.github.com/repos/huggingface/datasets/issues/5170/events
https://github.com/huggingface/datasets/issues/5170
1,425,301,835
I_kwDODunzps5U9GFL
5,170
[Caching] Deterministic hashing of torch tensors
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
0
"2022-10-27T09:15:15"
"2022-11-02T17:18:43"
"2022-11-02T17:18:43"
MEMBER
null
Currently this fails:

```python
import torch
from datasets.fingerprint import Hasher

t = torch.tensor([1.])

def func(x):
    return t + x

hash1 = Hasher.hash(func)

t = torch.tensor([1.])

hash2 = Hasher.hash(func)
assert hash1 == hash2
```

Also, as noticed in https://discuss.huggingface.co/t/dataset-cant-cache-models-outputs/24945, using a model in a `map` function doesn't work well with caching. Indeed, the `bert-base-uncased` model has a different hash every time you reload it. Supporting torch tensors may also help in this case.

This can be fixed by registering a custom pickling function for torch tensors, as we did for other objects such as CodeType, FunctionType and Regex in `py_utils.py`.
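A rough sketch of the underlying idea, hashing tensors by value instead of by object identity (just an illustration, not the actual registration mechanism in `py_utils.py`):

```python
import torch
from datasets.fingerprint import Hasher

def tensor_fingerprint(t: torch.Tensor) -> str:
    # Reduce the tensor to a deterministic representation before hashing, so
    # two tensors with equal dtype, shape and values get the same fingerprint.
    return Hasher.hash((str(t.dtype), tuple(t.shape), t.cpu().numpy().tobytes()))

assert tensor_fingerprint(torch.tensor([1.])) == tensor_fingerprint(torch.tensor([1.]))
```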
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5170/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5165/comments
https://api.github.com/repos/huggingface/datasets/issues/5165/events
https://github.com/huggingface/datasets/issues/5165
1,423,616,677
I_kwDODunzps5U2qql
5,165
Memory explosion when trying to access 4d tensors in datasets cast to torch or np
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2022-10-26T08:14:47"
"2022-10-26T08:14:47"
null
CONTRIBUTOR
null
### Describe the bug

When trying to access an item by index in a `datasets.Dataset` cast to torch/np using `set_format` or `with_format`, we get a memory explosion if the item contains 4d (or above) tensors.

### Steps to reproduce the bug

MWE:

```python
from datasets import load_dataset
import numpy as np

def create_4d_tensor(item):
    i = item["num_nodes"]
    item["x_big"] = np.random.rand(i, 2*i, int(i/2), 1) + 1  # we create a big 4d tensor
    return item

if __name__ == "__main__":
    dataset = load_dataset(path=f"graphs-datasets/PROTEINS")

    # This works
    print(dataset["train"].format)
    print(dataset["train"][0].keys())

    dataset = dataset.map(
        create_4d_tensor,
        batched=False,
        writer_batch_size=100,
    )
    # This works
    print(dataset["train"].format)
    print(dataset["train"][0].keys())

    dataset.set_format("torch")
    print(dataset["train"].format)
    # This gets killed :(
    print(dataset["train"][0].keys())
```

The problem likely comes from `format_table` [here](https://cs.github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/src/datasets/arrow_dataset.py#L2328).

### Expected behavior

No memory explosion when trying to access dataset items after the cast.

### Environment info

- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
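A possible interim workaround sketch (variable names taken from the MWE above): leave the dataset unformatted and convert only the item you access, avoiding the eager torch conversion done by `set_format("torch")`.

```python
import torch

# Keep plain python objects at access time, then convert on demand.
item = dataset["train"].with_format(None)[0]
x_big = torch.as_tensor(item["x_big"])
```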
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5165/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5162/comments
https://api.github.com/repos/huggingface/datasets/issues/5162/events
https://github.com/huggingface/datasets/issues/5162
1,422,461,112
I_kwDODunzps5UyQi4
5,162
Pip-compile: Could not find a version that matches dill<0.3.6,>=0.3.6
{ "login": "Rijgersberg", "id": 8604946, "node_id": "MDQ6VXNlcjg2MDQ5NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8604946?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rijgersberg", "html_url": "https://github.com/Rijgersberg", "followers_url": "https://api.github.com/users/Rijgersberg/followers", "following_url": "https://api.github.com/users/Rijgersberg/following{/other_user}", "gists_url": "https://api.github.com/users/Rijgersberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rijgersberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rijgersberg/subscriptions", "organizations_url": "https://api.github.com/users/Rijgersberg/orgs", "repos_url": "https://api.github.com/users/Rijgersberg/repos", "events_url": "https://api.github.com/users/Rijgersberg/events{/privacy}", "received_events_url": "https://api.github.com/users/Rijgersberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
7
"2022-10-25T13:23:50"
"2022-11-14T08:25:37"
"2022-10-28T05:38:15"
NONE
null
### Describe the bug

When using `pip-compile` (part of `pip-tools`) to generate a pinned requirements file that includes `datasets`, a version conflict on `dill` appears. It is caused by a transitive dependency conflict between `datasets` and `multiprocess`.

### Steps to reproduce the bug

```bash
$ echo "datasets" > requirements.in
$ pip install pip-tools
$ pip-compile requirements.in
Could not find a version that matches dill<0.3.6,>=0.3.6 (from datasets==2.6.1->-r requirements.in (line 1))
Tried: 0.2, 0.2, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.2.3, 0.2.3, 0.2.4, 0.2.4, 0.2.5, 0.2.5, 0.2.6, 0.2.7, 0.2.7.1, 0.2.8, 0.2.8.1, 0.2.8.2, 0.2.9, 0.3.0, 0.3.1, 0.3.1.1, 0.3.2, 0.3.3, 0.3.3, 0.3.4, 0.3.4, 0.3.5, 0.3.5, 0.3.5.1, 0.3.5.1, 0.3.6, 0.3.6
Skipped pre-versions: 0.1a1, 0.2a1, 0.2a1, 0.2b1, 0.2b1
There are incompatible versions in the resolved dependencies:
  dill<0.3.6 (from datasets==2.6.1->-r requirements.in (line 1))
  dill>=0.3.6 (from multiprocess==0.70.14->datasets==2.6.1->-r requirements.in (line 1))
```

### Expected behavior

A correctly generated `requirements.txt` file with pinned dependencies.

### Environment info

Tested with versions `2.6.1`, `2.6.0` and `2.5.2` on Python 3.8 and 3.10 on Ubuntu 20.04 LTS, and Python 3.10 on macOS 12.6 (M1).
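Until the pins are relaxed upstream, one possible workaround suggested by the resolver output above is to constrain `multiprocess` so that its `dill` requirement stays inside the range `datasets` accepts (the exact version bound below is an assumption based on that output):

```text
# requirements.in (hypothetical workaround)
datasets==2.6.1
multiprocess<0.70.14  # 0.70.14 is the release that requires dill>=0.3.6
```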
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5162/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5162/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5161/comments
https://api.github.com/repos/huggingface/datasets/issues/5161/events
https://github.com/huggingface/datasets/issues/5161
1,422,371,748
I_kwDODunzps5Ux6uk
5,161
Dataset can’t cache model’s outputs
{ "login": "jongjyh", "id": 37979232, "node_id": "MDQ6VXNlcjM3OTc5MjMy", "avatar_url": "https://avatars.githubusercontent.com/u/37979232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jongjyh", "html_url": "https://github.com/jongjyh", "followers_url": "https://api.github.com/users/jongjyh/followers", "following_url": "https://api.github.com/users/jongjyh/following{/other_user}", "gists_url": "https://api.github.com/users/jongjyh/gists{/gist_id}", "starred_url": "https://api.github.com/users/jongjyh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jongjyh/subscriptions", "organizations_url": "https://api.github.com/users/jongjyh/orgs", "repos_url": "https://api.github.com/users/jongjyh/repos", "events_url": "https://api.github.com/users/jongjyh/events{/privacy}", "received_events_url": "https://api.github.com/users/jongjyh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-10-25T12:19:00"
"2022-11-03T16:12:52"
"2022-11-03T16:12:51"
NONE
null
### Describe the bug

Hi, I'm trying to cache some outputs of a teacher model (knowledge distillation) using the `map` function of the Dataset library, but every time I run my code, all the sequences are recomputed. I tested a BERT model like this and got a different hash every single run, so any idea how to deal with this?

### Steps to reproduce the bug

1. Run the code below
2. Get a different hash each time

```python
from transformers import BertModel
from transformers import AutoTokenizer
import torch

token = ['hello']
model = BertModel.from_pretrained("bert-base-uncased").eval()
tok = AutoTokenizer.from_pretrained("bert-base-uncased")

def abcd():
    with torch.no_grad():
        out = model(**tok(token, return_tensors='pt'))[0]
        # out = tok(token)
    return out

from datasets.fingerprint import Hasher

my_func = abcd
print(Hasher.hash(my_func))
print(abcd())
```

### Expected behavior

I want to cache all the model outputs.

### Environment info

datasets: 2.5.0
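One hedged workaround sketch while hashing of model closures is non-deterministic: give the `map` call an explicit fingerprint, so the cache lookup does not depend on hashing the function (`compute_teacher_outputs` and `dataset` are placeholder names):

```python
# The fingerprint string is arbitrary; bump it whenever the teacher model or
# the preprocessing logic changes, otherwise stale cache files are reused.
cached = dataset.map(compute_teacher_outputs, new_fingerprint="teacher-bert-base-v1")
```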
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5161/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5160/comments
https://api.github.com/repos/huggingface/datasets/issues/5160/events
https://github.com/huggingface/datasets/issues/5160
1,422,193,938
I_kwDODunzps5UxPUS
5,160
Automatically add filename for image/audio folder
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }, { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
10
"2022-10-25T09:56:49"
"2022-10-26T16:51:46"
null
MEMBER
null
### Feature request

When creating a custom audio or image dataset, it would be great to automatically have access to the filename. It should be both:

a) Automatically displayed in the viewer
b) Automatically added as a column to the dataset when doing `load_dataset`

In `diffusers` our tests rely quite heavily on images and audio files now, and it's a bit tedious at the moment to download specific images from a datasets repo. E.g. we have a dataset of images for tests in `diffusers`: https://huggingface.co/datasets/hf-internal-testing/diffusers-images where it would be extremely nice to have direct access to the filename both visually on the datasets page (@severo) as well as via the `load_dataset` function.

We currently have some awkward functionality to download images by path name: https://github.com/huggingface/diffusers/blob/2fb8fafa4b761f6fc144cf75a6f6f0ea6af3a1c1/src/diffusers/utils/testing_utils.py#L131

It would be much nicer to just go over `load_dataset(...)`.

### Motivation

Intuitively, the filename is something people understand directly. E.g. if you upload a folder of images online, it's nice if you recognize the image as well as the filename next to it directly, and that you're able to use it right away. The label, on the other hand, is less intuitive to understand, as you haven't added it yourself.

### Your contribution

Not sure if I have the time to add it myself anytime soon, but it would help us a lot for `diffusers`.
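A hedged sketch of one workaround that is possible today: disable decoding so each example exposes its underlying file path, then copy that path into a regular column (the data directory is a placeholder):

```python
from datasets import load_dataset, Image

ds = load_dataset("imagefolder", data_dir="path/to/images", split="train")
# With decode=False, each "image" value is a dict with "bytes" and "path" keys.
ds = ds.cast_column("image", Image(decode=False))
ds = ds.map(lambda ex: {"file_name": ex["image"]["path"]})
```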
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5160/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5158/comments
https://api.github.com/repos/huggingface/datasets/issues/5158/events
https://github.com/huggingface/datasets/issues/5158
1,422,059,287
I_kwDODunzps5UwucX
5,158
Fix language and license tag names in all Hub datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4564477500, "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution", "name": "dataset contribution", "color": "0e8a16", "default": false, "description": "Contribution to a dataset script" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
6
"2022-10-25T08:19:29"
"2022-10-25T11:27:26"
"2022-10-25T10:42:19"
MEMBER
null
While working on this:
- #5137

we realized there are still many datasets with the deprecated "languages" and "licenses" tag names (instead of "language" and "license").

This is a blocking issue: no subsequent PR can be opened to modify their metadata; a ValueError will be thrown.

We should fix the "language" and "license" tag names in all Hub datasets.

TODO:
- [x] Fix language and license tag names in 402 Hub datasets

CC: @julien-c
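For reference, a sketch of how one dataset's tags could be set programmatically with `huggingface_hub` (the repo id and values are placeholders, and `metadata_update` is assumed to be available in recent `huggingface_hub` versions):

```python
from huggingface_hub import metadata_update

# Set the current singular keys; removing the deprecated plural keys may
# still require editing the dataset card directly.
metadata_update(
    "username/some_dataset",
    {"language": ["en"], "license": ["mit"]},
    repo_type="dataset",
    overwrite=True,
)
```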
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5158/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5157
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5157/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5157/comments
https://api.github.com/repos/huggingface/datasets/issues/5157/events
https://github.com/huggingface/datasets/issues/5157
1,421,703,577
I_kwDODunzps5UvXmZ
5,157
Consistent caching between python and jupyter
{ "login": "gpucce", "id": 32967787, "node_id": "MDQ6VXNlcjMyOTY3Nzg3", "avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gpucce", "html_url": "https://github.com/gpucce", "followers_url": "https://api.github.com/users/gpucce/followers", "following_url": "https://api.github.com/users/gpucce/following{/other_user}", "gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}", "starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpucce/subscriptions", "organizations_url": "https://api.github.com/users/gpucce/orgs", "repos_url": "https://api.github.com/users/gpucce/repos", "events_url": "https://api.github.com/users/gpucce/events{/privacy}", "received_events_url": "https://api.github.com/users/gpucce/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "gpucce", "id": 32967787, "node_id": "MDQ6VXNlcjMyOTY3Nzg3", "avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gpucce", "html_url": "https://github.com/gpucce", "followers_url": "https://api.github.com/users/gpucce/followers", "following_url": "https://api.github.com/users/gpucce/following{/other_user}", "gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}", "starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpucce/subscriptions", "organizations_url": "https://api.github.com/users/gpucce/orgs", "repos_url": "https://api.github.com/users/gpucce/repos", "events_url": "https://api.github.com/users/gpucce/events{/privacy}", "received_events_url": "https://api.github.com/users/gpucce/received_events", "type": "User", "site_admin": false }
[ { "login": "gpucce", "id": 32967787, "node_id": "MDQ6VXNlcjMyOTY3Nzg3", "avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gpucce", "html_url": "https://github.com/gpucce", "followers_url": "https://api.github.com/users/gpucce/followers", "following_url": "https://api.github.com/users/gpucce/following{/other_user}", "gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}", "starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpucce/subscriptions", "organizations_url": "https://api.github.com/users/gpucce/orgs", "repos_url": "https://api.github.com/users/gpucce/repos", "events_url": "https://api.github.com/users/gpucce/events{/privacy}", "received_events_url": "https://api.github.com/users/gpucce/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-10-25T01:34:33"
"2022-11-02T15:43:22"
"2022-11-02T15:43:22"
CONTRIBUTOR
null
### Feature request

I hope this is not my mistake. Currently, if I use `load_dataset` from a python session on a custom dataset to do the preprocessing, it will be saved in the cache, and in other python sessions it will be loaded from the cache. However, calling the same from a jupyter notebook does not work, meaning the preprocessing starts from scratch.

If adjusting the hashes is impossible, is there a way to manually set the dataset fingerprint to "force" this behaviour?

### Motivation

If this is not already the case and I am doing something wrong, it would be useful to have the two fingerprints consistent, so one can create the dataset once and then try small things in jupyter without preprocessing everything again.

### Your contribution

I am happy to try a PR if you give me some pointers on where the changes should happen.
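A hedged sketch of manual control that exists today: pin the cache file on the `map` call itself, so the script and the notebook read and write the same preprocessed arrow file regardless of how the function hashes (`preprocess` and the path are placeholders):

```python
ds = ds.map(
    preprocess,
    cache_file_name="/path/to/cache/preprocessed.arrow",  # shared across sessions
)
```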
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5157/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5156/comments
https://api.github.com/repos/huggingface/datasets/issues/5156/events
https://github.com/huggingface/datasets/issues/5156
1,421,667,125
I_kwDODunzps5UvOs1
5,156
Unable to download dataset using Azure Data Lake Gen 2
{ "login": "clarissesimoes", "id": 87379512, "node_id": "MDQ6VXNlcjg3Mzc5NTEy", "avatar_url": "https://avatars.githubusercontent.com/u/87379512?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clarissesimoes", "html_url": "https://github.com/clarissesimoes", "followers_url": "https://api.github.com/users/clarissesimoes/followers", "following_url": "https://api.github.com/users/clarissesimoes/following{/other_user}", "gists_url": "https://api.github.com/users/clarissesimoes/gists{/gist_id}", "starred_url": "https://api.github.com/users/clarissesimoes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clarissesimoes/subscriptions", "organizations_url": "https://api.github.com/users/clarissesimoes/orgs", "repos_url": "https://api.github.com/users/clarissesimoes/repos", "events_url": "https://api.github.com/users/clarissesimoes/events{/privacy}", "received_events_url": "https://api.github.com/users/clarissesimoes/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-10-25T00:43:18"
"2022-11-17T23:37:09"
"2022-11-17T23:37:08"
NONE
null
### Describe the bug

When using the DatasetBuilder method with the credentials for the cloud storage Azure Data Lake (adl) Gen2, the following error is shown:

```
Traceback (most recent call last):
  File "download_hf_dataset.py", line 143, in <module>
    main()
  File "download_hf_dataset.py", line 102, in main
    builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet")
  File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/datasets/builder.py", line 671, in download_and_prepare
    fs_token_paths = fsspec.get_fs_token_paths(output_dir, storage_options=storage_options)
  File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/core.py", line 639, in get_fs_token_paths
    fs = cls(**options)
  File "/home/clarisses/miniconda3/envs/hf_datasets_env/lib/python3.8/site-packages/fsspec/spec.py", line 76, in __call__
    obj = super().__call__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'account_name'
```

If I don't pass the `storage_options` argument (leave it as None), it requires the credentials used in ADL Gen 1:

`TypeError: __init__() missing 3 required positional arguments: 'tenant_id', 'client_id', and 'client_secret'`

Thus, it is not possible to download a dataset to the cloud using Azure Data Lake (adl) Gen2.

### Steps to reproduce the bug

Assuming that you have an Azure account and a Storage Account that can be used to reproduce:

1. Create a dict in the format used to connect to Azure Data Lake Gen 2:

```python
storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY}  # gen 2 filesystem
```

2. Create a dataset builder for any HF-hosted dataset:

```python
builder = load_dataset_builder(dataset_name)
```

3. Try to download the dataset, passing the storage_options as an argument:

```python
save_dir = 'adl://my_save_dir'
builder.download_and_prepare(save_dir, storage_options=storage_options, max_shard_size="250MB", file_format="parquet")
```

### Expected behavior

Not seeing the error mentioned above and being able to download the dataset to the provided path on ADL.

### Environment info

- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
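Possibly relevant: in `adlfs`, the `adl://` protocol maps to the Gen1 `AzureDatalakeFileSystem` (hence the tenant/client credential errors), while Gen2 storage is reached through the `abfs://` (or `az://`) protocol, whose filesystem does accept `account_name`/`account_key`. A hedged sketch (container and path are placeholders):

```python
storage_options = {"account_name": ACCOUNT_NAME, "account_key": ACCOUNT_KEY}
# abfs:// routes to adlfs.AzureBlobFileSystem, the Gen2 filesystem.
builder.download_and_prepare(
    "abfs://container/my_save_dir",
    storage_options=storage_options,
    max_shard_size="250MB",
    file_format="parquet",
)
```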
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5156/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5153/comments
https://api.github.com/repos/huggingface/datasets/issues/5153/events
https://github.com/huggingface/datasets/issues/5153
1,420,833,457
I_kwDODunzps5UsDKx
5,153
default Image/AudioFolder infers labels when there are no metadata files, even if there is only one dir
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
1
"2022-10-24T13:28:18"
"2022-11-15T16:31:10"
"2022-11-15T16:31:09"
CONTRIBUTOR
null
### Describe the bug

By default, FolderBasedBuilder infers labels if there are no metadata files, even if it's meaningless (for example, when the files are in a single directory or in the root folder; see this repo as an example: https://huggingface.co/datasets/patrickvonplaten/audios). This corner case comes up for quick exploration of images or audios on the Hub.

### Steps to reproduce the bug

If you have a directory like this:

```
repo
    image1.jpg
    image2.jpg
    image3.jpg
```

or

```
repo
    data
        image1.jpg
        image2.jpg
        image3.jpg
```

doing `ds = load_dataset(repo)` would create a `label` feature:

```python
print(ds["train"][0])
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 0}
```

Also, if you have the following structure:

```
repo
    data
        image1.jpg
        image2.jpg
        image3.jpg
    image4.jpg
    image5.jpg
    image6.jpg
```

it will infer two labels:

```python
print(ds["train"][0])
print(ds["train"][-1])
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x375 at 0x7FB5326468E0>, 'label': 1}
>> {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x415 at 0x7FB5326555B0>, 'label': 0}
```

### Expected behavior

We should have only one base feature (Image/Audio) in such cases.

### Environment info

all versions of `datasets`
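For completeness, a sketch of the existing opt-out: the folder-based builders accept a `drop_labels` flag that skips label inference entirely (the repo path is a placeholder):

```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path/to/repo", drop_labels=True)
print(ds["train"][0].keys())  # only the "image" feature, no inferred "label"
```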
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5153/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5152/comments
https://api.github.com/repos/huggingface/datasets/issues/5152/events
https://github.com/huggingface/datasets/issues/5152
1,420,808,919
I_kwDODunzps5Ur9LX
5,152
refactor FolderBasedBuilder and Image/AudioFolder tests
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 2851292821, "node_id": "MDU6TGFiZWwyODUxMjkyODIx", "url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring", "name": "refactoring", "color": "B67A40", "default": false, "description": "Restructuring existing code without changing its external behavior" } ]
open
false
null
[]
null
0
"2022-10-24T13:11:52"
"2022-10-24T13:11:52"
null
CONTRIBUTOR
null
Tests for FolderBasedBuilder, ImageFolder and AudioFolder are mostly duplicating each other. They need to be refactored and Audio/ImageFolder should have only tests specific to the loader.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5152/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5151
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5151/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5151/comments
https://api.github.com/repos/huggingface/datasets/issues/5151/events
https://github.com/huggingface/datasets/issues/5151
1,420,791,163
I_kwDODunzps5Ur417
5,151
Add support to create different configs with `push_to_hub` (+ inferring configs from directories with package managers?)
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
1
"2022-10-24T12:59:18"
"2022-11-04T14:55:20"
null
CONTRIBUTOR
null
Currently, one can only push different splits within the single default config of a dataset. It would be nice to allow something like:
```
ds.push_to_hub(repo_name, config=config_name)
```
I'm not sure, but this will probably require changes to the `data_files.py` patterns. If so, it would also allow creating different configs for packaged-module datasets.
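To illustrate the proposal, a hypothetical usage sketch built on the signature suggested above (repo and config names are made up; this API does not exist yet):
```python
# Hypothetical: push two configs of the same dataset to one repo
ds_en.push_to_hub("my-user/my-dataset", config="en")
ds_fr.push_to_hub("my-user/my-dataset", config="fr")
```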
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5151/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/5151/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5150
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5150/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5150/comments
https://api.github.com/repos/huggingface/datasets/issues/5150/events
https://github.com/huggingface/datasets/issues/5150
1,420,684,999
I_kwDODunzps5Ure7H
5,150
Problems after upgrading to 2.6.1
{ "login": "pietrolesci", "id": 61748653, "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pietrolesci", "html_url": "https://github.com/pietrolesci", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "repos_url": "https://api.github.com/users/pietrolesci/repos", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
8
"2022-10-24T11:32:36"
"2023-01-03T15:26:00"
null
NONE
null
### Describe the bug Loading a dataset_dict from disk with `load_from_disk` now raises a `KeyError: "length"` that did not occur in v2.5.2. Context:
- Each individual dataset in the dict is created with `Dataset.from_pandas`
- The dataset_dict is created from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})`
- The pandas dataframe, besides text columns, has a column containing a dictionary, with potentially different keys in each row. `Dataset.from_pandas` correctly adds `key: None` to the dictionaries in each row so that the schema can be inferred.

### Steps to reproduce the bug Steps to reproduce:
- Upgrade to datasets==2.6.1
- Create a dataset from a pandas dataframe with `Dataset.from_pandas`
- Create a dataset_dict from a dict of `Dataset`s, e.g., `DatasetDict({"train": train_ds, "validation": val_ds})`
- Save to disk with the `save_to_disk` function

### Expected behavior Same as in v2.5.2, that is, loading from disk without errors ### Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.4.209-129.367.amzn2int.x86_64-x86_64-with-glibc2.26 - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
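A minimal reproduction sketch of the steps above (column names and the save path are hypothetical):
```python
import pandas as pd
from datasets import Dataset, DatasetDict, load_from_disk

# Dataframe with a dict column whose keys differ across rows
df = pd.DataFrame({
    "text": ["a", "b"],
    "meta": [{"x": 1}, {"y": 2}],
})
train_ds = Dataset.from_pandas(df)
val_ds = Dataset.from_pandas(df)

dsd = DatasetDict({"train": train_ds, "validation": val_ds})
dsd.save_to_disk("tmp_dsd")          # path is a placeholder
reloaded = load_from_disk("tmp_dsd")  # raises KeyError: 'length' on v2.6.1
```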
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5150/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5148/comments
https://api.github.com/repos/huggingface/datasets/issues/5148/events
https://github.com/huggingface/datasets/issues/5148
1,420,219,222
I_kwDODunzps5UptNW
5,148
Cannot find the rvl_cdip dataset
{ "login": "santule", "id": 20509836, "node_id": "MDQ6VXNlcjIwNTA5ODM2", "avatar_url": "https://avatars.githubusercontent.com/u/20509836?v=4", "gravatar_id": "", "url": "https://api.github.com/users/santule", "html_url": "https://github.com/santule", "followers_url": "https://api.github.com/users/santule/followers", "following_url": "https://api.github.com/users/santule/following{/other_user}", "gists_url": "https://api.github.com/users/santule/gists{/gist_id}", "starred_url": "https://api.github.com/users/santule/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/santule/subscriptions", "organizations_url": "https://api.github.com/users/santule/orgs", "repos_url": "https://api.github.com/users/santule/repos", "events_url": "https://api.github.com/users/santule/events{/privacy}", "received_events_url": "https://api.github.com/users/santule/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-10-24T04:57:42"
"2022-10-24T12:23:47"
"2022-10-24T06:25:28"
NONE
null
Hi, I am trying to use `load_dataset` to load the official "rvl_cdip" dataset but am getting an error.
```
dataset = load_dataset("rvl_cdip")
```
Couldn't find 'rvl_cdip' on the Hugging Face Hub either: FileNotFoundError: Couldn't find the file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/rvl_cdip/rvl_cdip.py Regards,
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5148/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5148/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5147/comments
https://api.github.com/repos/huggingface/datasets/issues/5147/events
https://github.com/huggingface/datasets/issues/5147
1,419,522,275
I_kwDODunzps5UnDDj
5,147
Allow ignoring kwargs inside fn_kwargs during dataset.map's fingerprinting
{ "login": "falcaopetri", "id": 8387736, "node_id": "MDQ6VXNlcjgzODc3MzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8387736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/falcaopetri", "html_url": "https://github.com/falcaopetri", "followers_url": "https://api.github.com/users/falcaopetri/followers", "following_url": "https://api.github.com/users/falcaopetri/following{/other_user}", "gists_url": "https://api.github.com/users/falcaopetri/gists{/gist_id}", "starred_url": "https://api.github.com/users/falcaopetri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/falcaopetri/subscriptions", "organizations_url": "https://api.github.com/users/falcaopetri/orgs", "repos_url": "https://api.github.com/users/falcaopetri/repos", "events_url": "https://api.github.com/users/falcaopetri/events{/privacy}", "received_events_url": "https://api.github.com/users/falcaopetri/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
4
"2022-10-22T21:46:38"
"2022-11-01T22:19:07"
null
NONE
null
### Feature request `dataset.map` accepts a `fn_kwargs` dict that is passed to `fn`. Currently, the whole `fn_kwargs` is used by `fingerprint_transform` to calculate the new fingerprint. I'd like to be able to tell `fingerprint_transform` which keys of `fn_kwargs` should/shouldn't be taken into account during hashing. Of course, users should be aware to properly use this new feature, just like the internal usages of `fingerprint_transform` [do](https://github.com/huggingface/datasets/blob/2699593b33ee63d17aad2a2bfddedd38a8df57b8/src/datasets/arrow_dataset.py#L2700). ### Motivation This is originally motivated by https://github.com/huggingface/transformers/pull/18351#issuecomment-1263588680. Nonetheless, consider a more general processing function that accepts a kwarg that does not influence its output:
```python
def fn(example, verbose=False):
    ...
```
Then `dataset.map(fn, fn_kwargs={"verbose": True})` would not benefit from dataset caching. I'm not sure if other methods in the `Dataset` API could benefit from this feature. ### Your contribution Based on `fingerprint_transform`'s `wrapper` function [here](https://github.com/huggingface/datasets/blob/c59cc34fcd2a369d27b77cc678017f5976a926a9/src/datasets/fingerprint.py#L443), it seems to me that it should be possible to make `.map`/`._map_single` accept something like `fn_use_fingerprint_kwargs`/`fn_ignore_fingerprint_kwargs` (probably another arg name). This would then be used by `fingerprint_transform.wrapper` to hash the transformation more flexibly. I could contribute a PR if this feature and approach look good to you.
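As a workaround today, `Dataset.map` accepts a `new_fingerprint` argument, so the cache key can be pinned manually when a kwarg is known not to affect the output. A minimal sketch (the fingerprint string is an arbitrary user-chosen label):
```python
# Pin the fingerprint so the cache is reused regardless of the verbose flag;
# this is only safe because verbose does not change the function's output.
ds_processed = ds.map(
    fn,
    fn_kwargs={"verbose": True},
    new_fingerprint="fn-preprocess-v1",  # user-chosen cache key
)
```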
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5147/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5145
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5145/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5145/comments
https://api.github.com/repos/huggingface/datasets/issues/5145/events
https://github.com/huggingface/datasets/issues/5145
1,418,005,452
I_kwDODunzps5UhQvM
5,145
Dataset order is not deterministic with ZIP archives and `iter_files`
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
8
"2022-10-21T09:00:03"
"2022-10-27T09:51:49"
"2022-10-27T09:51:10"
CONTRIBUTOR
null
### Describe the bug For the `beans` dataset (I did not try others), the order of samples is not the same on different machines. Tested on my local laptop, a GitHub Actions machine, and an EC2 instance: the three yield different orders. ### Steps to reproduce the bug In a clean docker container or conda environment with datasets==2.6.1, run
```python
from datasets import load_dataset
from pprint import pprint
data = load_dataset("beans", split="validation")
pprint(data["image_file_path"])
```
### Expected behavior The order of the images is the same on all machines. ### Environment info On the EC2 instance:
```
- `datasets` version: 2.6.1
- Platform: Linux-4.14.291-218.527.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
- Numpy version: not checked
```
On my local laptop:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Numpy version: 1.23.1
```
On GitHub Actions:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-azure-x86_64-with-glibc2.2.5
- Python version: 3.8.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
- Numpy version: 1.23.4
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5145/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5144
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5144/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5144/comments
https://api.github.com/repos/huggingface/datasets/issues/5144/events
https://github.com/huggingface/datasets/issues/5144
1,417,974,731
I_kwDODunzps5UhJPL
5,144
Inconsistent documentation on map remove_columns
{ "login": "zhaowei-wang-nlp", "id": 22047467, "node_id": "MDQ6VXNlcjIyMDQ3NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhaowei-wang-nlp", "html_url": "https://github.com/zhaowei-wang-nlp", "followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers", "following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions", "organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs", "repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos", "events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}", "received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
null
[]
null
3
"2022-10-21T08:37:53"
"2022-11-15T14:15:10"
"2022-11-15T14:15:10"
NONE
null
### Describe the bug The [process](https://huggingface.co/docs/datasets/process) page says this about the `remove_columns` parameter of the `map` function:

> When you remove a column, it is only removed after the example has been provided to the mapped function.

So it seems that `remove_columns` removes columns after the mapped function runs. However, [the documentation of the `map` function itself](https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.Dataset.map.remove_columns) says:

> Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in `remove_columns`, these columns will be kept.

So one page says "after the mapped function" and another says "before the mapped function." Is there something wrong? ### Steps to reproduce the bug Not about code. ### Expected behavior Consistent descriptions of the behavior of the `remove_columns` parameter of the `map` function. ### Environment info datasets v2.6.0
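For what it's worth, a quick check suggests both statements describe the same behavior from different angles: the mapped function still receives the column, and the removal is applied to the output. A minimal sketch:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})

def fn(example):
    assert "a" in example  # the mapped function still sees the column
    return {"c": example["a"] + 1}

out = ds.map(fn, remove_columns=["a"])
print(out.column_names)  # ['b', 'c'] -- 'a' is removed from the output
```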
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5144/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5143
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5143/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5143/comments
https://api.github.com/repos/huggingface/datasets/issues/5143/events
https://github.com/huggingface/datasets/issues/5143
1,416,837,186
I_kwDODunzps5UczhC
5,143
DownloadManager Git LFS support
{ "login": "Muennighoff", "id": 62820084, "node_id": "MDQ6VXNlcjYyODIwMDg0", "avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Muennighoff", "html_url": "https://github.com/Muennighoff", "followers_url": "https://api.github.com/users/Muennighoff/followers", "following_url": "https://api.github.com/users/Muennighoff/following{/other_user}", "gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions", "organizations_url": "https://api.github.com/users/Muennighoff/orgs", "repos_url": "https://api.github.com/users/Muennighoff/repos", "events_url": "https://api.github.com/users/Muennighoff/events{/privacy}", "received_events_url": "https://api.github.com/users/Muennighoff/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
2
"2022-10-20T15:29:29"
"2022-10-20T17:17:10"
"2022-10-20T17:17:10"
CONTRIBUTOR
null
### Feature request Maybe I'm mistaken, but the `DownloadManager` does not support extracting Git LFS files out of the box, right? Using `dl_manager.download()` or `dl_manager.download_and_extract()` still returns LFS files as far as I can tell. Is there a good way to write a dataset loading script for a repo with LFS files? ### Motivation / ### Your contribution /
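One approach that should work: Hub `resolve` URLs (and `huggingface_hub`'s download helper) serve the actual content of LFS-tracked files rather than the pointer. A hedged sketch, with a hypothetical repo and file name:
```python
from huggingface_hub import hf_hub_download

# Hypothetical repo/file; hf_hub_download resolves LFS pointers to real content
path = hf_hub_download(
    repo_id="some-org/some-repo",
    filename="data.tar.gz",
    repo_type="dataset",
)
```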
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5143/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5143/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5137/comments
https://api.github.com/repos/huggingface/datasets/issues/5137/events
https://github.com/huggingface/datasets/issues/5137
1,414,642,723
I_kwDODunzps5UUbwj
5,137
Align task tags in dataset metadata
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4564477500, "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution", "name": "dataset contribution", "color": "0e8a16", "default": false, "description": "Contribution to a dataset script" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
14
"2022-10-19T09:41:42"
"2022-11-10T05:25:58"
"2022-10-25T06:17:00"
MEMBER
null
## Describe Once we have agreed on a common naming for task tags for all open source projects, we should align on them. ## Steps
- [x] Align task tags in canonical datasets
  - [x] task_categories: 4 datasets
  - [x] task_ids (by @lhoestq)
- [x] Open PRs in community datasets
  - [x] task_categories: 451 datasets
  - [x] task_ids: 556 datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5137/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5135/comments
https://api.github.com/repos/huggingface/datasets/issues/5135/events
https://github.com/huggingface/datasets/issues/5135
1,414,413,519
I_kwDODunzps5UTjzP
5,135
Update docs once dataset scripts transferred to the Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2022-10-19T06:58:19"
"2022-10-20T08:10:01"
"2022-10-20T08:10:01"
MEMBER
null
## Describe the bug As discussed in:
- https://github.com/huggingface/hub-docs/pull/423#pullrequestreview-1146083701

we should update our docs once dataset scripts have been transferred to the Hub (and removed from GitHub):
- #4974

Concretely:
- [x] Datasets on GitHub (legacy): https://huggingface.co/docs/datasets/main/en/share#datasets-on-github-legacy
- [x] ADD_NEW_DATASET: https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md
- ...

This PR complements the work of:
- #5067

This PR is a follow-up of PRs:
- #3777

CC: @julien-c
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5135/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5134/comments
https://api.github.com/repos/huggingface/datasets/issues/5134/events
https://github.com/huggingface/datasets/issues/5134
1,413,623,687
I_kwDODunzps5UQi-H
5,134
Raise ImportError instead of OSError if required extraction library is not installed
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "ayushthe1", "id": 114604338, "node_id": "U_kgDOBtS5Mg", "avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayushthe1", "html_url": "https://github.com/ayushthe1", "followers_url": "https://api.github.com/users/ayushthe1/followers", "following_url": "https://api.github.com/users/ayushthe1/following{/other_user}", "gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions", "organizations_url": "https://api.github.com/users/ayushthe1/orgs", "repos_url": "https://api.github.com/users/ayushthe1/repos", "events_url": "https://api.github.com/users/ayushthe1/events{/privacy}", "received_events_url": "https://api.github.com/users/ayushthe1/received_events", "type": "User", "site_admin": false }
[ { "login": "ayushthe1", "id": 114604338, "node_id": "U_kgDOBtS5Mg", "avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayushthe1", "html_url": "https://github.com/ayushthe1", "followers_url": "https://api.github.com/users/ayushthe1/followers", "following_url": "https://api.github.com/users/ayushthe1/following{/other_user}", "gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions", "organizations_url": "https://api.github.com/users/ayushthe1/orgs", "repos_url": "https://api.github.com/users/ayushthe1/repos", "events_url": "https://api.github.com/users/ayushthe1/events{/privacy}", "received_events_url": "https://api.github.com/users/ayushthe1/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-10-18T17:53:46"
"2022-10-25T15:56:59"
"2022-10-25T15:56:59"
CONTRIBUTOR
null
According to the official Python docs, `OSError` should be thrown in the following situations:

> This exception is raised when a system function returns a system-related error, including I/O failures such as "file not found" or "disk full" (not for illegal argument types or other incidental errors).

Hence, it makes more sense to raise `ImportError` instead of `OSError` when the required extraction/decompression library is not installed.
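A minimal sketch of the proposed pattern, using `zstandard` as an example extraction dependency (the error message wording is hypothetical):
```python
import importlib.util

# Raise ImportError, not OSError, when the optional dependency is missing
if importlib.util.find_spec("zstandard") is None:
    raise ImportError(
        "Please `pip install zstandard` to be able to extract .zst files."
    )
```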
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5134/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5134/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5133/comments
https://api.github.com/repos/huggingface/datasets/issues/5133/events
https://github.com/huggingface/datasets/issues/5133
1,413,623,462
I_kwDODunzps5UQi6m
5,133
Tensor operation not functioning in dataset mapping
{ "login": "xinghaow99", "id": 50691954, "node_id": "MDQ6VXNlcjUwNjkxOTU0", "avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xinghaow99", "html_url": "https://github.com/xinghaow99", "followers_url": "https://api.github.com/users/xinghaow99/followers", "following_url": "https://api.github.com/users/xinghaow99/following{/other_user}", "gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}", "starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions", "organizations_url": "https://api.github.com/users/xinghaow99/orgs", "repos_url": "https://api.github.com/users/xinghaow99/repos", "events_url": "https://api.github.com/users/xinghaow99/events{/privacy}", "received_events_url": "https://api.github.com/users/xinghaow99/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
"2022-10-18T17:53:35"
"2022-10-19T04:15:45"
"2022-10-19T04:15:44"
NONE
null
## Describe the bug I'm doing a `torch.mean()` operation in data preprocessing, and it's not working: the printed shape suggests the mean was never applied. ## Steps to reproduce the bug
```
from transformers import pipeline
import torch
import numpy as np
from datasets import load_dataset

device = 'cuda:0'
raw_dataset = load_dataset("glue", "sst2")
feature_extraction = pipeline('feature-extraction', 'bert-base-uncased', device=device)

def extracted_data(examples):
    # feature = torch.tensor(feature_extraction(examples['sentence'], batch_size=16), device=device)
    # feature = torch.mean(feature, dim=1)
    feature = np.asarray(feature_extraction(examples['sentence'], batch_size=16)).squeeze().mean(1)
    print(feature.shape)
    return {'feature': feature}

extracted_dataset = raw_dataset.map(extracted_data, batched=True, batch_size=16)
```
## Results When running with `torch.mean()`, the shape printed out is [16, seq_len, 768], which is exactly the same as before the operation. Numpy works just fine and gives [16, 768]. ## Environment info
- `datasets` version: 2.6.1
- Platform: Linux-4.4.0-142-generic-x86_64-with-glibc2.31
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5133/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5132/comments
https://api.github.com/repos/huggingface/datasets/issues/5132/events
https://github.com/huggingface/datasets/issues/5132
1,413,607,306
I_kwDODunzps5UQe-K
5,132
Deprecate `num_proc` parameter in `DownloadManager.extract`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "ayushthe1", "id": 114604338, "node_id": "U_kgDOBtS5Mg", "avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayushthe1", "html_url": "https://github.com/ayushthe1", "followers_url": "https://api.github.com/users/ayushthe1/followers", "following_url": "https://api.github.com/users/ayushthe1/following{/other_user}", "gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions", "organizations_url": "https://api.github.com/users/ayushthe1/orgs", "repos_url": "https://api.github.com/users/ayushthe1/repos", "events_url": "https://api.github.com/users/ayushthe1/events{/privacy}", "received_events_url": "https://api.github.com/users/ayushthe1/received_events", "type": "User", "site_admin": false }
[ { "login": "ayushthe1", "id": 114604338, "node_id": "U_kgDOBtS5Mg", "avatar_url": "https://avatars.githubusercontent.com/u/114604338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayushthe1", "html_url": "https://github.com/ayushthe1", "followers_url": "https://api.github.com/users/ayushthe1/followers", "following_url": "https://api.github.com/users/ayushthe1/following{/other_user}", "gists_url": "https://api.github.com/users/ayushthe1/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayushthe1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayushthe1/subscriptions", "organizations_url": "https://api.github.com/users/ayushthe1/orgs", "repos_url": "https://api.github.com/users/ayushthe1/repos", "events_url": "https://api.github.com/users/ayushthe1/events{/privacy}", "received_events_url": "https://api.github.com/users/ayushthe1/received_events", "type": "User", "site_admin": false } ]
null
5
"2022-10-18T17:41:05"
"2022-10-25T15:56:46"
"2022-10-25T15:56:46"
CONTRIBUTOR
null
The `num_proc` parameter is only present in `DownloadManager.extract` but not in `StreamingDownloadManager.extract`, making it impossible to support streaming in the dataset scripts that use it (`openwebtext` and `the_pile_stack_exchange`). We can avoid this situation by deprecating this parameter and passing `DownloadConfig`'s `num_proc` to `map_nested` instead, as it's done in `DownloadManager.download`.
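For context, this is how the download side already exposes parallelism, and what extraction would align with. A usage sketch (the dataset name is just an example):
```python
from datasets import DownloadConfig, load_dataset

# num_proc on DownloadConfig drives map_nested's parallelism for downloads;
# the proposal is for extraction to pick it up the same way.
dl_config = DownloadConfig(num_proc=8)
ds = load_dataset("openwebtext", download_config=dl_config)
```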
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5132/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5131
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5131/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5131/comments
https://api.github.com/repos/huggingface/datasets/issues/5131/events
https://github.com/huggingface/datasets/issues/5131
1,413,534,863
I_kwDODunzps5UQNSP
5,131
WikiText 103 tokenizer hangs
{ "login": "TrentBrick", "id": 12433427, "node_id": "MDQ6VXNlcjEyNDMzNDI3", "avatar_url": "https://avatars.githubusercontent.com/u/12433427?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TrentBrick", "html_url": "https://github.com/TrentBrick", "followers_url": "https://api.github.com/users/TrentBrick/followers", "following_url": "https://api.github.com/users/TrentBrick/following{/other_user}", "gists_url": "https://api.github.com/users/TrentBrick/gists{/gist_id}", "starred_url": "https://api.github.com/users/TrentBrick/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TrentBrick/subscriptions", "organizations_url": "https://api.github.com/users/TrentBrick/orgs", "repos_url": "https://api.github.com/users/TrentBrick/repos", "events_url": "https://api.github.com/users/TrentBrick/events{/privacy}", "received_events_url": "https://api.github.com/users/TrentBrick/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
0
"2022-10-18T16:44:00"
"2022-10-18T16:44:00"
null
NONE
null
See issue here: https://github.com/huggingface/transformers/issues/19702
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5131/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5131/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5129/comments
https://api.github.com/repos/huggingface/datasets/issues/5129/events
https://github.com/huggingface/datasets/issues/5129
1,413,031,664
I_kwDODunzps5UOSbw
5,129
unexpected `cast` or `class_encode_column` result after `rename_column`
{ "login": "quaeast", "id": 35144675, "node_id": "MDQ6VXNlcjM1MTQ0Njc1", "avatar_url": "https://avatars.githubusercontent.com/u/35144675?v=4", "gravatar_id": "", "url": "https://api.github.com/users/quaeast", "html_url": "https://github.com/quaeast", "followers_url": "https://api.github.com/users/quaeast/followers", "following_url": "https://api.github.com/users/quaeast/following{/other_user}", "gists_url": "https://api.github.com/users/quaeast/gists{/gist_id}", "starred_url": "https://api.github.com/users/quaeast/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/quaeast/subscriptions", "organizations_url": "https://api.github.com/users/quaeast/orgs", "repos_url": "https://api.github.com/users/quaeast/repos", "events_url": "https://api.github.com/users/quaeast/events{/privacy}", "received_events_url": "https://api.github.com/users/quaeast/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
"2022-10-18T11:15:24"
"2022-10-19T03:02:26"
"2022-10-19T03:02:26"
NONE
null
## Describe the bug When invoking `cast` or `class_encode_column` on a column renamed by `rename_column`, it converts all the values in that column into a single value. I also ran this script on version 2.5.2, where this bug does not appear, so I switched back to the older version. ## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("amazon_reviews_multi", "en")
data = dataset['train']
data = data.remove_columns(
    [
        "review_id",
        "product_id",
        "reviewer_id",
        "review_title",
        "language",
        "product_category",
    ]
)
data = data.rename_column("review_body", "text")
data1 = data.class_encode_column("stars")
print(set(data1.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>}
data = data.rename_column("stars", "label")
print(set(data.data.columns[0]))
# output: {<pyarrow.Int32Scalar: 5>, <pyarrow.Int32Scalar: 4>, <pyarrow.Int32Scalar: 1>, <pyarrow.Int32Scalar: 3>, <pyarrow.Int32Scalar: 2>}
data2 = data.class_encode_column("label")
print(set(data2.data.columns[0]))
# output: {<pyarrow.Int64Scalar: 0>}
```
## Expected results The last print should be: {<pyarrow.Int64Scalar: 4>, <pyarrow.Int64Scalar: 2>, <pyarrow.Int64Scalar: 3>, <pyarrow.Int64Scalar: 0>, <pyarrow.Int64Scalar: 1>} ## Actual results But it outputs: {<pyarrow.Int64Scalar: 0>} ## Environment info
- `datasets` version: 2.6.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.10.6
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5129/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5123/comments
https://api.github.com/repos/huggingface/datasets/issues/5123/events
https://github.com/huggingface/datasets/issues/5123
1,410,828,756
I_kwDODunzps5UF4nU
5,123
datasets freezes with streaming mode in multi-GPU setups
{ "login": "jackfeinmann5", "id": 59409879, "node_id": "MDQ6VXNlcjU5NDA5ODc5", "avatar_url": "https://avatars.githubusercontent.com/u/59409879?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jackfeinmann5", "html_url": "https://github.com/jackfeinmann5", "followers_url": "https://api.github.com/users/jackfeinmann5/followers", "following_url": "https://api.github.com/users/jackfeinmann5/following{/other_user}", "gists_url": "https://api.github.com/users/jackfeinmann5/gists{/gist_id}", "starred_url": "https://api.github.com/users/jackfeinmann5/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jackfeinmann5/subscriptions", "organizations_url": "https://api.github.com/users/jackfeinmann5/orgs", "repos_url": "https://api.github.com/users/jackfeinmann5/repos", "events_url": "https://api.github.com/users/jackfeinmann5/events{/privacy}", "received_events_url": "https://api.github.com/users/jackfeinmann5/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
10
"2022-10-17T03:28:16"
"2023-01-16T10:54:44"
null
NONE
null
## Describe the bug Hi. I am using this dataloader, which is meant for processing large datasets in streaming mode and is mentioned in one of the Hugging Face examples. I am using it to read c4: https://github.com/huggingface/transformers/blob/b48ac1a094e572d6076b46a9e4ed3e0ebe978afc/examples/research_projects/codeparrot/scripts/codeparrot_training.py#L22 When using multiple GPUs with accelerate on one node, the code freezes, but it works with 1 GPU: ``` 10/16/2022 14:18:46 - INFO - datasets.info - Loading Dataset Infos from /home/jack/.cache/huggingface/modules/datasets_modules/datasets/c4/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01 Steps: 0%| | 0/400000 [00:00<?, ?it/s]10/16/2022 14:18:47 - INFO - torch.utils.data.dataloader - Shared seed (135290893754684706) sent to store on rank 0 ``` # Code to reproduce Please run this code with `accelerate launch code.py` ``` from accelerate import Accelerator from accelerate.logging import get_logger from datasets import load_dataset from torch.utils.data.dataloader import DataLoader import torch from datasets import load_dataset from transformers import AutoTokenizer import torch from accelerate.logging import get_logger from torch.utils.data import IterableDataset from torch.utils.data.datapipes.iter.combinatorics import ShufflerIterDataPipe logger = get_logger(__name__) class ConstantLengthDataset(IterableDataset): """ Iterable dataset that returns constant length chunks of tokens from stream of text files. Args: tokenizer (Tokenizer): The processor used for processing the data. dataset (dataset.Dataset): Dataset with text files. infinite (bool): If True the iterator is reset after dataset reaches end else stops. max_seq_length (int): Length of token sequences to return. num_of_sequences (int): Number of token sequences to keep in buffer. chars_per_token (int): Number of characters per token used to estimate number of tokens in text buffer. """ def __init__( self, tokenizer, dataset, infinite=False, max_seq_length=1024, num_of_sequences=1024, chars_per_token=3.6, ): self.tokenizer = tokenizer # self.concat_token_id = tokenizer.bos_token_id self.dataset = dataset self.max_seq_length = max_seq_length self.epoch = 0 self.infinite = infinite self.current_size = 0 self.max_buffer_size = max_seq_length * chars_per_token * num_of_sequences self.content_field = "text" def __iter__(self): iterator = iter(self.dataset) more_examples = True while more_examples: buffer, buffer_len = [], 0 while True: if buffer_len >= self.max_buffer_size: break try: buffer.append(next(iterator)[self.content_field]) buffer_len += len(buffer[-1]) except StopIteration: if self.infinite: iterator = iter(self.dataset) self.epoch += 1 logger.info(f"Dataset epoch: {self.epoch}") else: more_examples = False break tokenized_inputs = self.tokenizer(buffer, truncation=False)["input_ids"] all_token_ids = [] for tokenized_input in tokenized_inputs: all_token_ids.extend(tokenized_input) for i in range(0, len(all_token_ids), self.max_seq_length): input_ids = all_token_ids[i : i + self.max_seq_length] if len(input_ids) == self.max_seq_length: self.current_size += 1 yield torch.tensor(input_ids) def shuffle(self, buffer_size=1000): return ShufflerIterDataPipe(self, buffer_size=buffer_size) def create_dataloaders(tokenizer, accelerator): ds_kwargs = {"streaming": True} # In distributed training, the load_dataset function guarantees that only one process # can concurrently download the dataset. datasets = load_dataset( "c4", "en", cache_dir="cache_dir", **ds_kwargs, ) train_data, valid_data = datasets["train"], datasets["validation"] with accelerator.main_process_first(): train_data = train_data.shuffle(buffer_size=10000, seed=None) train_dataset = ConstantLengthDataset( tokenizer, train_data, infinite=True, max_seq_length=256, ) valid_dataset = ConstantLengthDataset( tokenizer, valid_data, infinite=False, max_seq_length=256, ) train_dataset = train_dataset.shuffle(buffer_size=10000) train_dataloader = DataLoader(train_dataset, batch_size=160, shuffle=True) eval_dataloader = DataLoader(valid_dataset, batch_size=160) return train_dataloader, eval_dataloader def main(): # Accelerator. logging_dir = "data_save_dir/log" accelerator = Accelerator( gradient_accumulation_steps=1, mixed_precision="bf16", log_with="tensorboard", logging_dir=logging_dir, ) # We need to initialize the trackers we use, and also store our configuration. # The trackers initialize automatically on the main process. if accelerator.is_main_process: accelerator.init_trackers("test") tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") # Load datasets and create dataloaders. train_dataloader, _ = create_dataloaders(tokenizer, accelerator) train_dataloader = accelerator.prepare(train_dataloader) for step, batch in enumerate(train_dataloader, start=1): print(step) accelerator.end_training() if __name__ == "__main__": main() ``` ## Expected results Being able to run the code for streaming datasets with multiple GPUs ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.2 - Platform: linux - Python version: 3.9.12 - PyArrow version: 9.0.0 @lhoestq I have no idea why this freeze happens; when I removed streaming mode everything worked fine, so I know it is caused by the streaming part of the dataloader not working well in the multi-GPU setting. Since the datasets are large, I would like to keep streaming mode. I very much appreciate your help.
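One mitigation sometimes suggested for streaming across processes is to shard the stream explicitly per rank instead of sharing one iterator; a minimal sketch, assuming a `datasets` version recent enough to provide `datasets.distributed.split_dataset_by_node` (added after the version reported here):

```python
import os

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Sketch: each process iterates only over its own shard of the stream.
# RANK and WORLD_SIZE are assumed to be set by the launcher (torchrun/accelerate).
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

stream = load_dataset("c4", "en", split="train", streaming=True)
stream = split_dataset_by_node(stream, rank=rank, world_size=world_size)

for i, example in enumerate(stream):
    print(rank, example["text"][:40])
    if i >= 2:
        break
```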
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5123/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5118/comments
https://api.github.com/repos/huggingface/datasets/issues/5118/events
https://github.com/huggingface/datasets/issues/5118
1,410,547,373
I_kwDODunzps5UEz6t
5,118
Installing `datasets` on M1 computers
{ "login": "david1542", "id": 9879252, "node_id": "MDQ6VXNlcjk4NzkyNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david1542", "html_url": "https://github.com/david1542", "followers_url": "https://api.github.com/users/david1542/followers", "following_url": "https://api.github.com/users/david1542/following{/other_user}", "gists_url": "https://api.github.com/users/david1542/gists{/gist_id}", "starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david1542/subscriptions", "organizations_url": "https://api.github.com/users/david1542/orgs", "repos_url": "https://api.github.com/users/david1542/repos", "events_url": "https://api.github.com/users/david1542/events{/privacy}", "received_events_url": "https://api.github.com/users/david1542/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2022-10-16T16:50:08"
"2022-10-19T09:10:08"
"2022-10-19T09:10:08"
CONTRIBUTOR
null
## Describe the bug I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`. On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` would be installed on M1? ## Steps to reproduce the bug Fresh clone this project (on m1), create a virtualenv and run this: ```python pip install -e ".[dev]" ``` ## Expected results Installation should be smooth, and all the dependencies should be installed on M1. ## Actual results You should receive an error, saying pip couldn't find a version that matches this pattern: ``` tensorflow>=2.3,!=2.6.0,!=2.6.1 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.6.2.dev0 - Platform: macOS-12.6-arm64-arm-64bit - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.5.0
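For reference, a minimal sketch of what such a conditional requirement could look like with PEP 508 environment markers (the variable name and exact pins here are illustrative, not the project's actual setup.py):

```python
# Hypothetical setup.py excerpt: choose the TensorFlow distribution per
# platform via PEP 508 environment markers (pins are illustrative).
TF_REQUIRE = [
    "tensorflow>=2.3,!=2.6.0,!=2.6.1; sys_platform != 'darwin' or platform_machine != 'arm64'",
    "tensorflow-macos; sys_platform == 'darwin' and platform_machine == 'arm64'",
]
```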
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5118/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5117/comments
https://api.github.com/repos/huggingface/datasets/issues/5117/events
https://github.com/huggingface/datasets/issues/5117
1,409,571,346
I_kwDODunzps5UBFoS
5,117
Progress bars turn red and never complete to 100%
{ "login": "echatzikyriakidis", "id": 63857529, "node_id": "MDQ6VXNlcjYzODU3NTI5", "avatar_url": "https://avatars.githubusercontent.com/u/63857529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/echatzikyriakidis", "html_url": "https://github.com/echatzikyriakidis", "followers_url": "https://api.github.com/users/echatzikyriakidis/followers", "following_url": "https://api.github.com/users/echatzikyriakidis/following{/other_user}", "gists_url": "https://api.github.com/users/echatzikyriakidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/echatzikyriakidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/echatzikyriakidis/subscriptions", "organizations_url": "https://api.github.com/users/echatzikyriakidis/orgs", "repos_url": "https://api.github.com/users/echatzikyriakidis/repos", "events_url": "https://api.github.com/users/echatzikyriakidis/events{/privacy}", "received_events_url": "https://api.github.com/users/echatzikyriakidis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "david1542", "id": 9879252, "node_id": "MDQ6VXNlcjk4NzkyNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david1542", "html_url": "https://github.com/david1542", "followers_url": "https://api.github.com/users/david1542/followers", "following_url": "https://api.github.com/users/david1542/following{/other_user}", "gists_url": "https://api.github.com/users/david1542/gists{/gist_id}", "starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david1542/subscriptions", "organizations_url": "https://api.github.com/users/david1542/orgs", "repos_url": "https://api.github.com/users/david1542/repos", "events_url": "https://api.github.com/users/david1542/events{/privacy}", "received_events_url": "https://api.github.com/users/david1542/received_events", "type": "User", "site_admin": false }
[ { "login": "david1542", "id": 9879252, "node_id": "MDQ6VXNlcjk4NzkyNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/9879252?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david1542", "html_url": "https://github.com/david1542", "followers_url": "https://api.github.com/users/david1542/followers", "following_url": "https://api.github.com/users/david1542/following{/other_user}", "gists_url": "https://api.github.com/users/david1542/gists{/gist_id}", "starred_url": "https://api.github.com/users/david1542/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david1542/subscriptions", "organizations_url": "https://api.github.com/users/david1542/orgs", "repos_url": "https://api.github.com/users/david1542/repos", "events_url": "https://api.github.com/users/david1542/events{/privacy}", "received_events_url": "https://api.github.com/users/david1542/received_events", "type": "User", "site_admin": false } ]
null
4
"2022-10-14T16:12:30"
"2022-10-23T12:58:41"
"2022-10-23T12:58:41"
NONE
null
## Describe the bug Progress bars after transformative operations turn red and are never completed to 100% ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('rotten_tomatoes', split='test').filter(lambda o: True) ``` ## Expected results The progress bar should reach 100% and stay green ## Actual results The progress bar turns red and never reaches 100% ## Environment info - `datasets` version: 2.6.1 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.14 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
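A red, seemingly stuck bar in notebook frontends often means the tqdm bar object was not closed cleanly rather than that the computation failed; a stopgap sketch, assuming the top-level progress-bar helpers available in recent `datasets` releases:

```python
import datasets
from datasets import load_dataset

# Stopgap: suppress the bars entirely until the rendering issue is resolved.
datasets.disable_progress_bar()

ds = load_dataset("rotten_tomatoes", split="test").filter(lambda o: True)
print(ds)
```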
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5117/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5114
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5114/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5114/comments
https://api.github.com/repos/huggingface/datasets/issues/5114/events
https://github.com/huggingface/datasets/issues/5114
1,409,236,738
I_kwDODunzps5T_z8C
5,114
load_from_disk with remote filesystem fails due to a wrong temporary local folder path
{ "login": "Hubert-Bonisseur", "id": 48770768, "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hubert-Bonisseur", "html_url": "https://github.com/Hubert-Bonisseur", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
2
"2022-10-14T11:54:53"
"2022-11-19T07:13:10"
null
NONE
null
## Describe the bug The function load_from_disk fails when using a remote filesystem because of a wrong temporary local path generated in the load_from_disk method of arrow_dataset.py: ```python if is_remote_filesystem(fs): src_dataset_path = extract_path_from_uri(dataset_path) dataset_path = Dataset._build_local_temp_path(src_dataset_path) fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True) ``` If _dataset_path_ is `gs://speech/mydataset/train`, then _src_dataset_path_ will be `speech/mydataset/train` and _dataset_path_ will be something like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train` Then, after downloading the **folder** _src_dataset_path_, you will get a path like `/var/folders/9s/gf0b/T/tmp6t/speech/mydataset/train/train/state.json` (notice we have train twice) Instead of downloading the remote folder itself, we should download the files inside it for the path to be right: ```python fs.download(os.path.join(src_dataset_path, "*"), dataset_path.as_posix(), recursive=True) ``` ## Steps to reproduce the bug ```python fs = gcsfs.GCSFileSystem(**storage_options) dataset = load_from_disk("common_voice_processed") # loading a dataset previously saved locally, works fine dataset.save_to_disk(output_dir, fs=fs) # works fine dataset = load_from_disk(output_dir, fs=fs) # crashes ``` ## Expected results The dataset is loaded ## Actual results FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/9s/gf0b9jz15d517yrf7m3nvlxr0000gn/T/tmp6t5e221_/speech/datasets/tests/common_voice_processed/train/state.json' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets-2.6.1.dev0 - Platform: macOS Monterey 12.5.1 - Python version: 3.8.13 - PyArrow version: pyarrow==9.0.0
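Until the path handling is fixed, a workaround sketch that bypasses `load_from_disk`'s temp-path logic entirely by downloading the saved folder manually with fsspec (the `speech/mydataset/train` path reuses the illustrative example above):

```python
import os
import tempfile

import gcsfs
from datasets import load_from_disk

# Workaround sketch: fetch the saved dataset folder ourselves, then load locally.
fs = gcsfs.GCSFileSystem()
local_dir = tempfile.mkdtemp()
fs.get("speech/mydataset/train", local_dir, recursive=True)

# Depending on the fsspec version, files may land in local_dir itself or in a
# "train" subfolder; point load_from_disk at wherever state.json ended up.
target = local_dir if os.path.exists(os.path.join(local_dir, "state.json")) \
    else os.path.join(local_dir, "train")
dataset = load_from_disk(target)
```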
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5114/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5114/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5112/comments
https://api.github.com/repos/huggingface/datasets/issues/5112/events
https://github.com/huggingface/datasets/issues/5112
1,409,143,409
I_kwDODunzps5T_dJx
5,112
Bug with filtered indices
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
"2022-10-14T10:35:47"
"2022-10-14T13:55:03"
"2022-10-14T12:11:45"
MEMBER
null
## Describe the bug As reported by @PartiallyTyped (and by @Muennighoff): - https://github.com/huggingface/datasets/issues/5111#issuecomment-1278652524 There is an issue with the indices of a filtered dataset. ## Steps to reproduce the bug ```python ds = Dataset.from_dict({"num": [0, 1, 2, 3]}) ds = ds.filter(lambda num: num % 2 == 0, input_columns="num", batch_size=2) assert all(item["num"] % 2 == 0 for item in ds) ``` ## Expected results The indices of the filtered dataset should correspond only to the examples that satisfy the predicate (here, even values of "num"). ## Actual results Indices of items that do not satisfy the predicate are included in the filtered dataset's indices ## Preliminary investigation It seems to be a bug introduced by: - #5030
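Until the fix lands, a defensive assertion right after `filter` makes the corruption visible immediately; a minimal sketch based on the reproduction above:

```python
from datasets import Dataset

ds = Dataset.from_dict({"num": [0, 1, 2, 3]})
filtered = ds.filter(lambda num: num % 2 == 0, input_columns="num", batch_size=2)

# Materialize the filtered rows and verify the predicate actually held.
leaked = [item for item in filtered if item["num"] % 2 != 0]
assert not leaked, f"filter kept {len(leaked)} non-matching rows: {leaked}"
```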
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5112/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5112/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5111/comments
https://api.github.com/repos/huggingface/datasets/issues/5111/events
https://github.com/huggingface/datasets/issues/5111
1,408,143,170
I_kwDODunzps5T7o9C
5,111
map and filter not working properly in multiprocessing with the new release 2.6.0
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
14
"2022-10-13T17:00:55"
"2022-10-17T08:26:59"
"2022-10-14T14:59:59"
NONE
null
## Describe the bug When `map` is run on a dataset with more than one process, `filter` then behaves strangely: only the samples from one worker seem to be retrieved, and one needs to specify the same `num_proc` in `filter` for it to work properly. This doesn't happen with `datasets` version 2.5.2. In the code below the data is filtered differently when we increase the `num_proc` used in `map`, although the datasets before and after mapping have identical elements. ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset def preprocess(example): return example ds = load_dataset("codeparrot/codeparrot-clean-valid", split="train").select([i for i in range(10)]) ds1 = ds.map(preprocess, num_proc=2) ds2 = ds.map(preprocess) # the datasets elements are the same for i in range(len(ds1)): assert ds1[i]==ds2[i] print(f'Target column before filtering {ds1["autogenerated"]}') print(f'Target column before filtering {ds2["autogenerated"]}') print(f"datasets version {datasets.__version__}") ds_filtered_1 = ds1.filter(lambda x: not x["autogenerated"]) ds_filtered_2 = ds2.filter(lambda x: not x["autogenerated"]) # all elements in the target column are False so they should all be kept, but for ds1 (mapped with num_proc=2) only the first 5 = num_samples/num_proc are kept print(ds_filtered_1) print(ds_filtered_2) ``` ``` Target column before filtering [False, False, False, False, False, False, False, False, False, False] Target column before filtering [False, False, False, False, False, False, False, False, False, False] Dataset({ features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'], num_rows: 5 }) Dataset({ features: ['repo_name', 'path', 'copies', 'size', 'content', 'license', 'hash', 'line_mean', 'line_max', 'alpha_frac', 'autogenerated'], num_rows: 10 }) ``` ## Expected results Increasing `num_proc` in mapping shouldn't alter filtering. With the previous version 2.5.2 this doesn't happen ## Actual results Filtering drops rows when `map` was run with a larger `num_proc` than the subsequent `filter` call ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.6.0 - Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28 - Python version: 3.9.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
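The workaround stated above, matching `num_proc` between `map` and `filter`, as a short sketch on the same objects:

```python
# Workaround: pass the same num_proc to filter that was used in map.
ds_filtered_1 = ds1.filter(lambda x: not x["autogenerated"], num_proc=2)
print(ds_filtered_1.num_rows)  # expected: 10, i.e. all rows kept
```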
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5111/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5111/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5109/comments
https://api.github.com/repos/huggingface/datasets/issues/5109/events
https://github.com/huggingface/datasets/issues/5109
1,407,434,706
I_kwDODunzps5T47_S
5,109
Map caching not working for some class methods
{ "login": "Mouhanedg56", "id": 23029765, "node_id": "MDQ6VXNlcjIzMDI5NzY1", "avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mouhanedg56", "html_url": "https://github.com/Mouhanedg56", "followers_url": "https://api.github.com/users/Mouhanedg56/followers", "following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}", "gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions", "organizations_url": "https://api.github.com/users/Mouhanedg56/orgs", "repos_url": "https://api.github.com/users/Mouhanedg56/repos", "events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}", "received_events_url": "https://api.github.com/users/Mouhanedg56/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
"2022-10-13T09:12:58"
"2022-10-17T10:38:45"
"2022-10-17T10:38:45"
CONTRIBUTOR
null
## Describe the bug Cache loading is not working as expected for some class methods of objects that store a model in an attribute. The new fingerprint for `_map_single` is not the same at each run: the hasher generates a different hash for the class method. This comes from the `dumps` function in `datasets.utils.py_utils`, which produces a different dump at each run. ## Steps to reproduce the bug ```python from datasets import load_dataset from transformers import AutoConfig, AutoModel, AutoTokenizer dataset = load_dataset("ethos", "binary") BASE_MODELNAME = "sentence-transformers/all-MiniLM-L6-v2" class Object: def __init__(self): config = AutoConfig.from_pretrained(BASE_MODELNAME) self.bert = AutoModel.from_config(config=config, add_pooling_layer=False) self.tok = AutoTokenizer.from_pretrained(BASE_MODELNAME) def tokenize(self, examples): tokenized_texts = self.tok( examples["text"], padding="max_length", truncation=True, max_length=256, ) return tokenized_texts instance = Object() result = dict() for phase in ["train"]: result[phase] = dataset[phase].map(instance.tokenize, batched=True, load_from_cache_file=True, num_proc=2) ``` ## Expected results The cached result should be loaded instead of being recomputed. ## Actual results The result is recomputed from scratch at each run. The cache works fine when the `bert` attribute is deleted. ## Environment info - `datasets` version: 2.5.3.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.13 - PyArrow version: 7.0.0 - Pandas version: 1.5.0
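A workaround sketch grounded in the observation above that the cache behaves once `bert` is removed: keep only what the map function actually needs on the object being hashed (the class name here is hypothetical):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("ethos", "binary")
BASE_MODELNAME = "sentence-transformers/all-MiniLM-L6-v2"

class Tokenize:
    """Holds only the tokenizer, so pickling/hashing it is deterministic."""
    def __init__(self):
        self.tok = AutoTokenizer.from_pretrained(BASE_MODELNAME)

    def __call__(self, examples):
        return self.tok(examples["text"], padding="max_length",
                        truncation=True, max_length=256)

tokenize = Tokenize()
result = {phase: dataset[phase].map(tokenize, batched=True,
                                    load_from_cache_file=True, num_proc=2)
          for phase in ["train"]}
```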
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5109/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5105/comments
https://api.github.com/repos/huggingface/datasets/issues/5105/events
https://github.com/huggingface/datasets/issues/5105
1,406,078,357
I_kwDODunzps5Tzw2V
5,105
Specifying an existing folder in download_and_prepare deletes everything in it
{ "login": "cakiki", "id": 3664563, "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cakiki", "html_url": "https://github.com/cakiki", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "organizations_url": "https://api.github.com/users/cakiki/orgs", "repos_url": "https://api.github.com/users/cakiki/repos", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "received_events_url": "https://api.github.com/users/cakiki/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
5
"2022-10-12T11:53:33"
"2022-10-20T11:53:59"
null
CONTRIBUTOR
null
## Describe the bug The builder correctly creates the `output_dir` folder if it doesn't exist, but if the folder exists, everything within it is deleted. Specifying `"."` as the `output_dir` deletes everything in your current dir, but also leads to **another bug** whose traceback is the following: ``` Traceback (most recent call last) Input In [11], in <cell line: 1>() ----> 1 rotten_tomatoes_builder.download_and_prepare(output_dir=".", max_shard_size="200MB", file_format="parquet") File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:818, in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs) File /usr/lib/python3.9/contextlib.py:124, in _GeneratorContextManager.__exit__(self, type, value, traceback) 122 if type is None: 123 try: --> 124 next(self.gen) 125 except StopIteration: 126 return False File ~/BIGSCIENCE/env/lib/python3.9/site-packages/datasets/builder.py:760, in incomplete_dir(dirname) File /usr/lib/python3.9/shutil.py:722, in rmtree(path, ignore_errors, onerror) 720 os.rmdir(path) 721 except OSError: --> 722 onerror(os.rmdir, path, sys.exc_info()) 723 else: 724 try: 725 # symlinks to directories are forbidden, see bug #1669 File /usr/lib/python3.9/shutil.py:720, in rmtree(path, ignore_errors, onerror) 718 _rmtree_safe_fd(fd, path, onerror) 719 try: --> 720 os.rmdir(path) 721 except OSError: 722 onerror(os.rmdir, path, sys.exc_info()) OSError: [Errno 22] Invalid argument: '/home/christopher/BIGSCIENCE/.' ``` ## Steps to reproduce the bug ```python rotten_tomatoes_builder = load_dataset_builder("rotten_tomatoes") rotten_tomatoes_builder.download_and_prepare(output_dir="./test_folder", max_shard_size="200MB", file_format="parquet") ``` If `test_folder` contains any files, they will all be deleted. ## Expected results At the very least a warning that all files will be deleted; preferably, existing files should not be deleted at all. ## Actual results N/A ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
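Until this is fixed, a defensive pattern is to always hand `download_and_prepare` a dedicated, dataset-specific subdirectory rather than an existing folder; a sketch based on the reproduction above:

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("rotten_tomatoes")
builder.download_and_prepare(
    output_dir="./prepared/rotten_tomatoes",  # created if missing; nothing else lives here
    max_shard_size="200MB",
    file_format="parquet",
)
```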
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5105/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5102/comments
https://api.github.com/repos/huggingface/datasets/issues/5102/events
https://github.com/huggingface/datasets/issues/5102
1,404,746,554
I_kwDODunzps5Turs6
5,102
Error in create a dataset from a Python generator
{ "login": "yangxuhui", "id": 9004682, "node_id": "MDQ6VXNlcjkwMDQ2ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/9004682?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangxuhui", "html_url": "https://github.com/yangxuhui", "followers_url": "https://api.github.com/users/yangxuhui/followers", "following_url": "https://api.github.com/users/yangxuhui/following{/other_user}", "gists_url": "https://api.github.com/users/yangxuhui/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangxuhui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangxuhui/subscriptions", "organizations_url": "https://api.github.com/users/yangxuhui/orgs", "repos_url": "https://api.github.com/users/yangxuhui/repos", "events_url": "https://api.github.com/users/yangxuhui/events{/privacy}", "received_events_url": "https://api.github.com/users/yangxuhui/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false }
[ { "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-10-11T14:28:58"
"2022-10-12T11:31:56"
"2022-10-12T11:31:56"
NONE
null
## Describe the bug In HOW-TO-GUIDES > Load > [Python generator](https://huggingface.co/docs/datasets/v2.5.2/en/loading#python-generator), the code example defines the `my_gen` function, but when creating the dataset, an undefined `my_dict` is passed in. ```Python >>> from datasets import Dataset >>> def my_gen(): ... for i in range(1, 4): ... yield {"a": i} >>> dataset = Dataset.from_generator(my_dict) ```
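For completeness, the corrected documentation snippet presumably passes the generator function itself:

```python
from datasets import Dataset

def my_gen():
    for i in range(1, 4):
        yield {"a": i}

dataset = Dataset.from_generator(my_gen)  # my_gen, not the undefined my_dict
print(dataset["a"])  # [1, 2, 3]
```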
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5102/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5100/comments
https://api.github.com/repos/huggingface/datasets/issues/5100/events
https://github.com/huggingface/datasets/issues/5100
1,404,458,586
I_kwDODunzps5TtlZa
5,100
datasets[s3] sagemaker can't run a model - datasets issue with Value and ClassLabel and cast() method
{ "login": "jagochi", "id": 115545475, "node_id": "U_kgDOBuMVgw", "avatar_url": "https://avatars.githubusercontent.com/u/115545475?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jagochi", "html_url": "https://github.com/jagochi", "followers_url": "https://api.github.com/users/jagochi/followers", "following_url": "https://api.github.com/users/jagochi/following{/other_user}", "gists_url": "https://api.github.com/users/jagochi/gists{/gist_id}", "starred_url": "https://api.github.com/users/jagochi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jagochi/subscriptions", "organizations_url": "https://api.github.com/users/jagochi/orgs", "repos_url": "https://api.github.com/users/jagochi/repos", "events_url": "https://api.github.com/users/jagochi/events{/privacy}", "received_events_url": "https://api.github.com/users/jagochi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-10-11T11:16:31"
"2022-10-11T13:48:26"
"2022-10-11T13:48:26"
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5100/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5099/comments
https://api.github.com/repos/huggingface/datasets/issues/5099/events
https://github.com/huggingface/datasets/issues/5099
1,404,370,191
I_kwDODunzps5TtP0P
5,099
datasets doesn't support # in data paths
{ "login": "loubnabnl", "id": 44069155, "node_id": "MDQ6VXNlcjQ0MDY5MTU1", "avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loubnabnl", "html_url": "https://github.com/loubnabnl", "followers_url": "https://api.github.com/users/loubnabnl/followers", "following_url": "https://api.github.com/users/loubnabnl/following{/other_user}", "gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}", "starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions", "organizations_url": "https://api.github.com/users/loubnabnl/orgs", "repos_url": "https://api.github.com/users/loubnabnl/repos", "events_url": "https://api.github.com/users/loubnabnl/events{/privacy}", "received_events_url": "https://api.github.com/users/loubnabnl/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false }
[ { "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false } ]
null
9
"2022-10-11T10:05:32"
"2022-10-13T13:14:20"
"2022-10-13T13:14:20"
NONE
null
## Describe the bug Dataset files whose paths contain the `#` symbol aren't read correctly. The data in the folder `c#` of this [dataset](https://huggingface.co/datasets/loubnabnl/bigcode_csharp) can't be loaded, while the folder `c_sharp` containing the same data loads properly. ```python ds = load_dataset('loubnabnl/bigcode_csharp', split="train", data_files=["data/c#/*"]) ``` ``` FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/loubnabnl/bigcode_csharp/resolve/27a3166cff4bb18e11919cafa6f169c0f57483de/data/c#/data_0003.jsonl ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.2 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.3 cc @lhoestq
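A likely root cause, as a small illustration: in a URL, `#` starts the fragment, so an unencoded path is truncated at `data/c`; percent-encoding the path component avoids this.

```python
from urllib.parse import quote

path = "data/c#/data_0003.jsonl"
print(quote(path, safe="/"))  # data/c%23/data_0003.jsonl
```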
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5099/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5098/comments
https://api.github.com/repos/huggingface/datasets/issues/5098/events
https://github.com/huggingface/datasets/issues/5098
1,404,058,518
I_kwDODunzps5TsDuW
5,098
Class label error when loading symbolic links using imagefolder
{ "login": "horizon86", "id": 49552732, "node_id": "MDQ6VXNlcjQ5NTUyNzMy", "avatar_url": "https://avatars.githubusercontent.com/u/49552732?v=4", "gravatar_id": "", "url": "https://api.github.com/users/horizon86", "html_url": "https://github.com/horizon86", "followers_url": "https://api.github.com/users/horizon86/followers", "following_url": "https://api.github.com/users/horizon86/following{/other_user}", "gists_url": "https://api.github.com/users/horizon86/gists{/gist_id}", "starred_url": "https://api.github.com/users/horizon86/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/horizon86/subscriptions", "organizations_url": "https://api.github.com/users/horizon86/orgs", "repos_url": "https://api.github.com/users/horizon86/repos", "events_url": "https://api.github.com/users/horizon86/events{/privacy}", "received_events_url": "https://api.github.com/users/horizon86/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false }
[ { "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false } ]
null
3
"2022-10-11T06:10:58"
"2022-11-14T14:40:20"
"2022-11-14T14:40:20"
NONE
null
**Is your feature request related to a problem? Please describe.**

Like this: #4015

When there are **symbolic links** to pictures in the data folder, the parent folder name of the **real file** is used as the class name instead of the parent folder of the symbolic link itself. Could you add an option to decide whether symbolic links are followed? The current behavior is inconsistent with `torchvision.datasets.ImageFolder`.

For example:

![image](https://user-images.githubusercontent.com/49552732/195008591-3cce644e-aabe-4f39-90b9-832861cadb3d.png)
![image](https://user-images.githubusercontent.com/49552732/195008841-0b0c2289-eb7f-411a-977b-37426f23a277.png)

It uses `others` (green circle) as the class label rather than `abnormal`; I wish `load_dataset` would not use the real file's parent folder as the label.
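A minimal sketch of the discrepancy, with hypothetical paths: `Path.parent` keeps the symlink's own directory (the label `torchvision.datasets.ImageFolder` effectively uses), while resolving the link first yields the real file's directory (the label `load_dataset` currently uses).

```python
import os
from pathlib import Path

# Hypothetical layout: data/abnormal/img_0001.jpg is a symlink
# pointing at data/others/img_0001.jpg.
link = Path("data/abnormal/img_0001.jpg")
real = Path(os.path.realpath(link))  # follows the symlink to the real file

print(link.parent.name)  # "abnormal" -- label based on the symlink itself
print(real.parent.name)  # "others"   -- label based on the resolved target
```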
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5098/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5097/comments
https://api.github.com/repos/huggingface/datasets/issues/5097/events
https://github.com/huggingface/datasets/issues/5097
1,403,679,353
I_kwDODunzps5TqnJ5
5,097
Fatal error with pyarrow/libarrow.so
{ "login": "catalys1", "id": 11340846, "node_id": "MDQ6VXNlcjExMzQwODQ2", "avatar_url": "https://avatars.githubusercontent.com/u/11340846?v=4", "gravatar_id": "", "url": "https://api.github.com/users/catalys1", "html_url": "https://github.com/catalys1", "followers_url": "https://api.github.com/users/catalys1/followers", "following_url": "https://api.github.com/users/catalys1/following{/other_user}", "gists_url": "https://api.github.com/users/catalys1/gists{/gist_id}", "starred_url": "https://api.github.com/users/catalys1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/catalys1/subscriptions", "organizations_url": "https://api.github.com/users/catalys1/orgs", "repos_url": "https://api.github.com/users/catalys1/repos", "events_url": "https://api.github.com/users/catalys1/events{/privacy}", "received_events_url": "https://api.github.com/users/catalys1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
"2022-10-10T20:29:04"
"2022-10-11T06:56:01"
"2022-10-11T06:56:00"
NONE
null
## Describe the bug When using datasets, at the very end of my jobs the program crashes (see trace below). It doesn't seem to affect anything, as it appears to happen as the program is closing down. Just importing `datasets` is enough to cause the error. ## Steps to reproduce the bug This is sufficient to reproduce the problem: ```bash python -c "import datasets" ``` ## Expected results Program should run to completion without an error. ## Actual results ```bash Fatal error condition occurred in /opt/vcpkg/buildtrees/aws-c-io/src/9e6648842a-364b708815.clean/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS Exiting Application ################################################################################ Stack trace: ################################################################################ /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200af06) [0x150dff547f06] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x20028e5) [0x150dff53f8e5] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f27e09) [0x150dff464e09] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f25948) [0x150dff462948] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1ee0b46) [0x150dff41db46] /u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x194546a) [0x150dfee8246a] /lib64/libc.so.6(+0x39b0c) [0x150e15eadb0c] /lib64/libc.so.6(on_exit+0) [0x150e15eadc40] /u/user/miniconda3/envs/env/bin/python(+0x28db18) [0x560ae370eb18] /u/user/miniconda3/envs/env/bin/python(+0x28db4b) [0x560ae370eb4b] /u/user/miniconda3/envs/env/bin/python(+0x28db90) [0x560ae370eb90] /u/user/miniconda3/envs/env/bin/python(_PyRun_SimpleFileObject+0x1e6) [0x560ae37123e6] /u/user/miniconda3/envs/env/bin/python(_PyRun_AnyFileObject+0x44) [0x560ae37124c4] /u/user/miniconda3/envs/env/bin/python(Py_RunMain+0x35d) [0x560ae37135bd] /u/user/miniconda3/envs/env/bin/python(Py_BytesMain+0x39) [0x560ae37137d9] /lib64/libc.so.6(__libc_start_main+0xf3) [0x150e15e97493] /u/user/miniconda3/envs/env/bin/python(+0x2125d4) [0x560ae36935d4] Aborted (core dumped) ``` ## Environment info - `datasets` version: 2.5.1 - Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.4 - PyArrow version: 9.0.0 - Pandas version: 1.4.3
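If this is the known shutdown bug in the aws-c-io code bundled with some PyArrow builds (which the `event_loop.c` assertion suggests), upgrading or reinstalling PyArrow has reportedly made it go away. This is a hedged suggestion, not a confirmed fix for this exact environment:

```bash
# Assumption: the abort comes from the bundled aws-c-io shutdown bug.
pip install --upgrade pyarrow
# or, in a conda environment:
conda install -c conda-forge pyarrow
```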
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5097/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5096/comments
https://api.github.com/repos/huggingface/datasets/issues/5096/events
https://github.com/huggingface/datasets/issues/5096
1,403,379,816
I_kwDODunzps5TpeBo
5,096
Transfer some canonical datasets under an organization namespace
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4564477500, "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution", "name": "dataset contribution", "color": "0e8a16", "default": false, "description": "Contribution to a dataset script" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-10-10T15:44:31"
"2023-01-18T16:30:12"
null
MEMBER
null
As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if this does not exist). On the contrary, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and eventually delete it). First, we should test it using a dummy dataset/organization. TODO: - [x] Test with a dummy dataset - [x] Create dummy canonical dataset: https://huggingface.co/datasets/dummy_canonical_dataset - [x] Create dummy organization: https://huggingface.co/dummy-canonical-org - [x] Transfer dummy canonical dataset to dummy organization - [ ] Transfer datasets - [ ] gem => GEM - [x] indonlu => indonlp - [ ] multilingual_librispeech => facebook - It already exists "facebook/multilingual_librispeech" - [ ] oscar => oscar-corpus - [x] qasper => allenai - [x] swiss_judgment_prediction => rcds - [ ] wmt14, wmt15, wmt16, wmt17, wmt18, wmt19,... => wmt - ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5096/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5096/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5094/comments
https://api.github.com/repos/huggingface/datasets/issues/5094/events
https://github.com/huggingface/datasets/issues/5094
1,403,214,950
I_kwDODunzps5To1xm
5,094
Multiprocessing with `Dataset.map` and `PyTorch` results in deadlock
{ "login": "RR-28023", "id": 36822895, "node_id": "MDQ6VXNlcjM2ODIyODk1", "avatar_url": "https://avatars.githubusercontent.com/u/36822895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RR-28023", "html_url": "https://github.com/RR-28023", "followers_url": "https://api.github.com/users/RR-28023/followers", "following_url": "https://api.github.com/users/RR-28023/following{/other_user}", "gists_url": "https://api.github.com/users/RR-28023/gists{/gist_id}", "starred_url": "https://api.github.com/users/RR-28023/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RR-28023/subscriptions", "organizations_url": "https://api.github.com/users/RR-28023/orgs", "repos_url": "https://api.github.com/users/RR-28023/repos", "events_url": "https://api.github.com/users/RR-28023/events{/privacy}", "received_events_url": "https://api.github.com/users/RR-28023/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
8
"2022-10-10T13:50:56"
"2022-10-18T16:18:53"
null
NONE
null
## Describe the bug

There seems to be an issue with using multiprocessing with `datasets.Dataset.map` (i.e. setting `num_proc` to a value greater than one) combined with a function that uses `torch` under the hood. The subprocesses that `datasets.Dataset.map` spawns [at this step](https://github.com/huggingface/datasets/blob/1b935dab9d2f171a8c6294269421fe967eb55e34/src/datasets/arrow_dataset.py#L2663) go into wait mode forever.

## Steps to reproduce the bug

The code below deadlocks when `NUMBER_OF_PROCESSES` is greater than one.

```python
NUMBER_OF_PROCESSES = 2

from transformers import AutoTokenizer, AutoModel
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model.to("cpu")

def cls_pooling(model_output):
    return model_output.last_hidden_state[:, 0]

def generate_embeddings_batched(examples):
    sentences_batch = list(examples['sentence1'])
    encoded_input = tokenizer(
        sentences_batch, padding=True, truncation=True, return_tensors="pt"
    )
    encoded_input = {k: v.to("cpu") for k, v in encoded_input.items()}
    model_output = model(**encoded_input)
    embeddings = cls_pooling(model_output)
    examples['embeddings'] = embeddings.detach().cpu().numpy()  # 64, 384
    return examples

embeddings_dataset = dataset.map(
    generate_embeddings_batched,
    batched=True,
    batch_size=10,
    num_proc=NUMBER_OF_PROCESSES
)
```

While debugging it I've seen that it gets "stuck" when calling `torch.nn.Embedding.forward`, but some testing shows that the same happens with other functions from `torch.nn`.

## Environment info
- Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.31
- Python version: 3.9.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.0

Not sure if this is a HF problem, a PyTorch problem or something I'm doing wrong. Thanks!
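One workaround that has reportedly helped in similar fork-plus-torch hangs is to cap torch's intra-op thread pool before `map` forks its workers. A sketch under that assumption, reusing the names from the snippet above:

```python
import torch

# Assumption: the deadlock comes from fork-unsafe torch thread pools
# inherited by the worker processes. This is a workaround, not a fix.
torch.set_num_threads(1)

embeddings_dataset = dataset.map(
    generate_embeddings_batched,
    batched=True,
    batch_size=10,
    num_proc=NUMBER_OF_PROCESSES,
)
```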
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5094/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5093/comments
https://api.github.com/repos/huggingface/datasets/issues/5093/events
https://github.com/huggingface/datasets/issues/5093
1,402,939,660
I_kwDODunzps5TnykM
5,093
Mismatch between tutorial and docs
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false }
[ { "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false } ]
null
3
"2022-10-10T10:23:53"
"2022-10-10T17:51:15"
"2022-10-10T17:51:14"
CONTRIBUTOR
null
## Describe the bug In the "Process text data" tutorial, [`map` has `return_tensors` as kwarg](https://huggingface.co/docs/datasets/main/en/nlp_process#map). It does not seem to appear in the [function documentation](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map), nor to work. ## Steps to reproduce the bug MWE: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") from datasets import load_dataset dataset = load_dataset("lhoestq/demo1", split="train") dataset = dataset.map(lambda examples: tokenizer(examples["review"]), batched=True, return_tensors="pt") ``` ## Expected results return_tensors to be a valid kwarg :smiley: ## Actual results ```python >> TypeError: map() got an unexpected keyword argument 'return_tensors' ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.14.0-1052-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5093/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5090/comments
https://api.github.com/repos/huggingface/datasets/issues/5090/events
https://github.com/huggingface/datasets/issues/5090
1,401,102,407
I_kwDODunzps5TgyBH
5,090
Review sync issues from GitHub to Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2022-10-07T12:31:56"
"2022-10-08T07:07:36"
"2022-10-08T07:07:36"
MEMBER
null
## Describe the bug We have discovered that sometimes there were sync issues between GitHub and Hub datasets, after a merge commit to main branch. For example: - this merge commit: https://github.com/huggingface/datasets/commit/d74a9e8e4bfff1fed03a4cab99180a841d7caf4b - was not properly synced with the Hub: https://github.com/huggingface/datasets/actions/runs/3002495269/jobs/4819769684 ``` [main 9e641de] Add Papers with Code ID to scifact dataset (#4941) Author: Albert Villanova del Moral <albertvillanova@users.noreply.huggingface.co> 1 file changed, 42 insertions(+), 14 deletions(-) push failed ! GitCommandError(['git', 'push'], 1, b'remote: ---------------------------------------------------------- \nremote: Sorry, your push was rejected during YAML metadata verification: \nremote: - Error: "license" does not match any of the allowed types \nremote: ---------------------------------------------------------- \nremote: Please find the documentation at: \nremote: https://huggingface.co/docs/hub/models-cards#model-card-metadata \nremote: ---------------------------------------------------------- \nTo [https://huggingface.co/datasets/scifact.git\n](https://huggingface.co/datasets/scifact.git/n) ! [remote rejected] main -> main (pre-receive hook declined)\nerror: failed to push some refs to \'[https://huggingface.co/datasets/scifact.git\](https://huggingface.co/datasets/scifact.git/)'', b'') ``` We are reviewing sync issues in previous commits to recover them and repushing to the Hub. TODO: Review - [x] #4941 - scifact - [x] #4931 - scifact - [x] #4753 - wikipedia - [x] #4554 - wmt17, wmt19, wmt_t2t - Fixed with "Release 2.4.0" commit: https://github.com/huggingface/datasets/commit/401d4c4f9b9594cb6527c599c0e7a72ce1a0ea49 - https://huggingface.co/datasets/wmt17/commit/5c0afa83fbbd3508ff7627c07f1b27756d1379ea - https://huggingface.co/datasets/wmt19/commit/b8ad5bf1960208a376a0ab20bc8eac9638f7b400 - https://huggingface.co/datasets/wmt_t2t/commit/b6d67191804dd0933476fede36754a436b48d1fc - [x] #4607 - [x] #4416 - lccc - Fixed with "Release 2.3.0" commit: https://huggingface.co/datasets/lccc/commit/8b1f8cf425b5653a0a4357a53205aac82ce038d1 - [x] #4367
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5090/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5089/comments
https://api.github.com/repos/huggingface/datasets/issues/5089/events
https://github.com/huggingface/datasets/issues/5089
1,400,788,486
I_kwDODunzps5TflYG
5,089
Resume failed process
{ "login": "felix-schneider", "id": 208336, "node_id": "MDQ6VXNlcjIwODMzNg==", "avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felix-schneider", "html_url": "https://github.com/felix-schneider", "followers_url": "https://api.github.com/users/felix-schneider/followers", "following_url": "https://api.github.com/users/felix-schneider/following{/other_user}", "gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}", "starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions", "organizations_url": "https://api.github.com/users/felix-schneider/orgs", "repos_url": "https://api.github.com/users/felix-schneider/repos", "events_url": "https://api.github.com/users/felix-schneider/events{/privacy}", "received_events_url": "https://api.github.com/users/felix-schneider/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
"2022-10-07T08:07:03"
"2022-10-07T08:07:03"
null
NONE
null
**Is your feature request related to a problem? Please describe.** When a process (`map`, `filter`, etc.) crashes part-way through, you lose all progress. **Describe the solution you'd like** It would be good if the cache reflected the partial progress, so that after we restart the script, the process can restart where it left off. **Describe alternatives you've considered** Doing processing outside of `datasets`, by writing the dataset to json files and building a restart mechanism myself. **Additional context** N/A
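As a rough illustration of the alternative above, a manual-restart sketch (with a hypothetical `process_fn` and checkpoint paths): split the work into shards, persist each finished shard, and on restart skip shards that already exist.

```python
import os
from datasets import concatenate_datasets, load_dataset, load_from_disk

dataset = load_dataset("glue", "mrpc", split="train")
num_shards = 8

def process_fn(example):  # hypothetical processing step
    return example

shards = []
for i in range(num_shards):
    shard_dir = f"checkpoints/shard_{i}"
    if not os.path.isdir(shard_dir):  # only redo shards that never finished
        dataset.shard(num_shards=num_shards, index=i).map(process_fn).save_to_disk(shard_dir)
    shards.append(load_from_disk(shard_dir))

result = concatenate_datasets(shards)
```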
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5089/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5088/comments
https://api.github.com/repos/huggingface/datasets/issues/5088/events
https://github.com/huggingface/datasets/issues/5088
1,400,530,412
I_kwDODunzps5TemXs
5,088
load_datasets("json", ...) don't read local .json.gz properly
{ "login": "junwang-wish", "id": 112650299, "node_id": "U_kgDOBrboOw", "avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/junwang-wish", "html_url": "https://github.com/junwang-wish", "followers_url": "https://api.github.com/users/junwang-wish/followers", "following_url": "https://api.github.com/users/junwang-wish/following{/other_user}", "gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}", "starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions", "organizations_url": "https://api.github.com/users/junwang-wish/orgs", "repos_url": "https://api.github.com/users/junwang-wish/repos", "events_url": "https://api.github.com/users/junwang-wish/events{/privacy}", "received_events_url": "https://api.github.com/users/junwang-wish/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
2
"2022-10-07T02:16:58"
"2022-10-07T14:43:16"
null
NONE
null
## Describe the bug

I have a local `*.json.gz` file that can be read by `pandas.read_json(lines=True)`, but `load_dataset("json")` reads 0 lines from it.

## Steps to reproduce the bug

```python
import pandas as pd
from datasets import Dataset, DatasetDict, Features, Value, load_dataset

fpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'

ds_panda = DatasetDict(
    test=Dataset.from_pandas(
        pd.read_json(fpath, lines=True)
    )
)
ds_direct = load_dataset(
    'json',
    data_files={
        'test': fpath
    },
    features=Features(
        text_input=Value(dtype="string", id=None),
        text_output=Value(dtype="string", id=None)
    )
)
len(ds_panda['test']), len(ds_direct['test'])
```

## Expected results

The lengths of `ds_panda['test']` and `ds_direct['test']` should match.

## Actual results

```
Using custom data configuration default-c0ef2598760968aa
Downloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...
Dataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.
(62087, 0)
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.8.13
- PyArrow version: 9.0.0
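A quick sanity check that may help narrow this down (reusing `fpath` from above): count the raw lines in the gzip stream. If the count matches the pandas result, the JSON builder is the one dropping rows.

```python
import gzip

# Count the raw JSON lines in the compressed file.
with gzip.open(fpath, "rt") as f:
    n_lines = sum(1 for _ in f)
print(n_lines)  # expected to match len(ds_panda["test"])
```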
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5088/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5088/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5086/comments
https://api.github.com/repos/huggingface/datasets/issues/5086/events
https://github.com/huggingface/datasets/issues/5086
1,400,216,975
I_kwDODunzps5TdZ2P
5,086
HTTPError: 404 Client Error: Not Found for url
{ "login": "km5ar", "id": 54015474, "node_id": "MDQ6VXNlcjU0MDE1NDc0", "avatar_url": "https://avatars.githubusercontent.com/u/54015474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/km5ar", "html_url": "https://github.com/km5ar", "followers_url": "https://api.github.com/users/km5ar/followers", "following_url": "https://api.github.com/users/km5ar/following{/other_user}", "gists_url": "https://api.github.com/users/km5ar/gists{/gist_id}", "starred_url": "https://api.github.com/users/km5ar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/km5ar/subscriptions", "organizations_url": "https://api.github.com/users/km5ar/orgs", "repos_url": "https://api.github.com/users/km5ar/repos", "events_url": "https://api.github.com/users/km5ar/events{/privacy}", "received_events_url": "https://api.github.com/users/km5ar/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
3
"2022-10-06T19:48:58"
"2022-10-07T15:12:01"
"2022-10-07T15:12:01"
NONE
null
## Describe the bug

I was following chapter 5 of the Hugging Face course: https://huggingface.co/course/chapter5/6?fw=tf

However, I'm not able to download the dataset; the request fails with a 404 error.

<img width="1160" alt="iShot2022-10-06_15 54 50" src="https://user-images.githubusercontent.com/54015474/194406327-ae62c2f3-1da5-4686-8631-13d879a0edee.png">

## Steps to reproduce the bug

```python
from huggingface_hub import hf_hub_url
from datasets import load_dataset

data_files = hf_hub_url(
    repo_id="lewtun/github-issues",
    filename="datasets-issues-with-hf-doc-builder.jsonl",
    repo_type="dataset",
)

issues_dataset = load_dataset("json", data_files=data_files, split="train")
issues_dataset
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
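A hedged way to debug the 404: list the files actually present in the repository, since the filename hard-coded in the course snippet may have been renamed or removed.

```python
from huggingface_hub import list_repo_files

# Enumerate the dataset repo's files and pick the real .jsonl filename
# to pass to hf_hub_url instead of the hard-coded one.
files = list_repo_files("lewtun/github-issues", repo_type="dataset")
print(files)
```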
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5086/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5085/comments
https://api.github.com/repos/huggingface/datasets/issues/5085/events
https://github.com/huggingface/datasets/issues/5085
1,400,113,569
I_kwDODunzps5TdAmh
5,085
Filtering on an empty dataset returns a corrupted dataset.
{ "login": "gabegma", "id": 36087158, "node_id": "MDQ6VXNlcjM2MDg3MTU4", "avatar_url": "https://avatars.githubusercontent.com/u/36087158?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gabegma", "html_url": "https://github.com/gabegma", "followers_url": "https://api.github.com/users/gabegma/followers", "following_url": "https://api.github.com/users/gabegma/following{/other_user}", "gists_url": "https://api.github.com/users/gabegma/gists{/gist_id}", "starred_url": "https://api.github.com/users/gabegma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gabegma/subscriptions", "organizations_url": "https://api.github.com/users/gabegma/orgs", "repos_url": "https://api.github.com/users/gabegma/repos", "events_url": "https://api.github.com/users/gabegma/events{/privacy}", "received_events_url": "https://api.github.com/users/gabegma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "Mouhanedg56", "id": 23029765, "node_id": "MDQ6VXNlcjIzMDI5NzY1", "avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mouhanedg56", "html_url": "https://github.com/Mouhanedg56", "followers_url": "https://api.github.com/users/Mouhanedg56/followers", "following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}", "gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions", "organizations_url": "https://api.github.com/users/Mouhanedg56/orgs", "repos_url": "https://api.github.com/users/Mouhanedg56/repos", "events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}", "received_events_url": "https://api.github.com/users/Mouhanedg56/received_events", "type": "User", "site_admin": false }
[ { "login": "Mouhanedg56", "id": 23029765, "node_id": "MDQ6VXNlcjIzMDI5NzY1", "avatar_url": "https://avatars.githubusercontent.com/u/23029765?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mouhanedg56", "html_url": "https://github.com/Mouhanedg56", "followers_url": "https://api.github.com/users/Mouhanedg56/followers", "following_url": "https://api.github.com/users/Mouhanedg56/following{/other_user}", "gists_url": "https://api.github.com/users/Mouhanedg56/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mouhanedg56/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mouhanedg56/subscriptions", "organizations_url": "https://api.github.com/users/Mouhanedg56/orgs", "repos_url": "https://api.github.com/users/Mouhanedg56/repos", "events_url": "https://api.github.com/users/Mouhanedg56/events{/privacy}", "received_events_url": "https://api.github.com/users/Mouhanedg56/received_events", "type": "User", "site_admin": false } ]
null
3
"2022-10-06T18:18:49"
"2022-10-07T19:06:02"
"2022-10-07T18:40:26"
NONE
null
## Describe the bug When filtering a dataset twice, where the first result is an empty dataset, the second dataset seems corrupted. ## Steps to reproduce the bug ```python datasets = load_dataset("glue", "sst2") dataset_split = datasets['validation'] ds_filter_1 = dataset_split.filter(lambda x: False) # Some filtering condition that leads to an empty dataset assert ds_filter_1.num_rows == 0 sentences = ds_filter_1['sentence'] assert len(sentences) == 0 ds_filter_2 = ds_filter_1.filter(lambda x: False) # Some other filtering condition assert ds_filter_2.num_rows == 0 assert 'sentence' in ds_filter_2.column_names sentences = ds_filter_2['sentence'] ``` ## Expected results The last line should be returning an empty list, same as 4 lines above. ## Actual results The last line currently raises `IndexError: index out of bounds`. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.2 - Platform: macOS-11.6.6-x86_64-i386-64bit - Python version: 3.9.11 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
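Until this is fixed, a defensive sketch is to short-circuit on an already-empty dataset, so the corrupting second `filter` (and the failing column access) never runs:

```python
# Workaround sketch: avoid re-filtering and indexing an empty dataset.
ds_filter_2 = ds_filter_1 if ds_filter_1.num_rows == 0 else ds_filter_1.filter(lambda x: False)
sentences = ds_filter_2["sentence"] if ds_filter_2.num_rows > 0 else []
```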
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5085/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5085/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5083/comments
https://api.github.com/repos/huggingface/datasets/issues/5083/events
https://github.com/huggingface/datasets/issues/5083
1,399,842,514
I_kwDODunzps5Tb-bS
5,083
Support numpy/torch/tf/jax formatting for IterableDataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
0
"2022-10-06T15:14:58"
"2022-10-06T15:42:27"
null
MEMBER
null
Right now `IterableDataset` doesn't do any formatting. Only the "torch" format can be used to make the dataset inherit from `torch.data.IterableDataset` and make it work with a torch DataLoader. In particular this code should return a numpy array: ```python from datasets import load_dataset ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np") print(next(iter(ds))["image"]) ``` Right now it returns a PIL.Image. Setting `streaming=False` does return a numpy array after #5072
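In the meantime, a sketch of a manual conversion that does work in streaming mode, since `IterableDataset.map` is applied on the fly:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("imagenet-1k", split="train", streaming=True)
# Convert the decoded PIL.Image to a numpy array per example.
ds = ds.map(lambda ex: {"image": np.asarray(ex["image"])})
print(type(next(iter(ds))["image"]))  # numpy.ndarray
```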
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5083/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5081/comments
https://api.github.com/repos/huggingface/datasets/issues/5081/events
https://github.com/huggingface/datasets/issues/5081
1,399,340,050
I_kwDODunzps5TaDwS
5,081
Bug loading `sentence-transformers/parallel-sentences`
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
8
"2022-10-06T10:47:51"
"2022-10-11T10:00:48"
null
CONTRIBUTOR
null
## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sentence-transformers/parallel-sentences") ``` raises this: ``` /home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs) /home/phmay/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py:697: FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols' return pd.read_csv(xopen(filepath_or_buffer, "rb", use_auth_token=use_auth_token), **kwargs) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In [4], line 1 ----> 1 dataset = load_dataset("sentence-transformers/parallel-sentences", split="train") File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/load.py:1693, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1690 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1692 # Download and prepare data -> 1693 builder_instance.download_and_prepare( 1694 download_config=download_config, 1695 download_mode=download_mode, 1696 ignore_verifications=ignore_verifications, 1697 try_from_hf_gcs=try_from_hf_gcs, 1698 use_auth_token=use_auth_token, 1699 ) 1701 # Build dataset for splits 1702 keep_in_memory = ( 1703 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1704 ) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:807, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs) 801 if not downloaded_from_gcs: 802 prepare_split_kwargs = { 803 "file_format": file_format, 804 "max_shard_size": max_shard_size, 805 **download_and_prepare_kwargs, 806 } --> 807 self._download_and_prepare( 808 dl_manager=dl_manager, 809 verify_infos=verify_infos, 810 **prepare_split_kwargs, 811 **download_and_prepare_kwargs, 812 ) 813 # Sync info 814 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:898, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 894 split_dict.add(split_generator.split_info) 896 try: 897 # Prepare split will record examples associated to the split --> 898 self._prepare_split(split_generator, **prepare_split_kwargs) 899 except OSError as e: 900 raise OSError( 901 "Cannot find data file. 
" 902 + (self.manual_download_instructions or "") 903 + "\nOriginal error:\n" 904 + str(e) 905 ) from None File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/builder.py:1513, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, max_shard_size) 1506 shard_id += 1 1507 writer = writer_class( 1508 features=writer._features, 1509 path=fpath.replace("SSSSS", f"{shard_id:05d}"), 1510 storage_options=self._fs.storage_options, 1511 embed_local_files=embed_local_files, 1512 ) -> 1513 writer.write_table(table) 1514 finally: 1515 num_shards = shard_id + 1 File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/arrow_writer.py:540, in ArrowWriter.write_table(self, pa_table, writer_batch_size) 538 if self.pa_writer is None: 539 self._build_writer(inferred_schema=pa_table.schema) --> 540 pa_table = table_cast(pa_table, self._schema) 541 if self.embed_local_files: 542 pa_table = embed_table_storage(pa_table) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2044, in table_cast(table, schema) 2032 """Improved version of pa.Table.cast. 2033 2034 It supports casting to feature types stored in the schema metadata. (...) 2041 table (:obj:`pyarrow.Table`): the casted table 2042 """ 2043 if table.schema != schema: -> 2044 return cast_table_to_schema(table, schema) 2045 elif table.schema.metadata != schema.metadata: 2046 return table.replace_schema_metadata(schema.metadata) File ~/miniconda3/envs/paraphrase-mining/lib/python3.9/site-packages/datasets/table.py:2005, in cast_table_to_schema(table, schema) 2003 features = Features.from_arrow_schema(schema) 2004 if sorted(table.column_names) != sorted(features): -> 2005 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") 2006 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] 2007 return pa.Table.from_arrays(arrays, schema=schema) ValueError: Couldn't cast Action taken on Parliament's resolutions: see Minutes: string NΓ‘slednΓ½ postup na zΓ‘kladΔ› usnesenΓ­ Parlamentu: viz zΓ‘pis: string -- schema metadata -- pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 742 to {'Membership of Parliament: see Minutes': Value(dtype='string', id=None), 'Π‘ΡŠΡΡ‚Π°Π² Π½Π° ΠŸΠ°Ρ€Π»Π°ΠΌΠ΅Π½Ρ‚Π°: Π²ΠΆ. ΠΏΡ€ΠΎΡ‚ΠΎΠΊΠΎΠ»ΠΈ': Value(dtype='string', id=None)} because column names don't match ``` ## Expected results no error ## Actual results error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux - Python version: Python 3.9.13 - PyArrow version: pyarrow 9.0.0 - transformers 4.22.2 - datasets 2.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5081/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5080/comments
https://api.github.com/repos/huggingface/datasets/issues/5080/events
https://github.com/huggingface/datasets/issues/5080
1,398,849,565
I_kwDODunzps5TYMAd
5,080
Use hfh for caching
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2022-10-06T05:51:58"
"2022-10-06T14:26:05"
null
MEMBER
null
## Is your feature request related to a problem?

As previously discussed in our meeting with @Wauplin and agreed on our last datasets team sync meeting, I'm investigating how `datasets` can use `hfh` for caching.

## Describe the solution you'd like

Due to the peculiarities of the `datasets` cache, I would propose adopting the `hfh` caching system in stages.

First, we could easily start using `hfh` caching for:
- dataset Python scripts
- dataset READMEs
- dataset infos JSON files (now deprecated)

Second, we could also use `hfh` caching for data files downloaded from the Hub.

Further investigation is needed for:
- files downloaded from non-Hub hosts
- extracted files from downloaded archive/compressed files
- generated Arrow files

## Additional context

Docs about the `hfh` caching system:
- [Manage huggingface_hub cache-system](https://huggingface.co/docs/huggingface_hub/main/en/how-to-cache)
- [Cache-system reference](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/cache)

The `transformers` library has already adopted `hfh` for caching. See:
- huggingface/transformers#18438
- huggingface/transformers#18857
- huggingface/transformers#18966
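For reference, a minimal sketch of what the first stage could look like, assuming `hf_hub_download` as the entry point (the repo id and filenames below are placeholders, not a final design):

```python
from huggingface_hub import hf_hub_download

# hf_hub_download stores files in the shared hfh cache and returns the local
# path; calling it again reuses the cached revision instead of re-downloading
script_path = hf_hub_download(
    repo_id="some_user/some_dataset",  # placeholder dataset repo id
    filename="some_dataset.py",        # the dataset loading script
    repo_type="dataset",
)
readme_path = hf_hub_download(
    repo_id="some_user/some_dataset", filename="README.md", repo_type="dataset"
)
```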
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5080/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5075/comments
https://api.github.com/repos/huggingface/datasets/issues/5075/events
https://github.com/huggingface/datasets/issues/5075
1,397,865,501
I_kwDODunzps5TUbwd
5,075
Throw EnvironmentError when token is not present
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
null
[]
null
1
"2022-10-05T14:14:18"
"2022-10-07T14:33:28"
"2022-10-07T14:33:28"
CONTRIBUTOR
null
Throw EnvironmentError instead of OSError ([link](https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/arrow_dataset.py#L4306) to the line) in `push_to_hub` when the Hub token is not present.
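For reference, the requested change amounts to something like this sketch (the message wording is illustrative, not the final text):

```python
if token is None:
    raise EnvironmentError(
        "You need to provide a `token` or be logged in to Hugging Face with "
        "`huggingface-cli login`."
    )
```

Note that in Python 3 `EnvironmentError` is an alias of `OSError`, so this is a readability change in the source rather than a behavioral one.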
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5075/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5074/comments
https://api.github.com/repos/huggingface/datasets/issues/5074/events
https://github.com/huggingface/datasets/issues/5074
1,397,850,352
I_kwDODunzps5TUYDw
5,074
Replace AssertionErrors with more meaningful errors
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "galbwe", "id": 20004072, "node_id": "MDQ6VXNlcjIwMDA0MDcy", "avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/galbwe", "html_url": "https://github.com/galbwe", "followers_url": "https://api.github.com/users/galbwe/followers", "following_url": "https://api.github.com/users/galbwe/following{/other_user}", "gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}", "starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/galbwe/subscriptions", "organizations_url": "https://api.github.com/users/galbwe/orgs", "repos_url": "https://api.github.com/users/galbwe/repos", "events_url": "https://api.github.com/users/galbwe/events{/privacy}", "received_events_url": "https://api.github.com/users/galbwe/received_events", "type": "User", "site_admin": false }
[ { "login": "galbwe", "id": 20004072, "node_id": "MDQ6VXNlcjIwMDA0MDcy", "avatar_url": "https://avatars.githubusercontent.com/u/20004072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/galbwe", "html_url": "https://github.com/galbwe", "followers_url": "https://api.github.com/users/galbwe/followers", "following_url": "https://api.github.com/users/galbwe/following{/other_user}", "gists_url": "https://api.github.com/users/galbwe/gists{/gist_id}", "starred_url": "https://api.github.com/users/galbwe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/galbwe/subscriptions", "organizations_url": "https://api.github.com/users/galbwe/orgs", "repos_url": "https://api.github.com/users/galbwe/repos", "events_url": "https://api.github.com/users/galbwe/events{/privacy}", "received_events_url": "https://api.github.com/users/galbwe/received_events", "type": "User", "site_admin": false } ]
null
3
"2022-10-05T14:03:55"
"2022-10-07T14:33:11"
"2022-10-07T14:33:11"
CONTRIBUTOR
null
Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc.

The files with AssertionErrors that need to be replaced:
```
src/datasets/arrow_reader.py
src/datasets/builder.py
src/datasets/utils/version.py
```
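The pattern being asked for, roughly (the condition and message below are hypothetical; each call site needs its own):

```python
# before
assert isinstance(version, str), "version must be a string"

# after
if not isinstance(version, str):
    raise TypeError(f"version must be a string, got {type(version)}")
```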
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5074/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5070/comments
https://api.github.com/repos/huggingface/datasets/issues/5070/events
https://github.com/huggingface/datasets/issues/5070
1,396,765,647
I_kwDODunzps5TQPPP
5,070
Support default config name when no builder configs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2022-10-04T19:49:35"
"2022-10-06T14:40:26"
"2022-10-06T14:40:26"
MEMBER
null
**Is your feature request related to a problem? Please describe.**

As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME` even when `BUILDER_CONFIGS` is not defined.

**Additional context**

In order to support creating configs on the fly **by name** (not using kwargs), the list of allowed builder configs `BUILDER_CONFIGS` must not be set. However, if so, then `DEFAULT_CONFIG_NAME` is not supported.
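A sketch of the builder shape this would enable (a hypothetical builder; the point is `DEFAULT_CONFIG_NAME` without `BUILDER_CONFIGS`, which is exactly the combination not supported today):

```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    # no BUILDER_CONFIGS: configs are created on the fly by name
    DEFAULT_CONFIG_NAME = "en"  # used when the caller passes no config name

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # self.config.name would be "en" unless the user asked for another one
        ...

    def _generate_examples(self):
        ...
```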
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5070/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5070/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5061/comments
https://api.github.com/repos/huggingface/datasets/issues/5061/events
https://github.com/huggingface/datasets/issues/5061
1,395,476,770
I_kwDODunzps5TLUki
5,061
`_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map`
{ "login": "ZhaofengWu", "id": 11954789, "node_id": "MDQ6VXNlcjExOTU0Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaofengWu", "html_url": "https://github.com/ZhaofengWu", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
5
"2022-10-03T23:51:38"
"2022-10-14T16:44:54"
null
NONE
null
## Describe the bug

When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`.

```
File "~/project/dataset.py", line 204, in <dictcomp>
    split: dataset.map(
File ".../site-packages/datasets/arrow_dataset.py", line 2489, in map
    transformed_shards[index] = async_result.get()
File ".../site-packages/multiprocess/pool.py", line 771, in get
    raise self._value
File ".../site-packages/multiprocess/pool.py", line 537, in _handle_tasks
    put(task)
File ".../site-packages/multiprocess/connection.py", line 214, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
File ".../site-packages/multiprocess/reduction.py", line 54, in dumps
    cls(buf, protocol, *args, **kwds).dump(obj)
File ".../site-packages/dill/_dill.py", line 620, in dump
    StockPickler.dump(self, obj)
File ".../pickle.py", line 487, in dump
    self.save(obj)
File ".../pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
File ".../pickle.py", line 902, in save_tuple
    save(element)
File ".../pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
    _save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
    pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
    save(state)
File ".../pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
    save(element)
File ".../pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
    StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
    self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
    save(v)
File ".../pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
    _save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
    pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
    save(state)
File ".../pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
    save(element)
File ".../pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
    StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
    self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
    save(v)
File ".../pickle.py", line 560, in save
    f(self, obj)  # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
    _save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1154, in _save_with_postproc
    pickler._batch_setitems(iter(source.items()))
File ".../pickle.py", line 998, in _batch_setitems
    save(v)
File ".../pickle.py", line 578, in save
    rv = reduce(self.proto)
File ".../logging/__init__.py", line 1774, in __reduce__
    raise pickle.PicklingError('logger cannot be pickled')
_pickle.PicklingError: logger cannot be pickled
```

## Steps to reproduce the bug

Sorry, I couldn't put together a minimal reproducible example, but the offending line on my end is

```python
dataset.map(
    # the lambda doesn't matter: `lambda e: [1] * len(...)` also breaks; in
    # fact I'm pretty sure it breaks before executing this lambda
    lambda examples: self.tokenize(examples),
    batched=True,
    num_proc=4,
)
```

This does work when `num_proc=1`, so it's likely a multiprocessing thing.

## Expected results

`map` succeeds.

## Actual results

The error trace above.

## Environment info

- `datasets` version: 1.16.1 and 2.5.1 both failed
- Platform: Ubuntu 20.04.4 LTS
- Python version: 3.10.4
- PyArrow version: 9.0.0
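The stdlib behavior in the last frame can be shown in isolation. Whether the object `dill` trips over here is exactly this kind of unregistered logger is an assumption, but the raise itself is easy to reproduce:

```python
import logging
import pickle

registered = logging.getLogger("datasets.fingerprint")
pickle.dumps(registered)  # fine: logging.getLogger(name) returns this same object

rogue = logging.Logger("datasets.fingerprint")  # bypasses the logger registry
pickle.dumps(rogue)  # raises _pickle.PicklingError: logger cannot be pickled
```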
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5061/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5060/comments
https://api.github.com/repos/huggingface/datasets/issues/5060/events
https://github.com/huggingface/datasets/issues/5060
1,395,382,940
I_kwDODunzps5TK9qc
5,060
Unable to Use Custom Dataset Locally
{ "login": "zanussbaum", "id": 33707069, "node_id": "MDQ6VXNlcjMzNzA3MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanussbaum", "html_url": "https://github.com/zanussbaum", "followers_url": "https://api.github.com/users/zanussbaum/followers", "following_url": "https://api.github.com/users/zanussbaum/following{/other_user}", "gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions", "organizations_url": "https://api.github.com/users/zanussbaum/orgs", "repos_url": "https://api.github.com/users/zanussbaum/repos", "events_url": "https://api.github.com/users/zanussbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/zanussbaum/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
4
"2022-10-03T21:55:16"
"2022-10-06T14:29:18"
"2022-10-06T14:29:17"
CONTRIBUTOR
null
## Describe the bug

I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. That tutorial says:

```
If the data files live in the same folder or repository of the dataset script, you can just pass the relative paths to the files instead of URLs.
```

Accordingly, I put the [relative path](https://huggingface.co/datasets/zpn/pubchem_selfies/blob/main/pubchem_selfies.py#L76) to the data to be used. I was able to test the dataset and generate the metadata locally with `datasets-cli test path/to/<your-dataset-loading-script> --save_infos --all_configs`.

However, if I try to load the data using `load_dataset`, I get the following error:

```
    with gzip.open(filepath, mode="rt") as f:
  File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
    binary_file = GzipFile(filename, gz_mode, compresslevel)
  File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
    fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```

## Steps to reproduce the bug

```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("zpn/pubchem_selfies", streaming=True)
>>> t = dataset["train"]
>>> for item in t:
...     print(item)
...     break
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 723, in __iter__
    for key, example in self._iter():
  File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 713, in _iter
    yield from ex_iterable
  File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 113, in __iter__
    yield from self.generate_examples_fn(**self.kwargs)
  File "/Users/zachnussbaum/.cache/huggingface/modules/datasets_modules/datasets/zpn--pubchem_selfies/d2571f35996765aea70fd3f3f8e3882d59c401fb738615c79282e2eb1d9f7a25/pubchem_selfies.py", line 475, in _generate_examples
    with gzip.open(filepath, mode="rt") as f:
  File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
    binary_file = GzipFile(filename, gz_mode, compresslevel)
  File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
    fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```

## Expected results

The dataset streams without error.

## Actual results

The `FileNotFoundError` traceback above.

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5060/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5053/comments
https://api.github.com/repos/huggingface/datasets/issues/5053/events
https://github.com/huggingface/datasets/issues/5053
1,393,739,882
I_kwDODunzps5TEshq
5,053
Intermittent JSON parse error when streaming the Pile
{ "login": "neelnanda-io", "id": 77788841, "node_id": "MDQ6VXNlcjc3Nzg4ODQx", "avatar_url": "https://avatars.githubusercontent.com/u/77788841?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neelnanda-io", "html_url": "https://github.com/neelnanda-io", "followers_url": "https://api.github.com/users/neelnanda-io/followers", "following_url": "https://api.github.com/users/neelnanda-io/following{/other_user}", "gists_url": "https://api.github.com/users/neelnanda-io/gists{/gist_id}", "starred_url": "https://api.github.com/users/neelnanda-io/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neelnanda-io/subscriptions", "organizations_url": "https://api.github.com/users/neelnanda-io/orgs", "repos_url": "https://api.github.com/users/neelnanda-io/repos", "events_url": "https://api.github.com/users/neelnanda-io/events{/privacy}", "received_events_url": "https://api.github.com/users/neelnanda-io/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
3
"2022-10-02T11:56:46"
"2022-10-04T17:59:03"
null
NONE
null
## Describe the bug

I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash. This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point this happens also varied - it happened 11B tokens and 4 days into one training run, and just now happened 2 minutes into another, but I can't reliably reproduce it. I'm using a remote machine with 8 A6000 GPUs via runpod.io.

## Expected results

A DataLoader which can iterate through the whole Pile.

## Actual results

Stack trace:

```
Failed to read file 'zstd://12.jsonl::https://the-eye.eu/public/AI/pile/train/12.jsonl.zst' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Invalid value. in row 0
```

I'm currently using HuggingFace accelerate, which also gave me the following stack trace, but I've also experienced this problem intermittently when using DataParallel, so I don't think it's to do with parallelisation:

```
Traceback (most recent call last):
  File "ddp_script.py", line 1258, in <module>
    main()
  File "ddp_script.py", line 1143, in main
    for c, batch in tqdm.tqdm(enumerate(data_iter)):
  File "/opt/conda/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 503, in __iter__
    next_batch, next_batch_info, next_skip = self._fetch_batches(main_iterator)
  File "/opt/conda/lib/python3.7/site-packages/accelerate/data_loader.py", line 454, in _fetch_batches
    broadcast_object_list(batch_info)
  File "/opt/conda/lib/python3.7/site-packages/accelerate/utils/operations.py", line 333, in broadcast_object_list
    torch.distributed.broadcast_object_list(object_list, src=from_process)
  File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1900, in broadcast_object_list
    object_list[i] = _tensor_to_object(obj_view, obj_size)
  File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1571, in _tensor_to_object
    return _unpickler(io.BytesIO(buf)).load()
_pickle.UnpicklingError: invalid load key, '@'.
```

## Steps to reproduce the bug

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset(cfg["dataset_name"], streaming=True, split="train")
dataset = dataset.remove_columns("meta")
dataset = dataset.map(tokenize_and_concatenate, batched=True)
dataset = dataset.with_format(type="torch")
train_data_loader = DataLoader(dataset, batch_size=cfg["batch_size"], num_workers=3)
for batch in train_data_loader:
    continue
```

`tokenize_and_concatenate` is a custom tokenization function I defined on the GPT-NeoX tokenizer to tokenize the text, separated by endoftext tokens, and reshape to have length batch_size. I don't think this is related to tokenization:

```python
import numpy as np
import einops
import torch

# `tokenizer` and `seq_len` are defined elsewhere in the training script

def tokenize_and_concatenate(examples):
    # join all texts in the batch with the EOS token, then tokenize in chunks
    texts = examples["text"]
    full_text = tokenizer.eos_token.join(texts)
    div = 20
    length = len(full_text) // div
    text_list = [full_text[i * length: (i + 1) * length] for i in range(div)]
    tokens = tokenizer(text_list, return_tensors="np", padding=True)[
        "input_ids"
    ].flatten()
    tokens = tokens[tokens != tokenizer.pad_token_id]
    # reshape the flat token stream into (curr_batch_size, seq_len - 1) rows
    n = len(tokens)
    curr_batch_size = n // (seq_len - 1)
    tokens = tokens[: (seq_len - 1) * curr_batch_size]
    tokens = einops.rearrange(
        tokens,
        "(batch_size seq) -> batch_size seq",
        batch_size=curr_batch_size,
        seq=seq_len - 1,
    )
    # prepend a BOS token to every row
    prefix = np.ones((curr_batch_size, 1), dtype=np.int64) * tokenizer.bos_token_id
    return {"text": np.concatenate([prefix, tokens], axis=1)}
```

## Environment info

<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-105-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.5

ZStandard info:
- Version: 0.18.0
- Summary: Zstandard bindings for Python
- Home-page: https://github.com/indygreg/python-zstandard
- Author: Gregory Szorc
- Author-email: gregory.szorc@gmail.com
- License: BSD
- Location: /opt/conda/lib/python3.7/site-packages
- Requires:
- Required-by:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5053/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5050/comments
https://api.github.com/repos/huggingface/datasets/issues/5050/events
https://github.com/huggingface/datasets/issues/5050
1,392,381,882
I_kwDODunzps5S_g-6
5,050
Restore saved format state in `load_from_disk`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "asofiaoliveira", "id": 74454835, "node_id": "MDQ6VXNlcjc0NDU0ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asofiaoliveira", "html_url": "https://github.com/asofiaoliveira", "followers_url": "https://api.github.com/users/asofiaoliveira/followers", "following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}", "gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}", "starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions", "organizations_url": "https://api.github.com/users/asofiaoliveira/orgs", "repos_url": "https://api.github.com/users/asofiaoliveira/repos", "events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}", "received_events_url": "https://api.github.com/users/asofiaoliveira/received_events", "type": "User", "site_admin": false }
[ { "login": "asofiaoliveira", "id": 74454835, "node_id": "MDQ6VXNlcjc0NDU0ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asofiaoliveira", "html_url": "https://github.com/asofiaoliveira", "followers_url": "https://api.github.com/users/asofiaoliveira/followers", "following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}", "gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}", "starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions", "organizations_url": "https://api.github.com/users/asofiaoliveira/orgs", "repos_url": "https://api.github.com/users/asofiaoliveira/repos", "events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}", "received_events_url": "https://api.github.com/users/asofiaoliveira/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-09-30T12:40:07"
"2022-10-11T16:49:24"
"2022-10-11T16:49:24"
CONTRIBUTOR
null
Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that. Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815
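A quick reproduction sketch of the symptom (the temporary path name is arbitrary):

```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"x": [1, 2]}).with_format("numpy")
ds.save_to_disk("tmp_ds")

reloaded = load_from_disk("tmp_ds")
print(ds.format["type"])        # numpy
print(reloaded.format["type"])  # None -- the saved format state is not restored
```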
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5050/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5046/comments
https://api.github.com/repos/huggingface/datasets/issues/5046/events
https://github.com/huggingface/datasets/issues/5046
1,391,372,519
I_kwDODunzps5S7qjn
5,046
Audiofolder creates empty Dataset if files same level as metadata
{ "login": "msis", "id": 577139, "node_id": "MDQ6VXNlcjU3NzEzOQ==", "avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4", "gravatar_id": "", "url": "https://api.github.com/users/msis", "html_url": "https://github.com/msis", "followers_url": "https://api.github.com/users/msis/followers", "following_url": "https://api.github.com/users/msis/following{/other_user}", "gists_url": "https://api.github.com/users/msis/gists{/gist_id}", "starred_url": "https://api.github.com/users/msis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/msis/subscriptions", "organizations_url": "https://api.github.com/users/msis/orgs", "repos_url": "https://api.github.com/users/msis/repos", "events_url": "https://api.github.com/users/msis/events{/privacy}", "received_events_url": "https://api.github.com/users/msis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_id": "LA_kwDODunzps8AAAABEwvm4Q", "url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest", "name": "hacktoberfest", "color": "DF8D62", "default": false, "description": "" } ]
closed
false
{ "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false }
[ { "login": "riccardobucco", "id": 9295277, "node_id": "MDQ6VXNlcjkyOTUyNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riccardobucco", "html_url": "https://github.com/riccardobucco", "followers_url": "https://api.github.com/users/riccardobucco/followers", "following_url": "https://api.github.com/users/riccardobucco/following{/other_user}", "gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}", "starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions", "organizations_url": "https://api.github.com/users/riccardobucco/orgs", "repos_url": "https://api.github.com/users/riccardobucco/repos", "events_url": "https://api.github.com/users/riccardobucco/events{/privacy}", "received_events_url": "https://api.github.com/users/riccardobucco/received_events", "type": "User", "site_admin": false } ]
null
5
"2022-09-29T19:17:23"
"2022-10-28T13:05:07"
"2022-10-28T13:05:07"
NONE
null
## Describe the bug

When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl`), `load_dataset` returns a `DatasetDict` with no rows but the correct columns.

https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain=1#L88

## Steps to reproduce the bug

`metadata.csv`:

```csv
file_name,duration,transcription
./2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav,10.768,hello
```

```python
>>> audio_dataset = load_dataset("audiofolder", data_dir="/audio-data/")
>>> audio_dataset
DatasetDict({
    train: Dataset({
        features: ['audio', 'duration', 'transcription'],
        num_rows: 0
    })
    validation: Dataset({
        features: ['audio', 'duration', 'transcription'],
        num_rows: 0
    })
})
```

I've tried, with no success:
- setting `split` to something else so I don't get a `DatasetDict`,
- removing the `./`,
- using `.jsonl`.

## Expected results

```
Dataset({
    features: ['audio', 'duration', 'transcription'],
    num_rows: 1
})
```

## Actual results

```
DatasetDict({
    train: Dataset({
        features: ['audio', 'duration', 'transcription'],
        num_rows: 0
    })
    validation: Dataset({
        features: ['audio', 'duration', 'transcription'],
        num_rows: 0
    })
})
```

## Environment info

- `datasets` version: 2.5.1
- Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
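In the meantime, a stopgap sketch that sidesteps `audiofolder` entirely: read the metadata with pandas and cast the resolved paths to `Audio` (this assumes the layout above and that the `.wav` files are decodable):

```python
import os

import pandas as pd
from datasets import Audio, Dataset

data_dir = "/audio-data/"
df = pd.read_csv(os.path.join(data_dir, "metadata.csv"))
# resolve the relative file_name entries, then let datasets decode them lazily
df["audio"] = df.pop("file_name").map(lambda p: os.path.join(data_dir, p))
audio_dataset = Dataset.from_pandas(df).cast_column("audio", Audio())
print(audio_dataset)  # 1 row, features: audio, duration, transcription
```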
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5046/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5045/comments
https://api.github.com/repos/huggingface/datasets/issues/5045/events
https://github.com/huggingface/datasets/issues/5045
1,391,287,609
I_kwDODunzps5S7V05
5,045
Automatically revert to last successful commit to hub when a push_to_hub is interrupted
{ "login": "jorahn", "id": 13120204, "node_id": "MDQ6VXNlcjEzMTIwMjA0", "avatar_url": "https://avatars.githubusercontent.com/u/13120204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jorahn", "html_url": "https://github.com/jorahn", "followers_url": "https://api.github.com/users/jorahn/followers", "following_url": "https://api.github.com/users/jorahn/following{/other_user}", "gists_url": "https://api.github.com/users/jorahn/gists{/gist_id}", "starred_url": "https://api.github.com/users/jorahn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jorahn/subscriptions", "organizations_url": "https://api.github.com/users/jorahn/orgs", "repos_url": "https://api.github.com/users/jorahn/repos", "events_url": "https://api.github.com/users/jorahn/events{/privacy}", "received_events_url": "https://api.github.com/users/jorahn/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
5
"2022-09-29T18:08:12"
"2022-09-30T16:49:21"
null
NONE
null
**Is your feature request related to a problem? Please describe.**

I pushed a modification of a large dataset (remove a column) to the hub. The push was interrupted after some files were committed to the repo. This left the dataset raising an error on `load_dataset()` (ValueError: couldn't cast ... because column names don't match). Only by specifying the previous (complete) commit as `revision=commit_hash` in `load_dataset()` was I able to repair this, and after a successful, complete push the dataset loads without error again.

**Describe the solution you'd like**

Would it make sense to detect an incomplete `push_to_hub()` and automatically revert to the previous commit/revision?

**Describe alternatives you've considered**

Leave everything as is; the `revision` parameter in `load_dataset()` allows one to fix this manually.

**Additional context**

Provide useful defaults.
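For anyone hitting this before it is automated, the manual recovery described above is a one-liner (the repo id and commit hash are placeholders for the affected dataset and its last complete commit):

```python
from datasets import load_dataset

# pin to the last complete commit until a full push succeeds again
ds = load_dataset("user/dataset", revision="<last_good_commit_sha>")
```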
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5045/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5045/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5044/comments
https://api.github.com/repos/huggingface/datasets/issues/5044/events
https://github.com/huggingface/datasets/issues/5044
1,391,242,908
I_kwDODunzps5S7K6c
5,044
integrate `load_from_disk` into `load_dataset`
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
4
"2022-09-29T17:37:12"
"2022-09-30T16:59:19"
null
MEMBER
null
**Is your feature request related to a problem? Please describe.**

Is it possible to make `load_dataset` more universal, similar to `from_pretrained` in `transformers`, so that it can handle the hub and local path datasets of all supported types?

Currently one has to choose a different loader depending on how the dataset has been created. e.g. this won't work:

```
$ git clone https://huggingface.co/datasets/severo/test-parquet
$ python -c 'from datasets import load_dataset; ds=load_dataset("test-parquet"); \
ds.save_to_disk("my_dataset"); load_dataset("my_dataset")'
[...]
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/load.py", line 1746, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
    self._download_and_prepare(
  File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/builder.py", line 1277, in _prepare_split
    writer.write_table(table)
  File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/arrow_writer.py", line 524, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 2005, in table_cast
    return cast_table_to_schema(table, schema)
  File "/home/stas/anaconda3/envs/py38-pt112/lib/python3.8/site-packages/datasets/table.py", line 1968, in cast_table_to_schema
    raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
  child 0, item: struct<filename: string>
      child 0, filename: string
```

Both times the dataset is being loaded from disk. Why does it fail the second time? Why can't `save_to_disk` generate a dataset that can be immediately loaded by `load_dataset`?

e.g. the simplest hack would be to have `save_to_disk` add some flag to the saved dataset that tells `load_dataset` to internally call `load_from_disk` - like having `save_to_disk` create a `load_me_with_load_from_disk.txt` file ;) - and `load_dataset` would support that feature for datasets saved by new `datasets` versions. The old ones will still need to use `load_from_disk` explicitly.

Unless the flag is not needed and one can immediately tell by looking at the saved dataset that it was saved via `save_to_disk`, and thus use `load_from_disk` internally.

The use-case is defining a simple API where the user only ever needs to pass a `dataset_name_or_path` and it will always just work. Currently one needs to manually add additional switches telling the system whether to use one loading method or the other, which works but it's not smooth.

Thank you!
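Until something like this lands, a user-side dispatch sketch; the assumption is that the `state.json` / `dataset_dict.json` files written by `save_to_disk` are a reliable enough marker:

```python
import os

from datasets import load_dataset, load_from_disk

def smart_load(dataset_name_or_path, **kwargs):
    """Hypothetical helper: route save_to_disk output to load_from_disk."""
    if os.path.isdir(dataset_name_or_path) and any(
        os.path.exists(os.path.join(dataset_name_or_path, marker))
        for marker in ("state.json", "dataset_dict.json")
    ):
        return load_from_disk(dataset_name_or_path)
    return load_dataset(dataset_name_or_path, **kwargs)
```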
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5044/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5039
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5039/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5039/comments
https://api.github.com/repos/huggingface/datasets/issues/5039/events
https://github.com/huggingface/datasets/issues/5039
1,390,353,315
I_kwDODunzps5S3xuj
5,039
Hendrycks Checksum
{ "login": "DanielHesslow", "id": 9974388, "node_id": "MDQ6VXNlcjk5NzQzODg=", "avatar_url": "https://avatars.githubusercontent.com/u/9974388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DanielHesslow", "html_url": "https://github.com/DanielHesslow", "followers_url": "https://api.github.com/users/DanielHesslow/followers", "following_url": "https://api.github.com/users/DanielHesslow/following{/other_user}", "gists_url": "https://api.github.com/users/DanielHesslow/gists{/gist_id}", "starred_url": "https://api.github.com/users/DanielHesslow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DanielHesslow/subscriptions", "organizations_url": "https://api.github.com/users/DanielHesslow/orgs", "repos_url": "https://api.github.com/users/DanielHesslow/repos", "events_url": "https://api.github.com/users/DanielHesslow/events{/privacy}", "received_events_url": "https://api.github.com/users/DanielHesslow/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
3
"2022-09-29T06:56:20"
"2022-09-29T10:23:30"
"2022-09-29T10:04:20"
NONE
null
Hi,

The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not verify correctly; I guess the file has been updated on the remote.

```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://people.eecs.berkeley.edu/~hendrycks/data.tar']
```
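Until the recorded checksums are refreshed, a common workaround is to skip the verification, at the cost of not catching a genuinely corrupted download (`abstract_algebra` is just one example config):

```python
from datasets import load_dataset

ds = load_dataset("hendrycks_test", "abstract_algebra", ignore_verifications=True)
```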
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5039/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5038/comments
https://api.github.com/repos/huggingface/datasets/issues/5038/events
https://github.com/huggingface/datasets/issues/5038
1,389,631,122
I_kwDODunzps5S1BaS
5,038
`Dataset.unique` showing wrong output after filtering
{ "login": "mxschmdt", "id": 4904985, "node_id": "MDQ6VXNlcjQ5MDQ5ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/4904985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mxschmdt", "html_url": "https://github.com/mxschmdt", "followers_url": "https://api.github.com/users/mxschmdt/followers", "following_url": "https://api.github.com/users/mxschmdt/following{/other_user}", "gists_url": "https://api.github.com/users/mxschmdt/gists{/gist_id}", "starred_url": "https://api.github.com/users/mxschmdt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mxschmdt/subscriptions", "organizations_url": "https://api.github.com/users/mxschmdt/orgs", "repos_url": "https://api.github.com/users/mxschmdt/repos", "events_url": "https://api.github.com/users/mxschmdt/events{/privacy}", "received_events_url": "https://api.github.com/users/mxschmdt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
"2022-09-28T16:20:35"
"2022-09-30T15:44:25"
"2022-09-30T15:44:25"
CONTRIBUTOR
null
## Describe the bug

After filtering a dataset, if no samples remain, `Dataset.unique` will return the unique values of the unfiltered dataset.

## Steps to reproduce the bug

```python
from datasets import Dataset

dataset = Dataset.from_dict({'id': [0]})
dataset = dataset.filter(lambda _: False)
print(dataset.unique('id'))
```

## Expected results

The above code should return an empty list since the dataset is empty.

## Actual results

```bash
[0]
```

## Environment info

- `datasets` version: 2.5.1
- Platform: Linux-5.18.19-100.fc35.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.14
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
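A plausible workaround until this is fixed, assuming the cause is that `unique` reads the underlying Arrow table without applying the indices mapping that `filter` creates, is to materialize that mapping first:

```python
from datasets import Dataset

dataset = Dataset.from_dict({'id': [0]})
dataset = dataset.filter(lambda _: False)
# flatten_indices rewrites the table to match the filtered view
print(dataset.flatten_indices().unique('id'))  # []
```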
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5038/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5032/comments
https://api.github.com/repos/huggingface/datasets/issues/5032/events
https://github.com/huggingface/datasets/issues/5032
1,388,270,935
I_kwDODunzps5Sv1VX
5,032
new dataset type: single-label and multi-label video classification
{ "login": "fcakyon", "id": 34196005, "node_id": "MDQ6VXNlcjM0MTk2MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fcakyon", "html_url": "https://github.com/fcakyon", "followers_url": "https://api.github.com/users/fcakyon/followers", "following_url": "https://api.github.com/users/fcakyon/following{/other_user}", "gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions", "organizations_url": "https://api.github.com/users/fcakyon/orgs", "repos_url": "https://api.github.com/users/fcakyon/repos", "events_url": "https://api.github.com/users/fcakyon/events{/privacy}", "received_events_url": "https://api.github.com/users/fcakyon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
6
"2022-09-27T19:40:11"
"2022-11-02T19:10:13"
null
NONE
null
**Is your feature request related to a problem? Please describe.** In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset. **Describe the solution you'd like** Assume I have video files with single/multiple labels. I want to train a single/multi-label video classification model. I want datasets to support generating multi-modal batches (audio+frame sequence) from video files. Audio waveform and frame sequence can be extracted from each video clip; then I can use any audio, image, or video model from the transformers library to extract features, which will be fed into my model. **Describe alternatives you've considered** Currently, I am using https://github.com/facebookresearch/pytorchvideo dataloaders. There do not seem to be many alternatives. **Additional context** I am willing to open a PR but don't know where to start.
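A rough sketch of what such a loader could look like today, built from existing pieces; the file paths and label lists are placeholders, and `torchvision.io.read_video` is only one possible decoding backend:

```python
import torchvision
from datasets import Dataset

video_files = ["clip1.mp4", "clip2.mp4"]  # hypothetical paths
labels = [[0], [1, 3]]                    # single- or multi-label targets per clip

def gen():
    for path, label in zip(video_files, labels):
        # read_video returns (frames [T, H, W, C], audio [channels, samples], info)
        frames, audio, _info = torchvision.io.read_video(path, pts_unit="sec")
        yield {"frames": frames.numpy(), "audio": audio.numpy(), "labels": label}

ds = Dataset.from_generator(gen)
```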
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5032/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/5032/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5028/comments
https://api.github.com/repos/huggingface/datasets/issues/5028/events
https://github.com/huggingface/datasets/issues/5028
1,386,272,533
I_kwDODunzps5SoNcV
5,028
passing parameters to the method passed to Dataset.from_generator()
{ "login": "Basir-mahmood", "id": 64276129, "node_id": "MDQ6VXNlcjY0Mjc2MTI5", "avatar_url": "https://avatars.githubusercontent.com/u/64276129?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Basir-mahmood", "html_url": "https://github.com/Basir-mahmood", "followers_url": "https://api.github.com/users/Basir-mahmood/followers", "following_url": "https://api.github.com/users/Basir-mahmood/following{/other_user}", "gists_url": "https://api.github.com/users/Basir-mahmood/gists{/gist_id}", "starred_url": "https://api.github.com/users/Basir-mahmood/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Basir-mahmood/subscriptions", "organizations_url": "https://api.github.com/users/Basir-mahmood/orgs", "repos_url": "https://api.github.com/users/Basir-mahmood/repos", "events_url": "https://api.github.com/users/Basir-mahmood/events{/privacy}", "received_events_url": "https://api.github.com/users/Basir-mahmood/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
1
"2022-09-26T15:20:06"
"2022-10-03T13:00:00"
"2022-10-03T13:00:00"
NONE
null
Big thanks for providing dataset creation via a generator. I want to ask whether there is any way to pass parameters to the generator function given to Dataset.from_generator(), as follows. ``` from datasets import Dataset def gen(param1): for idx in range(len(custom_dataset)): yield custom_dataset[idx] + param1 ds = Dataset.from_generator(gen(param1)) ```
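For reference, `Dataset.from_generator` accepts a `gen_kwargs` dict that is forwarded to the generator function, so the above can be written as follows (a sketch; `custom_dataset` and `param1` are the names from the question):

```python
from datasets import Dataset

custom_dataset = [1, 2, 3]  # placeholder for the real data

def gen(param1):
    for idx in range(len(custom_dataset)):
        # from_generator expects each yielded example to be a dict
        yield {"value": custom_dataset[idx] + param1}

ds = Dataset.from_generator(gen, gen_kwargs={"param1": 10})
```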
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5028/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5028/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5025/comments
https://api.github.com/repos/huggingface/datasets/issues/5025/events
https://github.com/huggingface/datasets/issues/5025
1,386,011,239
I_kwDODunzps5SnNpn
5,025
Custom Json Dataset Throwing Error when batch is False
{ "login": "jmandivarapu1", "id": 21245519, "node_id": "MDQ6VXNlcjIxMjQ1NTE5", "avatar_url": "https://avatars.githubusercontent.com/u/21245519?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmandivarapu1", "html_url": "https://github.com/jmandivarapu1", "followers_url": "https://api.github.com/users/jmandivarapu1/followers", "following_url": "https://api.github.com/users/jmandivarapu1/following{/other_user}", "gists_url": "https://api.github.com/users/jmandivarapu1/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmandivarapu1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmandivarapu1/subscriptions", "organizations_url": "https://api.github.com/users/jmandivarapu1/orgs", "repos_url": "https://api.github.com/users/jmandivarapu1/repos", "events_url": "https://api.github.com/users/jmandivarapu1/events{/privacy}", "received_events_url": "https://api.github.com/users/jmandivarapu1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
"2022-09-26T12:38:39"
"2022-09-27T19:50:00"
"2022-09-27T19:50:00"
NONE
null
## Describe the bug I tried to create my custom dataset using the code below ``` from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D from torchvision import transforms from transformers import AutoProcessor # we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes, # based on the checkpoint we provide from the hub from datasets import load_dataset def prepare_examples(examples): # Some preprocessing for each image and text, as all my data is saved in the cloud # For this reason I couldn't set batched to True. encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels, truncation=True, padding="max_length") # encoding['pixel_values']=np.array(encoding['pixel_values']) return encoding dataset = load_dataset("json", data_files='issues.jsonl') processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) features = dataset["train"].features column_names = dataset["train"].column_names # we need to define custom features for `set_format` (used later on) to work properly features = Features({ 'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'labels': Sequence(feature=Value(dtype='int64')), }) train_dataset = dataset["train"].map( prepare_examples, batched=False, remove_columns=column_names, features=features ) ``` It throws the error below. ``` /opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 172 storage = to_pyarrow_listarray(data, pa_type) --> 173 return pa.ExtensionArray.from_storage(pa_type, storage) 174 /opt/conda/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.ExtensionArray.from_storage() TypeError: Incompatible storage type list<item: list<item: list<item: list<item: float>>>> for extension type extension<arrow.py_extension_type<Array3DExtensionType>> ``` ## Steps to reproduce the bug ```python # Run the code above against a local `issues.jsonl` file. ``` ## Expected results Expected results would be similar to all the other datasets, with no error. ## Actual results See the traceback above. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Unix - Python version: 3.9 - PyArrow version: 9.0.0
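The four nested list levels in the error, against the three dimensions of the declared `Array3D`, suggest the processor output keeps a leading batch dimension even with `batched=False`; a possible fix inside `prepare_examples` is sketched below, assuming each field of `encoding` is a length-1 list per example:

```python
def prepare_examples(example):
    # ... load img_as_tensor, words, boxes, labels for this single example ...
    encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
                         truncation=True, padding="max_length")
    # Drop the leading batch dimension so shapes match the declared features,
    # e.g. pixel_values: (1, 3, 224, 224) -> (3, 224, 224).
    return {key: values[0] for key, values in encoding.items()}
```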
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5025/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5025/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5023
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5023/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5023/comments
https://api.github.com/repos/huggingface/datasets/issues/5023/events
https://github.com/huggingface/datasets/issues/5023
1,385,881,112
I_kwDODunzps5Smt4Y
5,023
Text strings are split into lists of characters in xcsr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2022-09-26T11:11:50"
"2022-09-28T07:54:20"
"2022-09-28T07:54:20"
MEMBER
null
## Describe the bug Text strings are split into lists of characters. Example for "X-CSQA-en": ``` {'id': 'd3845adc08414fda', 'lang': 'en', 'question': {'stem': ['T', 'h', 'e', ' ', 'd', 'e', 'n', 't', 'a', 'l', ' ', 'o', 'f', 'f', 'i', 'c', 'e', ' ', 'h', 'a', 'n', 'd', 'l', 'e', 'd', ' ', 'a', ' ', 'l', 'o', 't', ' ', 'o', 'f', ' ', 'p', 'a', 't', 'i', 'e', 'n', 't', 's', ' ', 'w', 'h', 'o', ' ', 'e', 'x', 'p', 'e', 'r', 'i', 'e', 'n', 'c', 'e', 'd', ' ', 't', 'r', 'a', 'u', 'm', 'a', 't', 'i', 'c', ' ', 'm', 'o', 'u', 't', 'h', ' ', 'i', 'n', 'j', 'u', 'r', 'y', ',', ' ', 'w', 'h', 'e', 'r', 'e', ' ', 'w', 'e', 'r', 'e', ' ', 't', 'h', 'e', 's', 'e', ' ', 'p', 'a', 't', 'i', 'e', 'n', 't', 's', ' ', 'c', 'o', 'm', 'i', 'n', 'g', ' ', 'f', 'r', 'o', 'm', '?'], 'choices': [{'label': ['A'], 'text': ['t', 'o', 'w', 'n']}, {'label': ['B'], 'text': ['m', 'i', 'c', 'h', 'i', 'g', 'a', 'n']}, {'label': ['C'], 'text': ['h', 'o', 's', 'p', 'i', 't', 'a', 'l']}, {'label': ['D'], 'text': ['s', 'c', 'h', 'o', 'o', 'l', 's']}, {'label': ['E'], 'text': ['o', 'f', 'f', 'i', 'c', 'e', ' ', 'b', 'u', 'i', 'l', 'd', 'i', 'n', 'g']}]}, 'answerKey': 'C'} ``` ## Steps to reproduce the bug ```python ds = load_dataset("datasets/xcsr", "X-CSQA-en", split="validation", streaming=True) item = next(iter(ds)) item ``` ## Expected results ``` {'id': 'd3845adc08414fda', 'lang': 'en', 'question': {'stem': 'The dental office handled a lot of patients who experienced traumatic mouth injury, where were these patients coming from?', 'choices': {'label': ['A', 'B', 'C', 'D', 'E'], 'text': ['town', 'michigan', 'hospital', 'schools', 'office building']}}, 'answerKey': 'C'} ```
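For context, this symptom is what you get when a plain string is handed to a feature declared as a sequence of strings, because iterating a Python string yields its characters; a minimal illustration:

```python
text = "town"
# Iterating a string produces its characters -- the exact pattern in the bug.
print(list(text))  # ['t', 'o', 'w', 'n']
```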
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5023/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5021/comments
https://api.github.com/repos/huggingface/datasets/issues/5021/events
https://github.com/huggingface/datasets/issues/5021
1,385,351,250
I_kwDODunzps5SkshS
5,021
Split is inferred from filename and overrides metadata.jsonl
{ "login": "float-trip", "id": 102226344, "node_id": "U_kgDOBhfZqA", "avatar_url": "https://avatars.githubusercontent.com/u/102226344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/float-trip", "html_url": "https://github.com/float-trip", "followers_url": "https://api.github.com/users/float-trip/followers", "following_url": "https://api.github.com/users/float-trip/following{/other_user}", "gists_url": "https://api.github.com/users/float-trip/gists{/gist_id}", "starred_url": "https://api.github.com/users/float-trip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/float-trip/subscriptions", "organizations_url": "https://api.github.com/users/float-trip/orgs", "repos_url": "https://api.github.com/users/float-trip/repos", "events_url": "https://api.github.com/users/float-trip/events{/privacy}", "received_events_url": "https://api.github.com/users/float-trip/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
3
"2022-09-26T03:22:14"
"2022-09-29T08:07:50"
"2022-09-29T08:07:50"
NONE
null
## Describe the bug Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files. This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder ## Steps to reproduce the bug `metadata.jsonl` ```json {"file_name": "photo of a cat.jpg", "text": "a photo of a cat"} {"file_name": "photo of a dog.jpg", "text": "a photo of a dog"} {"file_name": "photo of a train.jpg", "text": "a photo of a train"} {"file_name": "photo of test tubes.jpg", "text": "a photo of test tubes"} ``` `bug.py` ```python from datasets import load_dataset dataset = load_dataset("dataset") print(dataset) # DatasetDict({ # train: Dataset({ # features: ['image', 'text'], # num_rows: 1 # }) # test: Dataset({ # features: ['image', 'text'], # num_rows: 1 # }) # }) for split in dataset: for n in dataset[split]: print(n['text']) # a photo of a train # a photo of test tubes ``` ## Expected results One single dataset with all four images / a warning for unused files / documentation of this behavior ## Actual results Only the images with "test" or "train" in the name are loaded ## Environment info - `datasets` version: 2.5.1 - Platform: macOS-12.5.1-x86_64-i386-64bit - Python version: 3.10.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
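Until this behavior is documented or changed, one workaround sketch is to bypass split inference entirely by naming the split yourself; whether `metadata.jsonl` pairing survives the glob is an assumption to verify:

```python
from datasets import load_dataset

# Force every file into a single "train" split instead of letting the
# "train"/"test" substrings in the filenames decide.
dataset = load_dataset("imagefolder", data_files={"train": "dataset/**"})
```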
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5021/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5017/comments
https://api.github.com/repos/huggingface/datasets/issues/5017/events
https://github.com/huggingface/datasets/issues/5017
1,384,022,463
I_kwDODunzps5SfoG_
5,017
xcsr: X-CSQA simply uses english for all alleged non-english data
{ "login": "thesofakillers", "id": 26286291, "node_id": "MDQ6VXNlcjI2Mjg2Mjkx", "avatar_url": "https://avatars.githubusercontent.com/u/26286291?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thesofakillers", "html_url": "https://github.com/thesofakillers", "followers_url": "https://api.github.com/users/thesofakillers/followers", "following_url": "https://api.github.com/users/thesofakillers/following{/other_user}", "gists_url": "https://api.github.com/users/thesofakillers/gists{/gist_id}", "starred_url": "https://api.github.com/users/thesofakillers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thesofakillers/subscriptions", "organizations_url": "https://api.github.com/users/thesofakillers/orgs", "repos_url": "https://api.github.com/users/thesofakillers/repos", "events_url": "https://api.github.com/users/thesofakillers/events{/privacy}", "received_events_url": "https://api.github.com/users/thesofakillers/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2022-09-23T16:11:54"
"2022-09-26T10:57:31"
"2022-09-26T10:57:31"
NONE
null
## Describe the bug All the alleged non-English subcollections for the X-CSQA task in the [xcsr benchmark dataset](https://huggingface.co/datasets/xcsr) seem to be copies of the English subcollection, rather than translations. This is in contrast to the data description: > we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR ## Steps to reproduce the bug ```python # let's say you want to load the french X-CSQA subcollection french = datasets.load_dataset("xcsr", "X-CSQA-fr") # for good measure, let's load english too english = datasets.load_dataset("xcsr", "X-CSQA-en") # let's inspect "".join(english['test'][0]['question']['stem']) # output: 'The people wanted to stop the parade, so what did they set up to thwart it?' "".join(french['test'][0]['question']['stem']) # output: 'The people wanted to stop the parade, so what did they set up to thwart it?' # what? Why are they both in english? # I've checked this for validation and train splits too, across many datapoints. It's all the same english dataset # maybe i need to look better? french['test'].unique('lang') # output: ['en'] # no, it's all english ``` ## Expected results Accessing a subcollection in language X should return a subcollection containing samples in language X. ## Actual results Accessing a subcollection in language X returns a subcollection containing samples in English. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.1 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5017/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5015/comments
https://api.github.com/repos/huggingface/datasets/issues/5015/events
https://github.com/huggingface/datasets/issues/5015
1,383,485,558
I_kwDODunzps5SdlB2
5,015
Transfer dataset scripts to Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2022-09-23T08:48:10"
"2022-10-05T07:15:57"
"2022-10-05T07:15:57"
MEMBER
null
Before merging: - #4974 TODO: - [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22) - [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/) - [x] PRs: - [x] Add dataset: we should recommend transfer all additions of datasets to the Hub, under the appropriate namespace; no more additions of datasets on GitHub - [x] Update dataset: in general, we should merge bug fixes; enhancements should be considered on a case-by-case basis, depending on whether there is a more suitable namespace on the Hub - [ ] Issues Finally: - [x] #4974 Let me know what you think! :hugs:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5015/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5015/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5014/comments
https://api.github.com/repos/huggingface/datasets/issues/5014/events
https://github.com/huggingface/datasets/issues/5014
1,383,422,639
I_kwDODunzps5SdVqv
5,014
I need to read the custom dataset in conll format
{ "login": "506610466", "id": 39985245, "node_id": "MDQ6VXNlcjM5OTg1MjQ1", "avatar_url": "https://avatars.githubusercontent.com/u/39985245?v=4", "gravatar_id": "", "url": "https://api.github.com/users/506610466", "html_url": "https://github.com/506610466", "followers_url": "https://api.github.com/users/506610466/followers", "following_url": "https://api.github.com/users/506610466/following{/other_user}", "gists_url": "https://api.github.com/users/506610466/gists{/gist_id}", "starred_url": "https://api.github.com/users/506610466/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/506610466/subscriptions", "organizations_url": "https://api.github.com/users/506610466/orgs", "repos_url": "https://api.github.com/users/506610466/repos", "events_url": "https://api.github.com/users/506610466/events{/privacy}", "received_events_url": "https://api.github.com/users/506610466/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
3
"2022-09-23T07:49:42"
"2022-11-02T11:57:15"
null
NONE
null
I need to read a custom dataset in CoNLL format.
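A minimal sketch of one way to do this with `Dataset.from_generator`, assuming a whitespace-separated `token ... tag` layout with blank lines between sentences (adjust the parsing for your exact CoNLL variant):

```python
from datasets import Dataset

def read_conll(path):
    tokens, tags = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # blank line marks a sentence boundary
                if tokens:
                    yield {"tokens": tokens, "tags": tags}
                    tokens, tags = [], []
            else:
                parts = line.split()
                tokens.append(parts[0])   # first column: token
                tags.append(parts[-1])    # last column: tag
    if tokens:  # flush the final sentence
        yield {"tokens": tokens, "tags": tags}

ds = Dataset.from_generator(read_conll, gen_kwargs={"path": "my_data.conll"})
```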
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5014/timeline
null
reopened
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5013/comments
https://api.github.com/repos/huggingface/datasets/issues/5013/events
https://github.com/huggingface/datasets/issues/5013
1,383,415,971
I_kwDODunzps5SdUCj
5,013
Would Hugging Face publish a C++ binding for the datasets package?
{ "login": "mullerhai", "id": 6143404, "node_id": "MDQ6VXNlcjYxNDM0MDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6143404?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mullerhai", "html_url": "https://github.com/mullerhai", "followers_url": "https://api.github.com/users/mullerhai/followers", "following_url": "https://api.github.com/users/mullerhai/following{/other_user}", "gists_url": "https://api.github.com/users/mullerhai/gists{/gist_id}", "starred_url": "https://api.github.com/users/mullerhai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mullerhai/subscriptions", "organizations_url": "https://api.github.com/users/mullerhai/orgs", "repos_url": "https://api.github.com/users/mullerhai/repos", "events_url": "https://api.github.com/users/mullerhai/events{/privacy}", "received_events_url": "https://api.github.com/users/mullerhai/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
5
"2022-09-23T07:42:49"
"2022-09-27T03:40:30"
null
NONE
null
Hi: I work in a C++ environment with libtorch. I would like to use Hugging Face datasets, but there is no C++ binding. Would you consider publishing a C++ binding for it? Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5013/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5012/comments
https://api.github.com/repos/huggingface/datasets/issues/5012/events
https://github.com/huggingface/datasets/issues/5012
1,382,851,096
I_kwDODunzps5SbKIY
5,012
Force JSON format regardless of file naming on S3
{ "login": "junwang-wish", "id": 112650299, "node_id": "U_kgDOBrboOw", "avatar_url": "https://avatars.githubusercontent.com/u/112650299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/junwang-wish", "html_url": "https://github.com/junwang-wish", "followers_url": "https://api.github.com/users/junwang-wish/followers", "following_url": "https://api.github.com/users/junwang-wish/following{/other_user}", "gists_url": "https://api.github.com/users/junwang-wish/gists{/gist_id}", "starred_url": "https://api.github.com/users/junwang-wish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/junwang-wish/subscriptions", "organizations_url": "https://api.github.com/users/junwang-wish/orgs", "repos_url": "https://api.github.com/users/junwang-wish/repos", "events_url": "https://api.github.com/users/junwang-wish/events{/privacy}", "received_events_url": "https://api.github.com/users/junwang-wish/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
"2022-09-22T18:28:15"
"2022-09-26T09:31:38"
null
NONE
null
I have a file on S3 created by Data Version Control; it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a JSON file. If I run ```python dataset = load_dataset( "json", data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2' ) ``` it gives me ``` InvalidSchema: No connection adapters were found for 's3://dvc/ac/badff5b134382a0f25248f1b45d7b2' ``` However, I cannot go ahead and change the name of the S3 file. Is there a way to "force" load an S3 URL with a certain decoder (JSON, CSV, etc.) regardless of the S3 URL naming?
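One way around the key naming, sketched under the assumption that `s3fs` credentials are configured and the object holds JSON Lines (the key below is the reporter's):

```python
import json

import pandas as pd
import s3fs
from datasets import Dataset

fs = s3fs.S3FileSystem()
with fs.open("dvc/ac/badff5b134382a0f25248f1b45d7b2", "r") as f:
    records = [json.loads(line) for line in f]  # assumes JSON Lines content

dataset = Dataset.from_pandas(pd.DataFrame(records))
```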
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5012/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5011
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5011/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5011/comments
https://api.github.com/repos/huggingface/datasets/issues/5011/events
https://github.com/huggingface/datasets/issues/5011
1,382,609,587
I_kwDODunzps5SaPKz
5,011
Audio: `encode_example` fails with IndexError
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
"2022-09-22T15:07:27"
"2022-09-23T09:05:18"
"2022-09-23T09:05:18"
CONTRIBUTOR
null
## Describe the bug Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an IndexError. I created this dataset locally and then pushed it to the Hub at the specified URL. Thus, I expect the dataset to work out-of-the-box! Indeed, the dataset viewer functions correctly, and there were no issues when I had the dataset locally. I don't think it's a soundfile bug, as the version matches what worked previously. Update: the bug appeared for me on a GPU; mysteriously, on a TPU I can't reproduce it and it downloads correctly... ## Steps to reproduce the bug ```python from datasets import load_dataset earnings22 = load_dataset("sanchit-gandhi/earnings22_split") ``` ## Expected results ``` >>> earnings22 DatasetDict({ validation: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 2650 }) train: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 52006 }) test: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 2735 }) }) ``` ## Actual results ``` Traceback (most recent call last): File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single writer.write(example) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 451, in write self.write_examples_on_file() File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 409, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 508, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 231, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 197, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1683, in wrapper return func(array, *args, **kwargs) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1795, in cast_array_to_feature return feature.cast_storage(array) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in cast_storage storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()]) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in <listcomp> storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()]) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 92, in encode_example sf.write(buffer, value["array"], value["sampling_rate"], format="wav") File "/opt/conda/envs/hf/lib/python3.8/site-packages/soundfile.py", line 313, in write channels = data.shape[1] IndexError: tuple index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.3 Plus: - SoundFile version: 0.10.3.post1 cc @lhoestq @polinaeterna
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5011/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5009/comments
https://api.github.com/repos/huggingface/datasets/issues/5009/events
https://github.com/huggingface/datasets/issues/5009
1,381,194,067
I_kwDODunzps5SU1lT
5,009
Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly
{ "login": "ykl7", "id": 4996184, "node_id": "MDQ6VXNlcjQ5OTYxODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4996184?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ykl7", "html_url": "https://github.com/ykl7", "followers_url": "https://api.github.com/users/ykl7/followers", "following_url": "https://api.github.com/users/ykl7/following{/other_user}", "gists_url": "https://api.github.com/users/ykl7/gists{/gist_id}", "starred_url": "https://api.github.com/users/ykl7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ykl7/subscriptions", "organizations_url": "https://api.github.com/users/ykl7/orgs", "repos_url": "https://api.github.com/users/ykl7/repos", "events_url": "https://api.github.com/users/ykl7/events{/privacy}", "received_events_url": "https://api.github.com/users/ykl7/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
8
"2022-09-21T16:23:06"
"2022-09-29T13:07:29"
"2022-09-29T13:07:29"
NONE
null
## Describe the bug I have added a new dataset with the identifier `StonyBrookNLP/tellmewhy` to the hub. When I load the individual files using my local copy using `dataset = datasets.load_dataset("json", data_files="data/train.jsonl")`, it loads the dataset correctly. However, when I try to load it from the hub, I get an error (pasted below). Additionally, `dataset = datasets.load_dataset("json", data_dir="data/")` throws the same error. ## Steps to reproduce the bug ```python dataset = datasets.load_dataset('StonyBrookNLP/tellmewhy') ``` ## Expected results Successfully load the `StonyBrookNLP/tellmewhy` dataset. ## Actual results ``` Using custom data configuration StonyBrookNLP--tellmewhy-82712924092694ff Downloading and preparing dataset json/StonyBrookNLP--tellmewhy to /home/yklal95/.cache/huggingface/datasets/StonyBrookNLP___json/StonyBrookNLP--tellmewhy-82712924092694ff/0.0.0/a3e658c4731e59120d44081ac10bf85dc7e1388126b92338344ce9661907f253... Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 957.46it/s] Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 299.14it/s] Traceback (most recent call last): File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 17, in <module> main(args) File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 11, in main dataset = datasets.load_dataset(args.dataset_name) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py", line 1746, in load_dataset builder_instance.download_and_prepare( File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 1277, in _prepare_split writer.write_table(table) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/arrow_writer.py", line 524, in write_table pa_table = table_cast(pa_table, self._schema) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 2005, in table_cast return cast_table_to_schema(table, schema) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1822, in cast_array_to_feature casted_values = _c(array.values, feature.feature) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper return func(array, *args, **kwargs) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1853, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper return func(array, *args, **kwargs) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1761, in array_cast raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}") TypeError: Couldn't cast array of type int64 to null ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.15.0-121-generic-x86_64-with-glibc2.27 - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
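The `int64 -> null` cast failure usually means the schema was inferred from a file where some column was entirely null; one workaround sketch is to pin the schema explicitly (the column names and types below are hypothetical):

```python
from datasets import Features, Value, load_dataset

# Hypothetical schema -- replace with the dataset's real columns.
features = Features({"question": Value("string"), "answer": Value("string")})
dataset = load_dataset("json", data_dir="data/", features=features)
```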
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5009/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5005/comments
https://api.github.com/repos/huggingface/datasets/issues/5005/events
https://github.com/huggingface/datasets/issues/5005
1,380,952,960
I_kwDODunzps5ST6uA
5,005
Release 2.5.0 breaks transformers CI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
1
"2022-09-21T13:39:19"
"2022-09-21T14:11:57"
"2022-09-21T14:11:57"
MEMBER
null
## Describe the bug As reported by @lhoestq: > see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563 this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[…]torch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55250e7da/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L482-L488)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5005/timeline
null
completed
null
null
false