Dataset columns (type and per-column statistics):

| Column | Type | Stats |
|---|---|---|
| url | string | lengths 61–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 75–75 |
| comments_url | string | lengths 70–70 |
| events_url | string | lengths 68–68 |
| html_url | string | lengths 49–51 |
| id | int64 | 1.16B–2.27B |
| node_id | string | lengths 18–19 |
| number | int64 | 3.86k–6.85k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–3 |
| milestone | dict | |
| comments | sequence | lengths 0–30 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 values |
| active_lock_reason | null | |
| body | string | lengths 2–33.9k, nullable |
| reactions | dict | |
| timeline_url | string | lengths 70–70 |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
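The records below list these fields one per line, in the column order of the table above. For orientation, a minimal sketch of loading and inspecting rows with this schema; the source of the dump is not named here, so the JSON Lines filename is a placeholder:

```python
from datasets import load_dataset

# Placeholder filename: point this at wherever the issues dump actually lives.
ds = load_dataset("json", data_files="github_issues.jsonl", split="train")

print(ds.column_names)                  # should match the columns listed above
print(ds[0]["number"], ds[0]["title"])  # e.g. a row like issue 3961 below
```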
url: https://api.github.com/repos/huggingface/datasets/issues/3961
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3961/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3961/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3961/events
html_url: https://github.com/huggingface/datasets/issues/3961
id: 1,173,223,086
node_id: I_kwDODunzps5F7fau
number: 3,961
title: Scores from Index at extra positions are not filtered out
{ "login": "vishalsrao", "id": 36671559, "node_id": "MDQ6VXNlcjM2NjcxNTU5", "avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishalsrao", "html_url": "https://github.com/vishalsrao", "followers_url": "https://api.github.com/users/vishalsrao/followers", "following_url": "https://api.github.com/users/vishalsrao/following{/other_user}", "gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions", "organizations_url": "https://api.github.com/users/vishalsrao/orgs", "repos_url": "https://api.github.com/users/vishalsrao/repos", "events_url": "https://api.github.com/users/vishalsrao/events{/privacy}", "received_events_url": "https://api.github.com/users/vishalsrao/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi! Yes, that makes sense! Would you like to submit a PR to fix this?", "Created PR https://github.com/huggingface/datasets/pull/3971" ]
created_at: "2022-03-18T06:13:23"
updated_at: "2022-04-12T14:41:58"
closed_at: "2022-04-12T14:41:58"
author_association: CONTRIBUTOR
active_lock_reason: null
body: If a FAISS index has fewer records than the requested number of top results (k), then it returns -1 in indices for the additional positions. The get_nearest_examples method only filters out the extra results from the dataset samples. It would be better to filter out extra scores too. Reference: https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/search.py#L693
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/3961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3961/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false
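The body of issue 3961 above is concrete enough to illustrate. A minimal sketch of the filtering it asks for, assuming a `(scores, indices)` pair shaped like the output of `faiss.Index.search`; the helper name `filter_padded_results` is hypothetical, not the actual patch merged in PR 3971:

```python
import numpy as np

def filter_padded_results(scores: np.ndarray, indices: np.ndarray):
    """Drop the padding FAISS emits when the index holds fewer than k vectors.

    FAISS marks missing neighbors with -1 in `indices` and filler values in
    `scores`; the point of the issue is that both arrays should be masked
    together. Hypothetical helper, not the code from PR 3971.
    """
    keep = indices != -1
    return scores[keep], indices[keep]

# An index with only 3 vectors queried with k=5 pads two positions with -1.
scores = np.array([0.1, 0.4, 0.9, 3.4e38, 3.4e38], dtype=np.float32)
indices = np.array([2, 0, 1, -1, -1], dtype=np.int64)
scores, indices = filter_padded_results(scores, indices)
assert indices.tolist() == [2, 0, 1] and len(scores) == 3
```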
url: https://api.github.com/repos/huggingface/datasets/issues/3960
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3960/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3960/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3960/events
html_url: https://github.com/huggingface/datasets/issues/3960
id: 1,173,148,884
node_id: I_kwDODunzps5F7NTU
number: 3,960
title: Load local dataset error
{ "login": "TXacs", "id": 60869411, "node_id": "MDQ6VXNlcjYwODY5NDEx", "avatar_url": "https://avatars.githubusercontent.com/u/60869411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TXacs", "html_url": "https://github.com/TXacs", "followers_url": "https://api.github.com/users/TXacs/followers", "following_url": "https://api.github.com/users/TXacs/following{/other_user}", "gists_url": "https://api.github.com/users/TXacs/gists{/gist_id}", "starred_url": "https://api.github.com/users/TXacs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TXacs/subscriptions", "organizations_url": "https://api.github.com/users/TXacs/orgs", "repos_url": "https://api.github.com/users/TXacs/repos", "events_url": "https://api.github.com/users/TXacs/events{/privacy}", "received_events_url": "https://api.github.com/users/TXacs/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
[ "Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n```\r\n\r\n\r\nLet us know if that resolves the issue.", "> Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n> \r\n> ```python\r\n> >>> from datasets import load_dataset\r\n> >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n> >>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n> ```\r\n> \r\n> Let us know if that resolves the issue.\r\n\r\nSorry, replied late.\r\nThanks a lot! It's worked for me. But it seems much slower than before, and now gets stuck.....\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\nResolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1281167/1281167 [00:02<00:00, 437283.97it/s]\r\nResolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 50001/50001 [00:00<00:00, 89094.29it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nDownloading and preparing dataset image_folder/default to ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091...\r\nDownloading data files #0: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:00<00:00, 82289.56obj/s]\r\nDownloading data files #1: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:01<00:00, 73559.11obj/s]\r\nDownloading data files #2: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:00<00:00, 81600.46obj/s]\r\nDownloading data files #3: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:01<00:00, 79691.56obj/s]\r\nDownloading data files #4: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:00<00:00, 82341.37obj/s]\r\nDownloading data files #5: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:01<00:00, 75784.46obj/s]\r\nDownloading data files #6: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:00<00:00, 81466.18obj/s]\r\nDownloading data files #7: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:00<00:00, 82320.27obj/s]\r\nDownloading data files #8: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:01<00:00, 78094.00obj/s]\r\nDownloading data files #9: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:00<00:00, 84057.59obj/s]\r\nDownloading data files #10: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:00<00:00, 83082.31obj/s]\r\nDownloading data files #11: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:01<00:00, 79944.21obj/s]\r\nDownloading data files #12: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:00<00:00, 84569.77obj/s]\r\nDownloading data files #13: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:00<00:00, 84949.63obj/s]\r\nDownloading data files #14: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80073/80073 [00:00<00:00, 80666.53obj/s]\r\nDownloading data files #15: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 80072/80072 [00:01<00:00, 76723.20obj/s]\r\n^[[Bloading data files #8: 94%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 75061/80073 [00:00<00:00, 82609.89obj/s]\r\nDownloading data files #9: 85%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 68120/80073 [00:00<00:00, 83868.54obj/s]\r\nDownloading data files #9: 
96%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 76784/80073 [00:00<00:00, 84722.34obj/s]\r\nDownloading data files #10: 75%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 59995/80073 [00:00<00:00, 84148.19obj/s]\r\nDownloading data files #10: 97%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 77412/80073 [00:00<00:00, 85724.53obj/s]\r\nDownloading data files #11: 71%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 57032/80073 [00:00<00:00, 79930.58obj/s]\r\nDownloading data files #11: 92%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 73277/80073 [00:00<00:00, 78091.27obj/s]\r\nDownloading data files #12: 86%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 69125/80073 [00:00<00:00, 84723.02obj/s]\r\nDownloading data files #12: 97%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 77803/80073 [00:00<00:00, 85351.59obj/s]\r\nDownloading data files #13: 75%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 60356/80073 [00:00<00:00, 84833.35obj/s]\r\nDownloading data files #13: 97%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 77368/80073 [00:00<00:00, 84475.10obj/s]\r\nDownloading data files #14: 
72%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 57751/80073 [00:00<00:00, 80727.33obj/s]\r\nDownloading data files #14: 92%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 74022/80073 [00:00<00:00, 78703.16obj/s]\r\nDownloading data files #15: 78%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 62724/80072 [00:00<00:00, 78387.33obj/s]\r\nDownloading data files #15: 99%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 78933/80072 [00:01<00:00, 79353.63obj/s]\r\n```", "Wait a long time, it completed. I don't know why it's so slow...", "You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.", "> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nThanks!It's worked well.", "> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nI find current `load_dataset` loads ImageNet still slowly, even add `ignore_verifications=True`.\r\nFirst loading, it costs about 20 min in my servers.\r\n```\r\nreal\t19m23.023s\r\nuser\t21m18.360s\r\nsys\t7m59.080s\r\n```\r\n\r\nSecond reusing, it costs about 15 min in my servers.\r\n```\r\nreal\t15m20.735s\r\nuser\t12m22.979s\r\nsys\t5m46.960s\r\n```\r\n\r\nI think it's too much slow, is there other method to make it faster?", "And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n```python\r\ndef collate_fn(examples):\r\n pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n labels = torch.tensor([example[\"labels\"] for example in examples])\r\n return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n```\r\nHow to know the keys of example?", "Loading the image files slowly, is it because the multiple processes load files at the same time?", "Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs. \r\n\r\n> And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? 
Like the `collect_fn`\r\n> \r\n> ```python\r\n> def collate_fn(examples):\r\n> pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> ```\r\n> \r\n> How to know the keys of example?\r\n\r\nWhat do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\n", "> Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.\r\n> \r\n> > And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> > ```python\r\n> > def collate_fn(examples):\r\n> > pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> > labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> > return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > How to know the keys of example?\r\n> \r\n> What do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\nThanks for your reply!\r\n\r\n1. I did not record the second output, so I run it again. \r\n```\r\n(merak) txacs@master:/dat/txacs/test$ time python test.py \r\nResolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1281167/1281167 [00:02<00:00, 469497.89it/s]\r\nResolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 50001/50001 [00:00<00:00, 70123.73it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nReusing dataset image_folder 
(./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091)\r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:10<00:00, 5.37s/it]\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-cd3fbdc025e03f8c.arrow\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-b5a9de701bbdbb2b.arrow\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 1281167\r\n })\r\n validation: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 50000\r\n })\r\n})\r\n\r\nreal\t10m10.413s\r\nuser\t9m33.195s\r\nsys\t2m47.528s\r\n```\r\nAlthough it cost less time than the last, but still slowly.\r\n\r\n2. Sorry, forgive my poor statement. I solved it, updating to new script 'run_image_classification.py'.", "Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"Λ™`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.", "> Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"Λ™`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.\r\n\r\nSounds good! The main position, which costs long time, is from program starting to `\"Resolving data files\"`. I hope you can solve it early, thanks!", "I'm getting this problem. Script has been stuck at this part for the past 15 or so minutes:\r\n \r\n`Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 107/107 [00:00<00:00, 472.74it/s]`\r\n\r\nI had everything working fine on an AWS EC2 node with a single GPU. Then I created an image based on the single GPU machine, and spun up a new one with 4 GPUs, so I got all of the training data ready at .cache. \r\n\r\nTurned off all checks with `verification_mode='no_checks'`. 
Logged in with huggingface-cli again just to be sure.\r\n\r\nInterrupting shows the code is stuck here:\r\n\r\n```\r\nFile \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 200, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 336, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 357, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py\", line 1059, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py\", line 66, in _memory_mapped_arrow_table_from_file\r\n pa_table = opened_stream.read_all()\r\n```\r\n\r\nIs it just going to take a while or am I going to run out of money? :sweat_smile: \r\n\r\nedit: ping @mariosasko " ]
"2022-03-18T03:32:49"
"2023-08-02T17:12:20"
null
NONE
null
body: When I used datasets==1.11.0, everything was fine. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[]
https://huggingface.co/datasets/nateraw/image-folder/resolve/main/
/dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
    **config_kwargs,
  File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
    **config_kwargs,
  File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
    super().__init__(*args, **kwargs)
  File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
    sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
  File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
    if not isinstance(patterns_for_key, DataFilesList)
  File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
    data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
  File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
    for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
  File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
    raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3960/timeline
null
null
null
null
false
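One question in the thread above, "How to know the keys of example?", reduces to inspecting a dataset's columns before writing a `collate_fn`. A minimal sketch, assuming an `imagefolder`-style dataset is being loaded; the path is illustrative:

```python
from datasets import load_dataset

# Illustrative path; substitute your own image directory.
ds = load_dataset("imagefolder", data_files={"train": ["/path/to/train/**"]})

# A split's column names are exactly the keys each example dict will carry.
print(ds["train"].column_names)  # e.g. ['image', 'label']
print(ds["train"][0].keys())     # the same keys, read off a concrete example
```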
url: https://api.github.com/repos/huggingface/datasets/issues/3959
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3959/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3959/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3959/events
html_url: https://github.com/huggingface/datasets/issues/3959
id: 1,172,872,695
node_id: I_kwDODunzps5F6J33
number: 3,959
title: Medium-sized dataset conversion from pandas causes a crash
{ "login": "Antymon", "id": 641005, "node_id": "MDQ6VXNlcjY0MTAwNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/641005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Antymon", "html_url": "https://github.com/Antymon", "followers_url": "https://api.github.com/users/Antymon/followers", "following_url": "https://api.github.com/users/Antymon/following{/other_user}", "gists_url": "https://api.github.com/users/Antymon/gists{/gist_id}", "starred_url": "https://api.github.com/users/Antymon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Antymon/subscriptions", "organizations_url": "https://api.github.com/users/Antymon/orgs", "repos_url": "https://api.github.com/users/Antymon/repos", "events_url": "https://api.github.com/users/Antymon/events{/privacy}", "received_events_url": "https://api.github.com/users/Antymon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "Hi ! It looks like an issue with pyarrow, could you try updating pyarrow and try again ?", "@albertvillanova did you find a solution to this?", "IΒ΄m getting the same problem with some files, @albertvillanova did you find a solution to this?" ]
"2022-03-17T20:20:35"
"2022-12-12T17:14:06"
"2022-04-20T12:35:37"
NONE
null
Hi, I am suffering from the following issue: ## Describe the bug Conversion to arrow dataset from pandas dataframe of a certain size deterministically causes the following crash: ``` File "/home/datasets_crash.py", line 7, in <module> arrow=datasets.Dataset.from_pandas(d) File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 783, in from_pandas table = InMemoryTable.from_pandas( File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/table.py", line 379, in from_pandas return cls(pa.Table.from_pandas(*args, **kwargs)) File "pyarrow/table.pxi", line 1487, in pyarrow.lib.Table.from_pandas File "pyarrow/table.pxi", line 1532, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 1181, in pyarrow.lib.Table.validate File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458) ``` ## Steps to reproduce the bug I have a dataset made from replicated single example mocking a dict representation of a publication. I copy over this example 140k times and create a pandas frame. I use 'Dataset.from_pandas' and boom ```python # Sample code to reproduce the bug import copy import datasets import pandas # serialized dict is quite long to be realistic representation of a publication content paper_as_dict=eval("{'article_id': '2020-11-05T14:25:05.321Z02bc3286-91b7-486a-9c74-4f457fbc586a', 'sections': [{'section_id': 'body.0', 'paragraphs': [{'sentences': ['11010111001000000011010011110011101110111011000100001010011100101001111010110111101011101111101010101110001111011110111010111', '1101100110110010010101010100110011000111001100100000011100010111010000011100001101111000000011010111001111001010101111110011010010111011000110100110010', '101011011000010100000010011001011011000000110011011110000101001110110000010001100110111100011100110101010010110000101', '1101101110101010101000000010101011111001111000101000110001110100111000100000011001110100110000110100111011001010110011101001001110']}]}, {'section_id': 'body.1', 'paragraphs': [{'sentences': ['11111100100100111000101001011110100110011001011011001001100110100111011010000110011000010001010100101110001001101011110111110101111100001001001000011110110010110011100110110111110011100011111000101010111010101011001110000100000001001010010010011101111100011010', '10101000110000110111110011101111000101010010001001010000001111001100000010001000001110111110010011101000000111011', '111010011111101111110011111110110001000111100101001000100110101111110000111000111111110000101001101000110011010111011101001010110110001000100000001110001111100110110001110001001100011010100110100010100111000110110100010010100101011110000110000101010010001110101100000']}, {'sentences': ['111110011110110110001111001101011110010110100011101010110101011001101110110111100000111101010110011110111101001111000101110001001010010101100111111001001000011101000100110000101', '011101101101111101001100101010000010111101100101110100101000001100010100110011010010100001101001110111100011010011011111000111111101110001010111010011010110001000010101100110000100010110101110110011001010011001100111101100001001', '1110001011011010101001100001110001110001000111111111101110100001011101101001110100000110000011010001101010101110101110101101001010100100010000000010110010010010', '11101111000111111100111110010000111101110010010101001111011001111110011000011100110001010010000100101010', 
'111000110110110010101100010010100001100100110010101000001000011101000100101011011010000011001011011111001101100001110010100001111110111001001010101100100110001011011100000101010010000000001100010000101100110110111101110010100010011101110110111010011011000011001010111011100000000010101001011000100000011010100011101001011001010010011110100100']}, {'sentences': ['001101111100001101001001001110000110010101011101001001111111011000111001111011101011110111000000100001110110101110001010001111110100010', '0000110010110101001100011011000011001101001110001000000110010101000011101011110110000000100111000001010000101011111011110001001100001110101010101110101011111000000011001111011110001010010111010000100100000001111001011100101111010101111001001101100101001101111000111011010110010001010010010111010000001101101111100101000111101011001000101', '00000101100101100111101010000101011100101100001100011001100100001100001010001010010011001001111001000010100010000110100111110000001000101000111100010111110011000100000111100010000100010111100010101', '111100110010100110000010010101010101110011110100000101110000000111010101111001011110010101001110000001001000010110010010011110111110010110100101110011001101110111001111100011100100011110010010100101011111111']}, {'sentences': ['1100001110101111000001011001100110001011100011110110010011001000101000011110010101010011011000111010000101010011010000000111011001000010100101000011111101000000000101111000', '1110101000100110001111000011000101110111001100101010011001100011010011111111111010101011010101010011000101001100100000110010100110110110110001101100', '00010001100100101100100111111110111111101000100110101111101111110101110001010001011100000000000011010101101001111010001110101101110011001011111101110100010000111101', '011100011101011001000110010110100100000010100010010110011000000010101110011111111101010010010001100110101010010001100010110011110001011011101010111111100100110110010111101001100101010111001', '10111000011010101111110110011010101011111001000001010010111111010010111111100100010100110100101101110100110011001000110100000111000100110000001000111010', '0010011111111011100111010001111001011101001010000010110000010111000101001101000011101110100100000000100100010010101010100011100101001000100110110000010111111110000011011101111000111010']}]}, {'section_id': 'body.2.0', 'paragraphs': [{'sentences': ['110010010011001110100100011001111100010011110111101011011011001010010010010011101011', '000110101110011011101011000000100011111000001100011011110101101011000110011010001010001101101100000111100101001011111001001101111', '1000011100100000100100100010010000111011000100110010000011110111100110110001101001010100011111010100101000111', '11110111111000110010000000000100010010110001100010001010000111011000101100011010010101110110011010110101001101110011101011101100000001000100101011010110110100101011101010010101101000011110000010101011001011000001000000001010110000100010000100011110101001111100001000100000111000001010011111111110101010100011011000010000111000110', '1001000111011000111110001111111001100001000000101000111011101101100101010110001101000000001111010111100011111000000100001001110', '100110010111010101111010100000010001110101111001010010001100001110100100100101110011010101001000100101000100100011001110001100111000010010011011000010011010010000110001000000100011110010110110011010001100111010111110011']}, {'sentences': 
['10010101011100010111011111001001001010100011001001111101101001000000001111101110000111101011000001001011101110101001100010010001101111001110000100010010001001101111011111110010011011110011', '110001110010110000101111000000110010010010100000010100001111101101000101100000000110000000011111011001111000010110110001011010011011101100100110011000100110101010111010111111000111001111010110010001001110100001011011000110000000111101110000001111011011101110100000100010000110001000000110100000', '101010000000010000110110111000110000100111000001110100101101101010001010010010101010100111010110001001000101011110010011001001001110111001101101100100011110011011110101100010110111001010000001000110100000001010011111111110111010011110001001110100011011000101011000110110011011010110100100011111111011100111110110000110011011110110110011101010101111001101010110101000000001100101111010000101110', '1010100110111111111000110110111110010100000100001110101110111001011000010001110110001111111110000101001001110010001110000111010101111010111111011100100011100111111101101111000010001100101000010001100110110100110111111100100011001011000001111110010100110111000010011110111011001101100000101011111110101000011000010', '00000001110000101001110101110011101001110011000111111101111101111000010011100000101000001011001110', '101000111010010000011010011010011010010010100010110100011100100111011101010100101110100111010001000000', '01101000110001101011001101100010100011011010000000001010101000010101000110100010000000110001110001010010000000101101000011000100000110011101100001010100011111101010010110001101110101010111101100001110000011001101', '0010010111000011110010011110001010100000111100001011010100100010101010010011101101100110001001111001000110000111011110010000110101010110111111010110100000011010001001010001000110001101101000101110001011110000101101110000110010110010111001100010011011100011', '00110111110000000100110111101011000100100110001000001001101011001000010100100001100111100110000110110101111010000010101000000101000011001011101001', '0100100001000111001110110110000001000100111001101101110100100111010111110001110010110111100110011111001001000011101110100101111011000110100000111010011101']}, {'sentences': ['100001001011101111111100110111011110001101111101100001000110110000100101011000000100000', '10101001001111110101001010100110011110101101001']}]}, {'section_id': 'body.2.0.0', 'paragraphs': [{'sentences': ['1110101100001100011000101000010000100010101101010110101011100101110110110111010101001100100000000111011001000100011110101011111010100101001010000010001001101010100011110010101110011001100010000100110011000011101010001000111001000001100', '101000000011001001110101000100101010000111000111100010010001111111100110001100000100011010011010010101101111010101010000110011101001111001111011111001110001010000110101101011101111010000001100', '01100001011110010100000101001101111101010011100010011001011110110010010011100101000', '0011100111000101111000010001111100000111000101110001111010001100001000111010000101100001110101100111111', '00001100000011110001011010010110000000111110110001111000110000011011001110000000100011001010110000010000010001101010101100000010011011000101011111100010010', '1011101011101111000001100100111000011000010010011110011000110111010010111100111101100110011010000110000111000110111110101111000001000010011101111000110000100011110101101101001101000110010000001000010011011010101100', '1000010011100011100000010011011111111110101101111011101010010111000000101011000000110101111000010011', 
'01100000110011001110101111101101011001011101000010001100101010100011010101010100111011011110100010100111', '011011010100011011110010101000110001111110110']}]}, {'section_id': 'body.2.0.1', 'paragraphs': [{'sentences': ['00111011011101000100100111000001101001011000111100100010101001010011001011000010011111001100000100010001100101110011001000110001101011010111011111011000010011010010111010011111101000110111011100010011100111111110110111011', '011011010101101101010000001011010110011111011110100111010101010110001101000010011111000011100', '110001000110010000000111101110111110101110111000101000010001110101000101001000111000010001011101010000110001010001101001001110111110111010111010011101000101101010000', '001000111110100110000001111100000111001110111001110111001000111010001001100111001101000001001001010111000111011100001111011001111110001011000111110011111101011101000100101001111011100001000110101010101111111110011111111011000101110001000000000100111011111011001100111', '11010101100010010100010010010101001011001011000001100010101111111101001101110011001010010100000111010101', '01110000110011111000110010011010000011100000010010001111100010010100100001011011111110001100', '011101111100011101100111110101111001101010010001001110101100001101000000111000']}]}, {'section_id': 'body.2.0.2', 'paragraphs': [{'sentences': ['0111011000110100110000001011001110111000011110100111011000000001000010001111111001101111011100101110101101000111000101000010000111011010110000011101111110111110100111000111000011', '00100110111000110101100111000110100010011010010101001010011000000101000110100110011010011111000100000011000000010001010000100111101011111111101010001111010000001011100001110100000101001101101010011011101000', '000001110001010010100101010100010101001100011001001101101101110111011111101010010111010110110111011110101100001000011110111011001', '0001110010111110100110110011000001111100100100110101011010010101010100101000010101000100101000011011', '1000010010010101001100101110010111010100000110101110000000111001111111001011111010000011110001011001001001000101', '0001111100111010010100010111010110011011000000001111010010110001000011010001100111101110001110000011010101111100001000011010110100000100100001111011110110000000101000010001111001010010110101110111101101110111000100', '1000101100001000100001101110111110000100000001000010101111010011010010010111011010100011001000100100001010001100110']}]}, {'section_id': 'body.2.0.3', 'paragraphs': [{'sentences': ['1010100111100011110110101011100001011010011010100100010011000110111000001010010110111001001101111000010100100110101001010001010001000110010000001', '100010101010100111000011111101010100101110011000100011100100100111000010000011001010010111011010000101010011011110111001010110', '0110000110110110110011011000011010010000001010011000010001011110110010000100011111010100110111111010010111000101111', '10100100000011100010110110011111011011101101111000001001010100001001011010000011001010101100000', '1011111111100001001100000010000100110010101000010100111111110010110011101110000101101011101', '10001111110000011100100000101100000000010000100000011100110000011110111010011101010111101001111000100000000110000011010010001100110111100001001011101011001111110010100111001001010001010011010010010111001101110101110000101011', '101101111111101101010010000110111110000110000111001001010011111101011001011010101100010100110101101011100111100100110010001011110001110010000011101100100100001001110010000010011111100110101']}]}, {'section_id': 'body.2.1', 'paragraphs': [{'sentences': 
['1010010011010011001111111001000110010001101111101011001011011000101001010101010001000110100011110101110001110110111010010010100100111000101100100101111110100000011111001101010111101010100101011011110111111110', '000010101101111100000110010110011001111100001101011101000100010001001001000000101101000001110000011010111100000010010000010101110101100010011000101110110111111001000101000111000110100001001100001010101010100011', '0000000011101110111100100010111100101010110001111101110110010000100100010000101001101111001111001001100110010011010000101001110010000000100101011101001010100100011101101001011000010111110100101010110110011001110000110010010111110110101100001011101001100111010001000010111010001010000100010010011110111100110011100011111101101000011100111110101010100110001100100000100011011010111000111110010110100010111101001001101000001100100010000111110000011101111100111101000000000']}, {'sentences': ['01011000010110011000000101101000110101011010100111011001001001100001101101111101111001101111100101111001101011011001011110110110110100001100111111010100101110111111101000101100101010110011111011100101101010100110111001111100100011001110011101000110100000001100001100110001110101001000011010000110101011010000001111100100000100101110011000001001010011011101100011000001100000011', '1001100000101000000011110100110001100001101001100011010000111111010110101111001000100111000011010100100000110110001', '10010011000110110111010110000010010000000111101000100101100111101101001100111110101001001111100001110011110000010101000001000000010100011011110011000100110101001100110111111001101000011010100110000000011110001000101010101000110010010']}]}, {'section_id': 'body.2.2', 'paragraphs': [{'sentences': ['000011000000010011000001101111000101000111111111111010001011110000011001010111010101010110001111110000010', '10101001101011101010001111011000110100000100011110010001100111111101101100010010111110110101101011000011000001101110010111011111100111110000000101110010111', '100001011110010111010110001101101001100000000001000010110101011001111100101101101111010010111111000000111001111010011111000100010001111011110001010000110010101010111110100101011011100001010101000001011011111111101', '1000110111111011101000110101001111111111000100011001000011010100001010011110001111010011011111000111011100101001011111001000010101110110101000111011111111010010001101001010110111000011110101011000010000110', '1011100000100000010101101111001001100110111000010001011010111111000000001010101001111011101011010101101001111101101100101001011101000011011010001001101100100111101111111100010011010101111011100001100001000100101100100110101000010000011000000011001100000110000001', '0001001101111001111111010000001101010110110110100110110100000100110101101010010101011000010010111011000010111110000001110101110111000010011000100110111001000111011000100101110111111', '0110010010011000011010001111001100101001100001001000010100101100010110000000101010110001001010001100111101010001110010010000111011100101101010111111101001100010001011100110010100110111010101000100001110000101110011111011111000010101010110101100010010111100100010010100111110111100101010100011101001110110010000011110001010101010000100010000100111001111011101', '000001010000010001100000101011000000110101000100010111111100101111111000110111001001110110101111110011100001001000011001010000011011', '0101101001010101001101010100011000111011001000100001110100110011100000001001010110001101010110011100111111100101101111101111011001111111110010111010011011011111011011110000101011010', 
'11000001110111000001100100001110000111001010000101011011101010111001011100010010010111111111000011111110010111100011100110001001100011111010100111110111001110010', '0100010110100001010101110111100011100100010111111011101001100101111110101011010010101111001000101001111000001110001100011001110010100110101100110100100000001010101101011110011001000101100111001001001110100', '100000100010011111001101010000100110011110001100000010010110110100000111111011010100101111010111001110101000100001111101001110000011010110000010100', '00100110000011100101000110110001000011101000011010101000010001111011100001111111001011100111101000001000000110110001000101111010010010001100111', '0110110100011001110011001111100010101001011111011001011001101101010010101101110101010100001000100100000111101110001001110111000110011101101010100000101', '0011111010010011011101010110100110000011000011100100101011011001110110001110001111000011010111011000110100111111011101110111000010010000011011010011011100000011101100110110100100000010110101110100110101001100111011101001010111011011110100110101110010011011010001010111110011001000010100010101010010110010010110000100110001000011010011000100101011010100100111010']}]}]}") d=pandas.DataFrame.from_records(copy.deepcopy(paper_as_dict) for _ in range(140_100)) arrow=datasets.Dataset.from_pandas(d) ``` ## Expected results The dataset should be converted without error. ## Actual results Error `pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets==1.18.4 pandas==1.3.5 - Platform: macOS 11.6 or CentOS Linux 7 (Core) - Python version: Python 3.9.7 - PyArrow version: pyarrow==3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3959/timeline
null
completed
null
null
false
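The only fix suggested in issue 3959's thread is upgrading pyarrow. If that is not an option, one common way to sidestep conversion bugs that only trigger at scale is to convert the frame in slices and concatenate the pieces; a hedged sketch, not a confirmed fix for this particular pyarrow validation error:

```python
import datasets
import pandas as pd

def from_pandas_chunked(df: pd.DataFrame, chunk_size: int = 10_000) -> datasets.Dataset:
    """Convert a DataFrame slice by slice, then concatenate the results."""
    parts = [
        datasets.Dataset.from_pandas(df.iloc[i : i + chunk_size], preserve_index=False)
        for i in range(0, len(df), chunk_size)
    ]
    return datasets.concatenate_datasets(parts)
```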
url: https://api.github.com/repos/huggingface/datasets/issues/3958
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3958/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3958/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3958/events
html_url: https://github.com/huggingface/datasets/pull/3958
id: 1,172,657,981
node_id: PR_kwDODunzps40nQU2
number: 3,958
title: Update Wikipedia metadata
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3958). All of your documentation changes will be reflected on that endpoint.", "Once this last PR validated, I can take care of the integration of all the wikipedia update branch into master, @lhoestq. " ]
"2022-03-17T17:50:05"
"2022-03-21T12:26:48"
"2022-03-21T12:26:47"
MEMBER
null
This PR updates: - dataset card - metadata JSON
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3958/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3958", "html_url": "https://github.com/huggingface/datasets/pull/3958", "diff_url": "https://github.com/huggingface/datasets/pull/3958.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3958.patch", "merged_at": "2022-03-21T12:26:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/3957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3957/comments
https://api.github.com/repos/huggingface/datasets/issues/3957/events
https://github.com/huggingface/datasets/pull/3957
1,172,401,455
PR_kwDODunzps40magW
3,957
Fix xtreme s metrics
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sorry for the commit history mess, but will be squashed anyways so should be fine", "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-17T13:39:04"
"2022-03-18T13:46:19"
"2022-03-18T13:42:16"
CONTRIBUTOR
null
We do in fact need BABEL in xtreme-s.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3957/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3957", "html_url": "https://github.com/huggingface/datasets/pull/3957", "diff_url": "https://github.com/huggingface/datasets/pull/3957.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3957.patch", "merged_at": "2022-03-18T13:42:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/3956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3956/comments
https://api.github.com/repos/huggingface/datasets/issues/3956/events
https://github.com/huggingface/datasets/issues/3956
1,172,272,327
I_kwDODunzps5F33TH
3,956
TypeError: __init__() missing 1 required positional argument: 'scheme'
{ "login": "amirj", "id": 1645137, "node_id": "MDQ6VXNlcjE2NDUxMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/1645137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amirj", "html_url": "https://github.com/amirj", "followers_url": "https://api.github.com/users/amirj/followers", "following_url": "https://api.github.com/users/amirj/following{/other_user}", "gists_url": "https://api.github.com/users/amirj/gists{/gist_id}", "starred_url": "https://api.github.com/users/amirj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amirj/subscriptions", "organizations_url": "https://api.github.com/users/amirj/orgs", "repos_url": "https://api.github.com/users/amirj/repos", "events_url": "https://api.github.com/users/amirj/events{/privacy}", "received_events_url": "https://api.github.com/users/amirj/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @amirj, thanks for reporting.\r\n\r\nAt first sight, your issue seems a version incompatibility between your Elasticsearch client and your Elasticsearch server.\r\n\r\nFeel free to have a look at Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility\r\n> Language clients are forward compatible; meaning that clients support communicating with greater or equal minor versions of Elasticsearch. Elasticsearch language clients are only backwards compatible with default distributions and without guarantees made.", "@albertvillanova It doesn't seem a version incompatibility between the client and server, since the following code is working:\r\n\r\n```\r\nfrom elasticsearch import Elasticsearch\r\nes_client = Elasticsearch(\"http://localhost:9200\")\r\ndataset.add_elasticsearch_index(column=\"e1\", es_client=es_client, es_index_name=\"e1_index\")\r\n```", "Hi @amirj, \r\n\r\nI really think it is a version incompatibility issue between your Elasticsearch client and server:\r\n- Your Elasticsearch server NodeConfig expects a positional argument named 'scheme'\r\n- Whereas your Elasticsearch client passes only keyword arguments: `NodeConfig(**options)`\r\n\r\nMoreover:\r\n- Looking at your stack trace, I deduce you are using Elasticsearch client **\"8\"** major version:\r\n - the Elasticsearch file \"elasticsearch/_sync/client/utils.py\" was created in version \"8.0.0a1\": https://github.com/elastic/elasticsearch-py/commit/21fa13b0f03b7b27ace9e19a1f763d40bd2e2ba4\r\n - you can check your Elasticsearch client version by running this Python code:\r\n ```python\r\n import elasticsearch\r\n print(elasticsearch.__version__)\r\n ```\r\n\r\n- However, in the *Environment info*, you informed that the major version of your Eleasticsearch cluster server is **\"7\"** (\"7.10.2-SNAPSHOT\")\r\n\r\nCould you please align the Elasticsearch client/server major versions (as pointed out in Elasticsearch docs) and check if the problem persists?", "I'm closing this issue, @amirj.\r\n\r\nFeel free to re-open it if the problem persists. 
\r\n\r\n", "```\r\nfrom elasticsearch import Elasticsearch\r\nes = Elasticsearch([{'host': 'localhost', 'port': 9200}])\r\n```\r\n```\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-8-675c6ffe5293> in <module>\r\n 1 #es = Elasticsearch([{'host':'localhost', 'port':9200}])\r\n 2 from elasticsearch import Elasticsearch\r\n----> 3 es = Elasticsearch([{'host': 'localhost', 'port': 9200}])\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)\r\n 310 \r\n 311 if _transport is None:\r\n--> 312 node_configs = client_node_configs(\r\n 313 hosts,\r\n 314 cloud_id=cloud_id,\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in client_node_configs(hosts, cloud_id, **kwargs)\r\n 99 else:\r\n 100 assert hosts is not None\r\n--> 101 node_configs = hosts_to_node_configs(hosts)\r\n 102 \r\n 103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in hosts_to_node_configs(hosts)\r\n 142 \r\n 143 elif isinstance(host, Mapping):\r\n--> 144 node_configs.append(host_mapping_to_node_config(host))\r\n 145 else:\r\n 146 raise ValueError(\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in host_mapping_to_node_config(host)\r\n 209 options[\"path_prefix\"] = options.pop(\"url_prefix\")\r\n 210 \r\n--> 211 return NodeConfig(**options) # type: ignore\r\n 212 \r\n 213 \r\n\r\nTypeError: __init__() missing 1 required positional argument: 'scheme'\r\n```", "I am facing the same issue, and version is same for the both i.e(8.1.3)", "@raj713335, thanks for reporting.\r\n\r\nPlease note that in your code example, you are not using our `datasets` library. \r\n\r\nThus, I think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py\r\n\r\n", "it is simple hack which shock you just replace https to http in scheme\r\n\r\n**In My Case:** ->\r\n\r\n`es = Elasticsearch([{'host': 'localhost', 'port': 9200, \"scheme\": \"http\"}])\r\n if es.ping():\r\n print('Connected to ES!')\r\n else:\r\n print('Could not connect!')\r\n sys.exit()`" ]
"2022-03-17T11:43:13"
"2023-11-21T04:26:20"
"2022-03-28T08:00:01"
NONE
null
## Describe the bug Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch), the provided code should add an Elasticsearch index, but it raised the following error instead; probably the new Elasticsearch version is not compatible, though the tutorial doesn't provide any information about the supported Elasticsearch version. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset squad = load_dataset('squad', split='validation') squad.add_elasticsearch_index("context", host="localhost", port="9200") ``` ## Expected results [Creating an Elasticsearch index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) ## Actual results ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-8fb51aa33961> in <module> 1 from datasets import load_dataset 2 squad = load_dataset('squad', split='validation') ----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200") ~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config) 3777 """ 3778 with self.formatted_as(type=None, columns=[column]): -> 3779 super().add_elasticsearch_index( 3780 column=column, 3781 index_name=index_name, ~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config) 587 """ 588 index_name = index_name if index_name is not None else column --> 589 es_index = ElasticSearchIndex( 590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config 591 ) ~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config) 123 from elasticsearch import Elasticsearch # noqa: F811 124 --> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}]) 126 self.es_index_name = ( 127 es_index_name ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport) 310 311 if _transport is None: --> 312 node_configs = client_node_configs( 313 hosts, 314 cloud_id=cloud_id, ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs) 99 else: 100 assert hosts is not None --> 101 node_configs = hosts_to_node_configs(hosts) 102 103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts) 142 143 elif isinstance(host, Mapping): --> 144 node_configs.append(host_mapping_to_node_config(host)) 145 else: 146 raise ValueError( ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host) 209 options["path_prefix"] = options.pop("url_prefix") 210 --> 211 return NodeConfig(**options) # type: ignore 212 213 TypeError: __init__() missing 1 required positional argument: 'scheme' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Mac - Python version: 3.8.0 - PyArrow version: 7.0.0 - Elasticsearch Info: { "name" : "byname", "cluster_name" : "elasticsearch_brew", "cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA", "version" : { "number" : "7.10.2-SNAPSHOT", "build_flavor" : "oss", "build_type" : "tar", "build_hash" : "unknown", "build_date" : "2021-01-16T01:41:27.115673Z", "build_snapshot" : true, "lucene_version" : "8.7.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }
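For reference, both instantiation styles below work once the client and server major versions match; this is a minimal sketch assuming an 8.x client against a local server reachable over plain HTTP (the explicit "scheme" key is what the 8.x client requires when hosts are passed as mappings):

```python
from elasticsearch import Elasticsearch

# A URL string carries the scheme implicitly:
es = Elasticsearch("http://localhost:9200")

# A host mapping must name the scheme explicitly under the 8.x client:
es = Elasticsearch([{"host": "localhost", "port": 9200, "scheme": "http"}])
```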
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3956/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3956/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3955/comments
https://api.github.com/repos/huggingface/datasets/issues/3955/events
https://github.com/huggingface/datasets/pull/3955
1,172,246,647
PR_kwDODunzps40l5kG
3,955
Remove unnecessary 'pylint disable' message in ReadMe
{ "login": "Datta0", "id": 39181234, "node_id": "MDQ6VXNlcjM5MTgxMjM0", "avatar_url": "https://avatars.githubusercontent.com/u/39181234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Datta0", "html_url": "https://github.com/Datta0", "followers_url": "https://api.github.com/users/Datta0/followers", "following_url": "https://api.github.com/users/Datta0/following{/other_user}", "gists_url": "https://api.github.com/users/Datta0/gists{/gist_id}", "starred_url": "https://api.github.com/users/Datta0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Datta0/subscriptions", "organizations_url": "https://api.github.com/users/Datta0/orgs", "repos_url": "https://api.github.com/users/Datta0/repos", "events_url": "https://api.github.com/users/Datta0/events{/privacy}", "received_events_url": "https://api.github.com/users/Datta0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-17T11:16:55"
"2022-04-12T14:28:35"
"2022-04-12T14:28:35"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3955/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3955", "html_url": "https://github.com/huggingface/datasets/pull/3955", "diff_url": "https://github.com/huggingface/datasets/pull/3955.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3955.patch", "merged_at": "2022-04-12T14:28:35" }
true
https://api.github.com/repos/huggingface/datasets/issues/3954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3954/comments
https://api.github.com/repos/huggingface/datasets/issues/3954/events
https://github.com/huggingface/datasets/issues/3954
1,172,141,664
I_kwDODunzps5F3XZg
3,954
The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
{ "login": "MatanBenChorin", "id": 49593805, "node_id": "MDQ6VXNlcjQ5NTkzODA1", "avatar_url": "https://avatars.githubusercontent.com/u/49593805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MatanBenChorin", "html_url": "https://github.com/MatanBenChorin", "followers_url": "https://api.github.com/users/MatanBenChorin/followers", "following_url": "https://api.github.com/users/MatanBenChorin/following{/other_user}", "gists_url": "https://api.github.com/users/MatanBenChorin/gists{/gist_id}", "starred_url": "https://api.github.com/users/MatanBenChorin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatanBenChorin/subscriptions", "organizations_url": "https://api.github.com/users/MatanBenChorin/orgs", "repos_url": "https://api.github.com/users/MatanBenChorin/repos", "events_url": "https://api.github.com/users/MatanBenChorin/events{/privacy}", "received_events_url": "https://api.github.com/users/MatanBenChorin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @MatanBenChorin, thanks for reporting.\r\n\r\nPlease, take into account that the preview may take some time until it properly renders (we are working to reduce this time).\r\n\r\nMaybe @severo can give more details on this.", "Hi, \r\nThank you", "Thanks for reporting. We are looking at it and will give updates here.", "I imagine the dataset has been moved to https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1, which still has an issue:\r\n\r\n```\r\nServer Error\r\n\r\nStatus code: 400\r\nException: NameError\r\nMessage: name 'HebrewSquad' is not defined\r\n```", "The issue is not related to the dataset viewer but to the loading script (cc @albertvillanova @lhoestq @mariosasko)\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> hf_token = \"hf_...\" # <- required because the dataset is gated\r\n>>> d = ds.load_dataset('tdklab/Hebrew_Squad_v1', use_auth_token=hf_token)\r\n...\r\nNameError: name 'HebrewSquad' is not defined\r\n```", "Yes indeed there is an error in [Hebrew_Squad_v1.py:L40](https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1/blob/main/Hebrew_Squad_v1.py#L40)\r\n\r\nHere is the fix @MatanBenChorin :\r\n\r\n```diff\r\n- HebrewSquad(\r\n+ HebrewSquadConfig(\r\n```" ]
"2022-03-17T09:38:11"
"2022-04-20T12:39:07"
"2022-04-20T12:39:07"
NONE
null
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1' **Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true The dataset preview is not available for this dataset. Am I the one who added this dataset? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3954/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3953/comments
https://api.github.com/repos/huggingface/datasets/issues/3953/events
https://github.com/huggingface/datasets/issues/3953
1,172,123,736
I_kwDODunzps5F3TBY
3,953
Add ImageNet Sketch
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
closed
false
null
[]
null
[ "Can you assign this task to me? @nreimers @mariosasko ", "Hi! Sure! Let us know if you need any pointers." ]
"2022-03-17T09:20:31"
"2022-05-23T18:05:29"
"2022-05-23T18:05:29"
CONTRIBUTOR
null
## Adding a Dataset - **Name:** ImageNet Sketch - **Description:** ImageNet-Sketch is a dataset consisting of sketch-like images, that matches the ImageNet classification validation set in categories and scale. - **Paper:** [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549) - **Data:** https://github.com/HaohanWang/ImageNet-Sketch - **Motivation:** Allows for evaluating the robustness of vision models. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3953/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3953/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3952/comments
https://api.github.com/repos/huggingface/datasets/issues/3952/events
https://github.com/huggingface/datasets/issues/3952
1,171,895,531
I_kwDODunzps5F2bTr
3,952
Checksum error for glue sst2, stsb, rte etc datasets
{ "login": "ravindra-ut", "id": 22090962, "node_id": "MDQ6VXNlcjIyMDkwOTYy", "avatar_url": "https://avatars.githubusercontent.com/u/22090962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ravindra-ut", "html_url": "https://github.com/ravindra-ut", "followers_url": "https://api.github.com/users/ravindra-ut/followers", "following_url": "https://api.github.com/users/ravindra-ut/following{/other_user}", "gists_url": "https://api.github.com/users/ravindra-ut/gists{/gist_id}", "starred_url": "https://api.github.com/users/ravindra-ut/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ravindra-ut/subscriptions", "organizations_url": "https://api.github.com/users/ravindra-ut/orgs", "repos_url": "https://api.github.com/users/ravindra-ut/repos", "events_url": "https://api.github.com/users/ravindra-ut/events{/privacy}", "received_events_url": "https://api.github.com/users/ravindra-ut/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi, @ravindra-ut.\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"glue\", \"sst2\")\r\nDownloading builder script: 28.8kB [00:00, 11.6MB/s] \r\nDownloading metadata: 28.7kB [00:00, 12.9MB/s] \r\nDownloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...\r\nDownloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7.44M/7.44M [00:01<00:00, 5.82MB/s]\r\nDataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. \r\n100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 895.96it/s]\r\n\r\nIn [3]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1821\r\n })\r\n})\r\n``` \r\n\r\nMoreover, I see in your traceback that your error was for an URL at https://firebasestorage.googleapis.com\r\nHowever, the URLs were updated on Sep 16, 2020 (`datasets` version 1.0.2) to https://dl.fbaipublicfiles.com: https://github.com/huggingface/datasets/commit/2f03041a21c03abaececb911760c3fe4f420c229\r\n\r\nCould you please try to update `datasets`\r\n```shell\r\npip install -U datasets\r\n```\r\nand then force redownload\r\n```python\r\nds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\n```\r\nto update the cache?\r\n\r\nPlease, feel free to reopen this issue if the problem persists." ]
"2022-03-17T03:45:47"
"2022-03-17T07:10:15"
"2022-03-17T07:10:14"
NONE
null
## Describe the bug Checksum error for glue sst2, stsb, rte etc. datasets ## Steps to reproduce the bug ```python >>> nlp.load_dataset('glue', 'sst2') Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████| 73.0/73.0 [00:00<00:00, 18.2kB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare self._download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare verify_checksums( File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8'] ``` ## Expected results Dataset load should succeed without a checksum error. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare self._download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare verify_checksums( File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8'] ``` ## Environment info - `datasets` version: '1.18.3' - Platform: Mac OS - Python version: Python 3.8.9 - PyArrow version: '7.0.0'
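As suggested in the resolution above, updating `datasets` and forcing a re-download replaces the stale cached entry that still points at the old firebasestorage URLs; a minimal sketch:

```python
from datasets import load_dataset

# Force a fresh download so the updated URLs and checksums overwrite the stale cache
ds = load_dataset("glue", "sst2", download_mode="force_redownload")
```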
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3952/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3951/comments
https://api.github.com/repos/huggingface/datasets/issues/3951/events
https://github.com/huggingface/datasets/issues/3951
1,171,568,814
I_kwDODunzps5F1Liu
3,951
Forked streaming datasets try to `open` data urls rather than use network
{ "login": "dlwh", "id": 9633, "node_id": "MDQ6VXNlcjk2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dlwh", "html_url": "https://github.com/dlwh", "followers_url": "https://api.github.com/users/dlwh/followers", "following_url": "https://api.github.com/users/dlwh/following{/other_user}", "gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}", "starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dlwh/subscriptions", "organizations_url": "https://api.github.com/users/dlwh/orgs", "repos_url": "https://api.github.com/users/dlwh/repos", "events_url": "https://api.github.com/users/dlwh/events{/privacy}", "received_events_url": "https://api.github.com/users/dlwh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting this second issue as well. We definitely want to make streaming datasets fully working in a distributed setup and with the best performance. Right now it only supports single process.\r\n\r\nIn this issue it seems that the streaming capabilities that we offer to dataset builders are not transferred to the forked process (so it fails to open remote files and start streaming data from them). In particular `open` is supposed to be mocked by our `xopen` function that is an extended open that supports remote files. Let me try to fix this" ]
"2022-03-16T21:21:02"
"2022-06-10T20:47:26"
"2022-06-10T20:47:26"
NONE
null
## Describe the bug Building on #3950, if you bypass the pickling problem you still can't use the dataset: the forked worker processes try to `open` the data URLs as local files instead of streaming them over the network. ## Steps to reproduce the bug ```python from multiprocessing import freeze_support import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets import torch.utils.data # work around #3950 class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset): pass def _ensure_format(v: datasets.IterableDataset) -> datasets.IterableDataset: return TorchIterableDataset(v._ex_iterable, v.info, v.split, "torch", v._shuffling) if __name__ == '__main__': freeze_support() ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) ds = _ensure_format(ds) model = AutoModelForCausalLM.from_pretrained("distilgpt2") Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() ``` ## Expected results I'd expect the dataset to load the URL correctly and produce examples. ## Actual results ``` warnings.warn( ***** Running training ***** Num examples = 8000 Num Epochs = 9223372036854775807 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 1000 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last): File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 22, in <module> Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train for step, inputs in enumerate(epoch_iterator): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__ data = self._next_data() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data return self._process_data(data) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data data.reraise() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise raise exception FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch data.append(next(self.dataset_iter)) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 497, in __iter__ for key, example in self._iter(): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 494, in _iter yield from ex_iterable File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 87, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/Users/dlwh/.cache/huggingface/modules/datasets_modules/datasets/oscar/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2/oscar.py", line 358, in _generate_examples with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: 'https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz' Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll pid, sts = os.waitpid(self.pid, flag) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler _error_if_any_worker_fails() RuntimeError: DataLoader worker (pid 6932) is killed by signal: Terminated: 15. 0%| | 0/1000 [00:02<?, ?it/s] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
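For context, streaming builders depend on `datasets` patching the builder's `open` with an fsspec-backed variant, and that patch is what fails to carry over into the forked workers; the sketch below shows, with plain fsspec and made-up variable names, roughly what the streaming open amounts to for the OSCAR shard in the traceback:

```python
import gzip

import fsspec

url = "https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz"
with fsspec.open(url, "rb") as raw:  # streams over HTTP instead of hitting the local filesystem
    with gzip.open(raw, "rt", encoding="utf-8") as f:
        print(f.readline())  # first line of the remote shard
```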
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3951/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3950/comments
https://api.github.com/repos/huggingface/datasets/issues/3950/events
https://github.com/huggingface/datasets/issues/3950
1,171,560,585
I_kwDODunzps5F1JiJ
3,950
Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1
{ "login": "dlwh", "id": 9633, "node_id": "MDQ6VXNlcjk2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dlwh", "html_url": "https://github.com/dlwh", "followers_url": "https://api.github.com/users/dlwh/followers", "following_url": "https://api.github.com/users/dlwh/following{/other_user}", "gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}", "starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dlwh/subscriptions", "organizations_url": "https://api.github.com/users/dlwh/orgs", "repos_url": "https://api.github.com/users/dlwh/repos", "events_url": "https://api.github.com/users/dlwh/events{/privacy}", "received_events_url": "https://api.github.com/users/dlwh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too\r\n\r\nWe should definitely make `TorchIterableDataset` picklable by moving it in the main code instead of inside a function. If you'd like to contribute, feel free to open a Pull Request :)\r\n\r\nI'm also taking a look at your second issue, which is more technical" ]
"2022-03-16T21:14:11"
"2022-06-10T20:47:26"
"2022-06-10T20:47:26"
NONE
null
## Describe the bug Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash. ## Steps to reproduce the bug ```python import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch") model = AutoModelForCausalLM.from_pretrained("distilgpt2") Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() ``` ## Expected results For this code I'd expect a crash related to not having preprocessed the data, but instead we get a pickling error. ## Actual results ``` 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last): File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 7, in <module> Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train for step, inputs in enumerate(epoch_iterator): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__ return self._get_iterator() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 918, in __init__ w.start() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset' 0%| | 0/1000 [00:00<?, ?it/s] ``` This immediate crash can be fixed by not using a local class to make the `TorchIterableDataset` (Note that you have to do with_format("torch") or you get an exception because the dataset has no len) However, any lambdas etc used as maps will also trigger this crash. A more permanent fix would be to move away from multiprocessing and instead use something like pathos or multiprocessing_on_dill (https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together) Note that if you bypass this crash you get another crash. (I'll file a separate bug). ## Environment info - `datasets` version: 2.0.0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
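The immediate error is a general pickle limitation rather than anything `datasets`-specific: a class defined inside a function body has no importable qualified name, so the worker process cannot reconstruct it. A minimal sketch of the difference (names are made up):

```python
import pickle

class TopLevel:  # importable by qualified name -> picklable
    pass

def make_local():
    class Local:  # defined inside a function -> not picklable
        pass
    return Local()

pickle.dumps(TopLevel())  # works
try:
    pickle.dumps(make_local())
except (pickle.PicklingError, AttributeError) as e:
    print(e)  # Can't pickle local object 'make_local.<locals>.Local'
```

Moving the class to module level, as the workaround in #3951 does, restores picklability.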
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3950/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3949/comments
https://api.github.com/repos/huggingface/datasets/issues/3949/events
https://github.com/huggingface/datasets/pull/3949
1,171,467,981
PR_kwDODunzps40jia-
3,949
Remove GLEU metric
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-16T19:35:31"
"2022-04-12T20:43:26"
"2022-04-12T20:37:09"
CONTRIBUTOR
null
Remove the GLEU metric as it is not actually implemented.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3949/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3949/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3949", "html_url": "https://github.com/huggingface/datasets/pull/3949", "diff_url": "https://github.com/huggingface/datasets/pull/3949.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3949.patch", "merged_at": "2022-04-12T20:37:09" }
true
https://api.github.com/repos/huggingface/datasets/issues/3948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3948/comments
https://api.github.com/repos/huggingface/datasets/issues/3948/events
https://github.com/huggingface/datasets/pull/3948
1,171,460,560
PR_kwDODunzps40jg1F
3,948
Google BLEU Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "A few things that aren't clear for me:\r\n- \"Because it performs better on individual sentence pairs as compared to BLEU, Google BLEU has also been used in RL experiments.\" -- why is this the case? why would that make it more usable for RL? (also, you should put \"Reinforcement Learning\" explicitly, not just the acronym)\r\n- (Minor issue) -- I put inputs before the first example code, I think that's clearer somehow\r\n\r\nOtherwise, it looks great, good job @emibaylor !\r\n" ]
"2022-03-16T19:27:17"
"2022-03-21T16:04:26"
"2022-03-21T16:04:25"
CONTRIBUTOR
null
Add metric card for the Google BLEU (GLEU) metric. One thing I noticed while writing this up is that, while this metric was designed specifically to perform better than BLEU at the sentence level rather than the corpus level, the current implementation only allows calculating the corpus-level statistic. I think changing this would be a good thing to put on the to-do list for the future.
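For comparison, NLTK already exposes both granularities of GLEU, so a sentence-level variant is feasible; a minimal sketch with made-up tokens (references come first in both signatures):

```python
from nltk.translate.gleu_score import corpus_gleu, sentence_gleu

references = [["the", "cat", "sat", "on", "the", "mat"]]
hypothesis = ["the", "cat", "sat", "on", "mat"]

print(sentence_gleu(references, hypothesis))    # score for one sentence pair
print(corpus_gleu([references], [hypothesis]))  # same pair through the corpus-level API
```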
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3948/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3948", "html_url": "https://github.com/huggingface/datasets/pull/3948", "diff_url": "https://github.com/huggingface/datasets/pull/3948.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3948.patch", "merged_at": "2022-03-21T16:04:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/3947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3947/comments
https://api.github.com/repos/huggingface/datasets/issues/3947/events
https://github.com/huggingface/datasets/pull/3947
1,171,452,854
PR_kwDODunzps40jfLq
3,947
BLEU metric card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Some thoughts:\r\n- For values, e.g. \"Defaults to False\", I would put False in code: `False`. Same for : \"Defaults to `4`.\"\r\n- I would put the following remark in \"Limitations\": \r\n> \"BLEU's output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations. For this reason, it is not necessary to attain a score of 1. Because there are more opportunities to match, adding additional reference translations will increase the BLEU score.\"\r\n\r\n- Add some values from the original BLEU paper (https://aclanthology.org/P02-1040.pdf)" ]
"2022-03-16T19:20:07"
"2022-03-29T14:59:50"
"2022-03-29T14:54:14"
CONTRIBUTOR
null
Add BLEU metric card
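For illustration, a minimal usage sketch of the metric this card documents, using the `datasets` API of that era; note the tokenized predictions and the extra nesting level for the references (the tokens are made up):

```python
import datasets

bleu = datasets.load_metric("bleu")
predictions = [["the", "cat", "sat", "on", "the", "mat"]]
references = [[["the", "cat", "is", "on", "the", "mat"]]]  # several references per prediction are allowed
print(bleu.compute(predictions=predictions, references=references)["bleu"])
```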
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3947/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3947", "html_url": "https://github.com/huggingface/datasets/pull/3947", "diff_url": "https://github.com/huggingface/datasets/pull/3947.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3947.patch", "merged_at": "2022-03-29T14:54:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/3946
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3946/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3946/comments
https://api.github.com/repos/huggingface/datasets/issues/3946/events
https://github.com/huggingface/datasets/pull/3946
1,171,239,287
PR_kwDODunzps40i1L3
3,946
Add newline to text dataset builder for controlling universal newlines mode
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3946). All of your documentation changes will be reflected on that endpoint.", "The failing CI test has nothing to do with this PR.", "I'm closing this PR." ]
"2022-03-16T16:11:11"
"2023-09-24T10:10:50"
"2023-09-24T10:10:47"
MEMBER
null
Fix #3804.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3946/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3946/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3946", "html_url": "https://github.com/huggingface/datasets/pull/3946", "diff_url": "https://github.com/huggingface/datasets/pull/3946.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3946.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3945/comments
https://api.github.com/repos/huggingface/datasets/issues/3945/events
https://github.com/huggingface/datasets/pull/3945
1,171,222,257
PR_kwDODunzps40ixmc
3,945
Fix comet metric
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Finally I'm done updating the dependencies ^^'\r\n\r\ncc @sashavor can you review my changes in the metric card please ?", "Looks good to me! Just fixed a tiny typo :wink: ", "Thanks !" ]
"2022-03-16T15:56:47"
"2022-03-22T15:10:12"
"2022-03-22T15:05:30"
MEMBER
null
The COMET metric has been broken for a while since big breaking changes were introduced. We did not catch them in the CI because the slow test mocks the `download_model` function, which was the one that changed. This PR fixes the metric, updates the `download_model` mock, and updates the doctest.
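As a point of reference, a minimal sketch of the metric's documented usage after the fix (the model download happens on first use, and exact scores depend on the COMET model version):

```python
from datasets import load_metric

comet = load_metric("comet")  # fetches the default COMET model on first use

# COMET scores each translation against both its source sentence
# and a reference translation.
sources = ["Dem Feuer konnte Einhalt geboten werden"]
predictions = ["The fire could be stopped"]
references = ["They were able to stop the fire"]

results = comet.compute(sources=sources, predictions=predictions, references=references)
print(results["scores"])  # one quality score per translation
```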
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3945/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3945", "html_url": "https://github.com/huggingface/datasets/pull/3945", "diff_url": "https://github.com/huggingface/datasets/pull/3945.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3945.patch", "merged_at": "2022-03-22T15:05:30" }
true
https://api.github.com/repos/huggingface/datasets/issues/3944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3944/comments
https://api.github.com/repos/huggingface/datasets/issues/3944/events
https://github.com/huggingface/datasets/pull/3944
1,171,209,510
PR_kwDODunzps40iu4n
3,944
Create README.md
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-16T15:46:26"
"2022-03-17T17:50:54"
"2022-03-17T17:47:05"
NONE
null
Proposing COMET metric card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3944/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3944", "html_url": "https://github.com/huggingface/datasets/pull/3944", "diff_url": "https://github.com/huggingface/datasets/pull/3944.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3944.patch", "merged_at": "2022-03-17T17:47:05" }
true
https://api.github.com/repos/huggingface/datasets/issues/3943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3943/comments
https://api.github.com/repos/huggingface/datasets/issues/3943/events
https://github.com/huggingface/datasets/pull/3943
1,171,185,070
PR_kwDODunzps40ipnu
3,943
[Doc] Don't use v for version tags on GitHub
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3943). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-16T15:28:30"
"2022-03-17T11:46:26"
"2022-03-17T11:46:25"
CONTRIBUTOR
null
This removes the `v` automatically used by `doc-builder` for versions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3943/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3943", "html_url": "https://github.com/huggingface/datasets/pull/3943", "diff_url": "https://github.com/huggingface/datasets/pull/3943.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3943.patch", "merged_at": "2022-03-17T11:46:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/3942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3942/comments
https://api.github.com/repos/huggingface/datasets/issues/3942/events
https://github.com/huggingface/datasets/issues/3942
1,171,177,122
I_kwDODunzps5Fzr6i
3,942
reddit_tifu dataset: Checksums didn't match for dataset source files
{ "login": "XingxingZhang", "id": 8507585, "node_id": "MDQ6VXNlcjg1MDc1ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/8507585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XingxingZhang", "html_url": "https://github.com/XingxingZhang", "followers_url": "https://api.github.com/users/XingxingZhang/followers", "following_url": "https://api.github.com/users/XingxingZhang/following{/other_user}", "gists_url": "https://api.github.com/users/XingxingZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/XingxingZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XingxingZhang/subscriptions", "organizations_url": "https://api.github.com/users/XingxingZhang/orgs", "repos_url": "https://api.github.com/users/XingxingZhang/repos", "events_url": "https://api.github.com/users/XingxingZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/XingxingZhang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
[ "Hi @XingxingZhang, \r\n\r\nWe have already fixed this. You should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nDuplicate of:\r\n- #3773", "thanks @albertvillanova . by upgrading to 1.18.4 and using `load_dataset(\"...\", download_mode=\"force_redownload\")` fixed \r\n the bug.\r\n\r\nusing the following as you suggested in another thread can also fixed the bug\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n", "The latter solution (installing from GitHub) was proposed because the fix was not released yet. But last week we made the 1.18.4 patch release (with the fix), so no longer necessary to install from GitHub.\r\n\r\nYou can now install from PyPI, as usual:\r\n```shell\r\npip install -U datasets\r\n```\r\n" ]
"2022-03-16T15:23:30"
"2022-03-16T15:57:43"
"2022-03-16T15:39:25"
NONE
null
## Describe the bug When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files" ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset print(datasets.__version__) # load_dataset('billsum') load_dataset('reddit_tifu', 'short') ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: mac os - Python version: Python 3.7.6 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3942/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3941/comments
https://api.github.com/repos/huggingface/datasets/issues/3941/events
https://github.com/huggingface/datasets/issues/3941
1,171,132,709
I_kwDODunzps5FzhEl
3,941
billsum dataset: Checksums didn't match for dataset source files:
{ "login": "XingxingZhang", "id": 8507585, "node_id": "MDQ6VXNlcjg1MDc1ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/8507585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XingxingZhang", "html_url": "https://github.com/XingxingZhang", "followers_url": "https://api.github.com/users/XingxingZhang/followers", "following_url": "https://api.github.com/users/XingxingZhang/following{/other_user}", "gists_url": "https://api.github.com/users/XingxingZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/XingxingZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XingxingZhang/subscriptions", "organizations_url": "https://api.github.com/users/XingxingZhang/orgs", "repos_url": "https://api.github.com/users/XingxingZhang/repos", "events_url": "https://api.github.com/users/XingxingZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/XingxingZhang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @XingxingZhang, thanks for reporting.\r\n\r\nThis was due to a change in Google Drive service:\r\n- #3786 \r\n\r\nWe have already fixed it:\r\n- #3787\r\n\r\nYou should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```", "thanks @albertvillanova ", "@albertvillanova \r\nYOU Said: pip install git+ https://github.com/huggingface/datasets.git Then set: dataset=load_dataset (\"multinews\", download_mode=\"force-redownload\"). I changed the ’datautilsβ€˜ file according to this setting: traindata=load_dataset (path='wikitext ', name='wikitext-2-raw v1', split='train ', download_mode=\"force-redownload\")\r\nTestdata=load_dataset (path='wikitext ', name='wikitext-2-raw v1', split='test ', download_mode=\"force-redownload\")\r\nthen the bug is\r\n![image](https://github.com/huggingface/datasets/assets/149936473/ee956e8f-e6f1-46bf-b514-0a7a0a0e0e37)\r\n![image](https://github.com/huggingface/datasets/assets/149936473/f1318686-942c-4341-a61d-9be7a4b5747a)\r\nI have tried both versions\r\n![image](https://github.com/huggingface/datasets/assets/149936473/d3f0d786-304b-4596-83c6-49c3cac58aad)\r\n" ]
"2022-03-16T14:52:08"
"2024-03-13T12:11:35"
"2022-03-16T15:46:44"
NONE
null
## Describe the bug When loading the `billsum` dataset, it throws the exception "Checksums didn't match for dataset source files" ``` File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1g89WgFHMRbr4QrvA0ngh26PY081Nv3lx'] ``` ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset print(datasets.__version__) load_dataset('billsum') ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: mac os - Python version: Python 3.7.6 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3941/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3940/comments
https://api.github.com/repos/huggingface/datasets/issues/3940/events
https://github.com/huggingface/datasets/pull/3940
1,171,106,853
PR_kwDODunzps40iYxr
3,940
Create CoVAL metric card
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-16T14:31:49"
"2022-03-18T17:37:59"
"2022-03-18T17:35:14"
NONE
null
Initial CoVAL metric card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3940/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3940", "html_url": "https://github.com/huggingface/datasets/pull/3940", "diff_url": "https://github.com/huggingface/datasets/pull/3940.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3940.patch", "merged_at": "2022-03-18T17:35:14" }
true
https://api.github.com/repos/huggingface/datasets/issues/3939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3939/comments
https://api.github.com/repos/huggingface/datasets/issues/3939/events
https://github.com/huggingface/datasets/issues/3939
1,170,882,331
I_kwDODunzps5Fyj8b
3,939
Source links broken
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting @qqaatw.\r\n\r\n@mishig25 @sgugger do you think this can be tweaked in the new doc framework?\r\n- From: https://github.com/huggingface/datasets/blob/v2.0.0/\r\n- To: https://github.com/huggingface/datasets/blob/2.0.0/", "@qqaatw thanks a lot for notifying about this issue!\r\n\r\nin comparison, transformers tags start with `v` like [this one](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/bert/configuration_bert.py#L54).\r\n\r\nTherefore, we have to do one of 2 options below:\r\n1. Make necessary changes on doc-builder side\r\nOR\r\n2. Make [datasets tags](https://github.com/huggingface/datasets/tags) start with `v`, just like [transformers](https://github.com/huggingface/transformers/tags) (so that tag naming can be consistent amongst hf repos)\r\n\r\nI'll let you decide @albertvillanova @lhoestq @sgugger ", "I think option 2 is the easiest and would provide harmony in the HF ecosystem but we can also add a doc config parameter to decide whether the default version has a v or not if `datasets` folks prefer their tags without a v :-)", "For me it is OK to conform to the rest of libraries and tag/release with a preceding \"v\", rather than adding an extra argument to the doc builder just for `datasets`.\r\n\r\nLet me know if it is also OK for you @lhoestq. ", "https://github.com/huggingface/doc-build/commit/f41c1e8ff900724213af4c75d287d8b61ecf6141\r\n\r\nhotfix so that `datasets` docs source button works correctly on hf.co/docs/datasets", "We could add a tag for each release without a 'v' but it could be confusing on github to see both tags `v2.0.0` and `2.0.0` IMO (not sure if many users check them though). Removing the tags without 'v' would break our versioning for github datasets: the library looks for dataset scripts at the URLs like `https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}` where `revision` is equal to `datasets.__version__` (which doesn't start with a 'v') for all released versions of `datasets`.\r\n\r\nI think we could just have a parameter for the documentation - and having different URLs schemes for the source links that the users don't even see (they simply click on a button) is probably fine", "This is done in #3943 to go along with [doc-builder#146](https://github.com/huggingface/doc-builder/pull/146).\r\n\r\nNote that this will only work for future versions, so once those two are merged, the actual v2.0.0 doc should be fixed. The easiest is to cherry-pick this commit on the v2.0.0 release branch (or on a new branch created from the 2.0.0 tag, with a name that triggers the doc building job, for instance v2.0.0-release)", "Thanks for fixing @sgugger." ]
"2022-03-16T11:17:47"
"2022-03-19T04:41:32"
"2022-03-19T04:41:32"
CONTRIBUTOR
null
## Describe the bug The source links of the v2.0.0 docs are broken: for example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`; here, the `v2.0.0` should be `2.0.0`. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747` ## Actual results Described above. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3939/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3938/comments
https://api.github.com/repos/huggingface/datasets/issues/3938/events
https://github.com/huggingface/datasets/pull/3938
1,170,875,417
PR_kwDODunzps40hnjM
3,938
Avoid info log messages from transformers in FrugalScore metric
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3938). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-16T11:11:29"
"2022-03-17T08:37:25"
"2022-03-17T08:37:24"
MEMBER
null
Fix #3928.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3938/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3938/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3938", "html_url": "https://github.com/huggingface/datasets/pull/3938", "diff_url": "https://github.com/huggingface/datasets/pull/3938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3938.patch", "merged_at": "2022-03-17T08:37:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/3937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3937/comments
https://api.github.com/repos/huggingface/datasets/issues/3937/events
https://github.com/huggingface/datasets/issues/3937
1,170,832,006
I_kwDODunzps5FyXqG
3,937
Missing languages in lvwerra/github-code dataset
{ "login": "Eytan-S", "id": 38702500, "node_id": "MDQ6VXNlcjM4NzAyNTAw", "avatar_url": "https://avatars.githubusercontent.com/u/38702500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Eytan-S", "html_url": "https://github.com/Eytan-S", "followers_url": "https://api.github.com/users/Eytan-S/followers", "following_url": "https://api.github.com/users/Eytan-S/following{/other_user}", "gists_url": "https://api.github.com/users/Eytan-S/gists{/gist_id}", "starred_url": "https://api.github.com/users/Eytan-S/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Eytan-S/subscriptions", "organizations_url": "https://api.github.com/users/Eytan-S/orgs", "repos_url": "https://api.github.com/users/Eytan-S/repos", "events_url": "https://api.github.com/users/Eytan-S/events{/privacy}", "received_events_url": "https://api.github.com/users/Eytan-S/received_events", "type": "User", "site_admin": false }
[ { "id": 2067401494, "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion", "name": "Dataset discussion", "color": "72f99f", "default": false, "description": "Discussions on the datasets" } ]
closed
false
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[ { "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for contacting @Eytan-S.\r\n\r\nI think @lvwerra could better answer this. ", "That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the pipeline anyway and will double check the query.\r\n\r\nThanks for reporting this @Eytan-S!", "Can confirm that the two languages are indeed missing from the dataset. Here are the file counts per language:\r\n```Python\r\n{'Assembly': 82847,\r\n 'Batchfile': 236755,\r\n 'C': 14127969,\r\n 'C#': 6793439,\r\n 'C++': 7368473,\r\n 'CMake': 175076,\r\n 'CSS': 1733625,\r\n 'Dockerfile': 331966,\r\n 'FORTRAN': 141963,\r\n 'GO': 2259363,\r\n 'Haskell': 340521,\r\n 'HTML': 11165464,\r\n 'Java': 19515696,\r\n 'JavaScript': 11829024,\r\n 'Julia': 58177,\r\n 'Lua': 576279,\r\n 'Makefile': 679338,\r\n 'Markdown': 8454049,\r\n 'PHP': 11181930,\r\n 'Perl': 497490,\r\n 'PowerShell': 136827,\r\n 'Python': 7203553,\r\n 'Ruby': 4479767,\r\n 'Rust': 321765,\r\n 'SQL': 655657,\r\n 'Scala': 0,\r\n 'Shell': 1382786,\r\n 'TypeScript': 0,\r\n 'TeX': 250764,\r\n 'Visual Basic': 155371}\r\n ```", "@Eytan-S check out v1.1 of the `github-code` dataset where issue should be fixed:\r\n\r\n| | Language |File Count| Size (GB)|\r\n|---:|:-------------|---------:|-------:|\r\n| 0 | Java | 19548190 | 107.7 |\r\n| 1 | C | 14143113 | 183.83 |\r\n| 2 | JavaScript | 11839883 | 87.82 |\r\n| 3 | HTML | 11178557 | 118.12 |\r\n| 4 | PHP | 11177610 | 61.41 |\r\n| 5 | Markdown | 8464626 | 23.09 |\r\n| 6 | C++ | 7380520 | 87.73 |\r\n| 7 | Python | 7226626 | 52.03 |\r\n| 8 | C# | 6811652 | 36.83 |\r\n| 9 | Ruby | 4473331 | 10.95 |\r\n| 10 | GO | 2265436 | 19.28 |\r\n| 11 | TypeScript | 1940406 | 24.59 |\r\n| 12 | CSS | 1734406 | 22.67 |\r\n| 13 | Shell | 1385648 | 3.01 |\r\n| 14 | Scala | 835755 | 3.87 |\r\n| 15 | Makefile | 679430 | 2.92 |\r\n| 16 | SQL | 656671 | 5.67 |\r\n| 17 | Lua | 578554 | 2.81 |\r\n| 18 | Perl | 497949 | 4.7 |\r\n| 19 | Dockerfile | 366505 | 0.71 |\r\n| 20 | Haskell | 340623 | 1.85 |\r\n| 21 | Rust | 322431 | 2.68 |\r\n| 22 | TeX | 251015 | 2.15 |\r\n| 23 | Batchfile | 236945 | 0.7 |\r\n| 24 | CMake | 175282 | 0.54 |\r\n| 25 | Visual Basic | 155652 | 1.91 |\r\n| 26 | FORTRAN | 142038 | 1.62 |\r\n| 27 | PowerShell | 136846 | 0.69 |\r\n| 28 | Assembly | 82905 | 0.78 |\r\n| 29 | Julia | 58317 | 0.29 |", "Thanks @lvwerra. " ]
"2022-03-16T10:32:03"
"2022-03-22T07:09:23"
"2022-03-21T14:50:47"
NONE
null
Hi, I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset! I've noticed that two languages are missing from the dataset: TypeScript and Scala. Looks like they're also omitted from the query you used to get the original code. Are there any plans to add them in the future? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3937/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3936/comments
https://api.github.com/repos/huggingface/datasets/issues/3936/events
https://github.com/huggingface/datasets/pull/3936
1,170,713,473
PR_kwDODunzps40hE-P
3,936
Fix Wikipedia version and re-add tests
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3936). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-16T08:48:04"
"2022-03-16T17:04:07"
"2022-03-16T17:04:05"
MEMBER
null
To keep backward compatibility when loading with the "wikipedia" dataset ID (https://huggingface.co/datasets/wikipedia), we have created the pre-processed data for the same languages we were offering before, but with the updated date "20220301": - de - en - fr - frr - it - simple This pre-processed data can be accessed, e.g.: ```python ds = load_dataset("wikipedia", "20220301.frr", split="train") ``` The next step will be to offer pre-processed data for many other languages, loaded with "wikimedia/wikipedia" instead: https://huggingface.co/datasets/wikimedia/wikipedia
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3936/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3936", "html_url": "https://github.com/huggingface/datasets/pull/3936", "diff_url": "https://github.com/huggingface/datasets/pull/3936.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3936.patch", "merged_at": "2022-03-16T17:04:05" }
true
https://api.github.com/repos/huggingface/datasets/issues/3934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3934/comments
https://api.github.com/repos/huggingface/datasets/issues/3934/events
https://github.com/huggingface/datasets/pull/3934
1,170,292,492
PR_kwDODunzps40ftiC
3,934
Create MAUVE metric card
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-15T21:36:07"
"2022-03-18T17:38:14"
"2022-03-18T17:34:13"
NONE
null
Proposing a MAUVE metric card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3934/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3934", "html_url": "https://github.com/huggingface/datasets/pull/3934", "diff_url": "https://github.com/huggingface/datasets/pull/3934.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3934.patch", "merged_at": "2022-03-18T17:34:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/3933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3933/comments
https://api.github.com/repos/huggingface/datasets/issues/3933/events
https://github.com/huggingface/datasets/pull/3933
1,170,253,605
PR_kwDODunzps40flNM
3,933
Update README.md
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-15T20:52:05"
"2022-03-17T17:51:24"
"2022-03-17T17:47:37"
NONE
null
Fixing missing triple quote
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3933/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3933", "html_url": "https://github.com/huggingface/datasets/pull/3933", "diff_url": "https://github.com/huggingface/datasets/pull/3933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3933.patch", "merged_at": "2022-03-17T17:47:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/3932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3932/comments
https://api.github.com/repos/huggingface/datasets/issues/3932/events
https://github.com/huggingface/datasets/pull/3932
1,170,221,773
PR_kwDODunzps40fd0T
3,932
Create SARI metric card
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-15T20:37:23"
"2022-03-18T17:37:01"
"2022-03-18T17:32:55"
NONE
null
SARI metric card! (do we have an expert in text simplification to validate?.. :sweat_smile: )
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3932/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3932", "html_url": "https://github.com/huggingface/datasets/pull/3932", "diff_url": "https://github.com/huggingface/datasets/pull/3932.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3932.patch", "merged_at": "2022-03-18T17:32:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/3931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3931/comments
https://api.github.com/repos/huggingface/datasets/issues/3931/events
https://github.com/huggingface/datasets/pull/3931
1,170,097,208
PR_kwDODunzps40fBjx
3,931
Add align_labels_with_mapping docs
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-15T19:24:57"
"2022-03-18T16:28:31"
"2022-03-18T16:24:33"
MEMBER
null
This PR documents the `align_labels_with_mapping` function, which ensures predicted labels are aligned with the dataset, or assigns a different mapping of labels to IDs (requested by @mariosasko πŸŽ‰ ). For this specific code sample, the current dataset has a `mixed` label that the original [dataset](https://huggingface.co/datasets/poem_sentiment#data-fields) didn't. Is there a way to remove this label so it is completely aligned with the original dataset's mappings? Otherwise, I'll just leave it as it is.
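As an illustrative sketch of the documented behavior (the `label2id` mapping below is hypothetical; `poem_sentiment` is the dataset the code sample above refers to):

```python
from datasets import load_dataset

ds = load_dataset("poem_sentiment", split="train")

# Hypothetical target mapping: realign the dataset's integer label ids
# so they match the ids an existing model configuration expects.
label2id = {"negative": 3, "positive": 2, "no_impact": 1, "mixed": 0}

ds = ds.align_labels_with_mapping(label2id, "label")
print(ds.features["label"])  # the ClassLabel now reflects the new id order
```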
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3931/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3931", "html_url": "https://github.com/huggingface/datasets/pull/3931", "diff_url": "https://github.com/huggingface/datasets/pull/3931.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3931.patch", "merged_at": "2022-03-18T16:24:33" }
true
https://api.github.com/repos/huggingface/datasets/issues/3930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3930/comments
https://api.github.com/repos/huggingface/datasets/issues/3930/events
https://github.com/huggingface/datasets/pull/3930
1,170,087,793
PR_kwDODunzps40e_fb
3,930
Create README.md
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-15T19:16:59"
"2022-04-04T15:23:15"
"2022-04-04T15:17:28"
NONE
null
Creating a README for IndicGLUE. cc @mcmillanmajora for fact-checking the language coverage (also, are there any limitations of the dataset or eval metric that I'm not aware of?)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3930/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3930", "html_url": "https://github.com/huggingface/datasets/pull/3930", "diff_url": "https://github.com/huggingface/datasets/pull/3930.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3930.patch", "merged_at": "2022-04-04T15:17:28" }
true
https://api.github.com/repos/huggingface/datasets/issues/3929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3929/comments
https://api.github.com/repos/huggingface/datasets/issues/3929/events
https://github.com/huggingface/datasets/issues/3929
1,170,066,235
I_kwDODunzps5Fvcs7
3,929
Load a local dataset twice
{ "login": "caush", "id": 28349961, "node_id": "MDQ6VXNlcjI4MzQ5OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/28349961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/caush", "html_url": "https://github.com/caush", "followers_url": "https://api.github.com/users/caush/followers", "following_url": "https://api.github.com/users/caush/following{/other_user}", "gists_url": "https://api.github.com/users/caush/gists{/gist_id}", "starred_url": "https://api.github.com/users/caush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/caush/subscriptions", "organizations_url": "https://api.github.com/users/caush/orgs", "repos_url": "https://api.github.com/users/caush/repos", "events_url": "https://api.github.com/users/caush/events{/privacy}", "received_events_url": "https://api.github.com/users/caush/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @caush, thanks for reporting:\r\n\r\nIn order to load local CSV files, you can use our \"csv\" loading script: https://huggingface.co/docs/datasets/loading#csv\r\n```python\r\ndataset = load_dataset(\"csv\", data_files=[\"data/file1.csv\", \"data/file2.csv\"])\r\n```\r\nOR:\r\n```python\r\ndataset = load_dataset(\"csv\", data_dir=\"data\")\r\n```\r\n\r\nAlternatively, you may also use:\r\n```python\r\ndataset = load_dataset(\"data\")\r\n```" ]
"2022-03-15T18:59:26"
"2022-03-16T09:55:09"
"2022-03-16T09:54:06"
NONE
null
## Describe the bug Loading a local "dataset" composed of two CSV files loads the data twice. ## Steps to reproduce the bug Put the two attached files in a directory named "Data". Then, in Python: import datasets as ds ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'}) ## Expected results Should give something like this (each file has only one data row): Title, clicks Truc et astuce, 123 Machin, 12 ## Actual results Gives: Title, clicks Truc et astuce, 123 Machin, 12 Truc et astuce, 123 Machin, 12 ## Environment info [file1.csv](https://github.com/huggingface/datasets/files/8256322/file1.csv) [file2.csv](https://github.com/huggingface/datasets/files/8256323/file2.csv) - `datasets` version: 2.0.0 - Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10 - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3929/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3928/comments
https://api.github.com/repos/huggingface/datasets/issues/3928/events
https://github.com/huggingface/datasets/issues/3928
1,170,017,132
I_kwDODunzps5FvQts
3,928
Frugal score deprecations
{ "login": "ierezell", "id": 30974685, "node_id": "MDQ6VXNlcjMwOTc0Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ierezell", "html_url": "https://github.com/ierezell", "followers_url": "https://api.github.com/users/ierezell/followers", "following_url": "https://api.github.com/users/ierezell/following{/other_user}", "gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}", "starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ierezell/subscriptions", "organizations_url": "https://api.github.com/users/ierezell/orgs", "repos_url": "https://api.github.com/users/ierezell/repos", "events_url": "https://api.github.com/users/ierezell/events{/privacy}", "received_events_url": "https://api.github.com/users/ierezell/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @Ierezell, thanks for reporting.\r\n\r\nI'm making a PR to suppress those logs from the terminal. " ]
"2022-03-15T18:10:42"
"2022-03-17T08:37:24"
"2022-03-17T08:37:24"
NONE
null
## Describe the bug The frugal score returns a really verbose output with warnings that can be easily changed. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets.load import load_metric frugal = load_metric("frugalscore") frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"]) ``` ## Expected results A clear and concise description of the expected results. ``` {'scores': [0.9946]} ``` ## Actual results Specify the actual results or traceback. ``` PyTorch: setting up devices The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-). 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 864.09ba/s] Using amp half precision backend The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1. If sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. ***** Running Prediction ***** Num examples = 1 Batch size = 64 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 4644.85it/s] {'scores': [0.9946]} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0
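A workaround sketch for the verbosity (assuming the standard `transformers`/`datasets` logging helpers; untested against this exact metric):

```python
# Silence the Trainer banners and library info logs the metric triggers.
from datasets import load_metric
from datasets.utils import logging as ds_logging
from transformers.utils import logging as hf_logging

hf_logging.set_verbosity_error()
ds_logging.set_verbosity_error()

frugal = load_metric("frugalscore")
print(frugal.compute(predictions=["Do you like spinachis"],
                     references=["Do you like spinach"]))
```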
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3928/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3927/comments
https://api.github.com/repos/huggingface/datasets/issues/3927/events
https://github.com/huggingface/datasets/pull/3927
1,170,016,465
PR_kwDODunzps40ewN2
3,927
Update main readme
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "What do you think @albertvillanova ?" ]
"2022-03-15T18:09:59"
"2022-03-29T10:13:47"
"2022-03-29T10:08:20"
MEMBER
null
The main readme was still focused on text datasets - I extended it by mentioning that we also support image and audio datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3927/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3927", "html_url": "https://github.com/huggingface/datasets/pull/3927", "diff_url": "https://github.com/huggingface/datasets/pull/3927.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3927.patch", "merged_at": "2022-03-29T10:08:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/3926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3926/comments
https://api.github.com/repos/huggingface/datasets/issues/3926/events
https://github.com/huggingface/datasets/pull/3926
1,169,945,052
PR_kwDODunzps40ehVP
3,926
Doc maintenance
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3926). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-15T17:00:46"
"2022-03-15T19:27:15"
"2022-03-15T19:27:12"
MEMBER
null
This PR adds some minor maintenance to the docs. The main fix is properly linking to pages in the callouts because some of the links would just redirect to a non-existent section on the same page.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3926/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3926", "html_url": "https://github.com/huggingface/datasets/pull/3926", "diff_url": "https://github.com/huggingface/datasets/pull/3926.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3926.patch", "merged_at": "2022-03-15T19:27:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/3925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3925/comments
https://api.github.com/repos/huggingface/datasets/issues/3925/events
https://github.com/huggingface/datasets/pull/3925
1,169,913,769
PR_kwDODunzps40eaq8
3,925
Fix main_classes docs index
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hmm it's still not good \r\n![image](https://user-images.githubusercontent.com/42851186/158429361-e19ce25b-c259-4ded-8473-075deafdbb96.png)\r\n\r\nany idea what could cause this ?", "Ok fixed :)" ]
"2022-03-15T16:33:46"
"2022-03-22T13:49:11"
"2022-03-22T13:44:04"
MEMBER
null
Currently the `main_classes` documentation has a wrong index. I believe this comes from issues in the examples of the Translation feature types ![image](https://user-images.githubusercontent.com/42851186/158426345-2ee1ceef-ddf3-4a6f-a93e-d1a8f38a44f5.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3925/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3925", "html_url": "https://github.com/huggingface/datasets/pull/3925", "diff_url": "https://github.com/huggingface/datasets/pull/3925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3925.patch", "merged_at": "2022-03-22T13:44:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/3924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3924/comments
https://api.github.com/repos/huggingface/datasets/issues/3924/events
https://github.com/huggingface/datasets/pull/3924
1,169,805,813
PR_kwDODunzps40eED5
3,924
Document cases for github datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3924). All of your documentation changes will be reflected on that endpoint.", "Yay!" ]
"2022-03-15T15:10:10"
"2022-04-05T18:33:15"
"2022-03-15T15:41:23"
MEMBER
null
In general we recommend adding the new dataset under a username or organization in the Hugging Face Hub at [hf.co/datasets](https://hf.co/datasets), but users can still add a dataset on github in some cases. I added a paragraph in the documentation to explain in which cases it can make more sense to open a PR on github: - when you need the dataset to be reviewed - when you need long-term maintenance from the HF team - when there’s no clear org name / namespace that you can put the dataset under
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3924/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3924/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3924", "html_url": "https://github.com/huggingface/datasets/pull/3924", "diff_url": "https://github.com/huggingface/datasets/pull/3924.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3924.patch", "merged_at": "2022-03-15T15:41:23" }
true
https://api.github.com/repos/huggingface/datasets/issues/3923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3923/comments
https://api.github.com/repos/huggingface/datasets/issues/3923/events
https://github.com/huggingface/datasets/pull/3923
1,169,773,869
PR_kwDODunzps40d9YU
3,923
Add methods to IterableDatasetDict
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3923). All of your documentation changes will be reflected on that endpoint.", "Is this feature stale or needs any help to it ? If so I can quickly send a PR. Thanks\r\n\r\nCC : @lhoestq, @albertvillanova ", "These features have been merged and are already available, thanks :)", "Hello @lhoestq, I see that `IterableDataset` doesn't allow features like `take`, `len`, `slice` which can enable a lot of stuffs. Is it worth an addition ? Or is it intended that they didn't have those features ?", "IterableDataset objects don't have `len` or `slice` because they can be possibly unbounded (you don't know in advance how many items they contain). Though IterableDataset.take and IterableDataset.skip do exist." ]
"2022-03-15T14:46:03"
"2022-07-06T15:40:20"
"2022-03-15T16:45:06"
MEMBER
null
Following the new methods added in #3826 and https://github.com/huggingface/datasets/pull/3862 I added several methods to IterableDatasetDict: - map - filter - shuffle - with_format - cast - cast_column - remove_columns - rename_column - rename_columns
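A small usage sketch of the new `IterableDatasetDict.map` (assuming a streamable Hub dataset such as ag_news; the dataset choice is illustrative):

```python
# Streaming mode returns an IterableDatasetDict, so .map applies lazily
# to every split without downloading the full dataset.
from datasets import load_dataset

streams = load_dataset("ag_news", streaming=True)
lowered = streams.map(lambda ex: {"text": ex["text"].lower()})
print(next(iter(lowered["train"])))
```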
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3923/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3923", "html_url": "https://github.com/huggingface/datasets/pull/3923", "diff_url": "https://github.com/huggingface/datasets/pull/3923.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3923.patch", "merged_at": "2022-03-15T16:45:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/3922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3922/comments
https://api.github.com/repos/huggingface/datasets/issues/3922/events
https://github.com/huggingface/datasets/pull/3922
1,169,761,293
PR_kwDODunzps40d6vm
3,922
Fix NonMatchingChecksumError in MultiWOZ 2.2 dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3922). All of your documentation changes will be reflected on that endpoint.", "Unrelated CI test failure. This PR can be merged." ]
"2022-03-15T14:36:28"
"2022-03-15T16:07:04"
"2022-03-15T16:07:03"
MEMBER
null
Fix #2957
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3922/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3922", "html_url": "https://github.com/huggingface/datasets/pull/3922", "diff_url": "https://github.com/huggingface/datasets/pull/3922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3922.patch", "merged_at": "2022-03-15T16:07:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/3921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3921/comments
https://api.github.com/repos/huggingface/datasets/issues/3921/events
https://github.com/huggingface/datasets/pull/3921
1,169,749,338
PR_kwDODunzps40d4Mk
3,921
Fix NonMatchingChecksumError in CRD3 dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3921). All of your documentation changes will be reflected on that endpoint.", "Unrelated test failure. This PR can be merged." ]
"2022-03-15T14:27:14"
"2022-03-15T15:54:27"
"2022-03-15T15:54:26"
MEMBER
null
Fix #3051
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3921/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3921", "html_url": "https://github.com/huggingface/datasets/pull/3921", "diff_url": "https://github.com/huggingface/datasets/pull/3921.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3921.patch", "merged_at": "2022-03-15T15:54:26" }
true
https://api.github.com/repos/huggingface/datasets/issues/3920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3920/comments
https://api.github.com/repos/huggingface/datasets/issues/3920/events
https://github.com/huggingface/datasets/issues/3920
1,169,532,807
I_kwDODunzps5FtaeH
3,920
'datasets.features' is not a package
{ "login": "Arij-Aladel", "id": 68355048, "node_id": "MDQ6VXNlcjY4MzU1MDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arij-Aladel", "html_url": "https://github.com/Arij-Aladel", "followers_url": "https://api.github.com/users/Arij-Aladel/followers", "following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}", "gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions", "organizations_url": "https://api.github.com/users/Arij-Aladel/orgs", "repos_url": "https://api.github.com/users/Arij-Aladel/repos", "events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}", "received_events_url": "https://api.github.com/users/Arij-Aladel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @Arij-Aladel,\r\n\r\nYou are using a very old version of our library `datasets`: 1.8.0\r\nCurrent version is 2.0.0 (and the previous one was 1.18.4)\r\n\r\nPlease, try to update `datasets` library and check if the problem persists:\r\n```shell\r\n/env/bin/pip install -U datasets", "The problem I can no I have build my project on this version and old version on transformers. I have preprocessed the data again to use it. Thank for your reply" ]
"2022-03-15T11:14:23"
"2022-03-16T09:17:12"
"2022-03-16T09:17:12"
NONE
null
@albertvillanova python 3.9 os: ubuntu 20.04 In a conda environment, torch was installed with ```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html``` and the datasets package was installed with ``` /env/bin/pip install datasets==1.8.0 ``` While running the code I get this error ``` [6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class [6]<stderr>: return super().find_class(mod_name, name) [6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package ``` Precisely, this error appears when calling torch.load('data_file.pt') ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load result = unpickler.load() File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class return super().find_class(mod_name, name) ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package ``` Why am I getting this error?
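For context, a version-robust persistence sketch (an assumption about the workflow, not a confirmed fix): objects pickled via torch.save break when the library's internal module layout changes between versions, whereas Arrow files written with `save_to_disk` do not depend on pickled class paths:

```python
# Hypothetical alternative to torch.save/torch.load for datasets objects.
from datasets import load_dataset, load_from_disk

ds = load_dataset("imdb", split="train")  # any Dataset stands in here
ds.save_to_disk("data_dir")               # instead of torch.save(ds, "data_file.pt")
ds_reloaded = load_from_disk("data_dir")  # instead of torch.load("data_file.pt")
```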
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3920/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3919/comments
https://api.github.com/repos/huggingface/datasets/issues/3919/events
https://github.com/huggingface/datasets/issues/3919
1,169,497,210
I_kwDODunzps5FtRx6
3,919
AttributeError: 'DatasetDict' object has no attribute 'features'
{ "login": "jswapnil10", "id": 48145785, "node_id": "MDQ6VXNlcjQ4MTQ1Nzg1", "avatar_url": "https://avatars.githubusercontent.com/u/48145785?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jswapnil10", "html_url": "https://github.com/jswapnil10", "followers_url": "https://api.github.com/users/jswapnil10/followers", "following_url": "https://api.github.com/users/jswapnil10/following{/other_user}", "gists_url": "https://api.github.com/users/jswapnil10/gists{/gist_id}", "starred_url": "https://api.github.com/users/jswapnil10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jswapnil10/subscriptions", "organizations_url": "https://api.github.com/users/jswapnil10/orgs", "repos_url": "https://api.github.com/users/jswapnil10/repos", "events_url": "https://api.github.com/users/jswapnil10/events{/privacy}", "received_events_url": "https://api.github.com/users/jswapnil10/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "You are likely trying to get the `features` from a `DatasetDict`, a dictionary containing `Datasets`. You probably first want to index into a particular split from your `DatasetDict` i.e. `dataset['train'].features`. \r\n\r\nFor example \r\n\r\n```python \r\nds = load_dataset('mnist')\r\nds.features\r\n```\r\nReturns \r\n```python\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n[<ipython-input-39-791c1f9df6c2>](https://localhost:8080/#) in <module>()\r\n----> 1 ds.features\r\n\r\nAttributeError: 'DatasetDict' object has no attribute 'features'\r\n```\r\n\r\nIf we look at the dataset variable, we see it is a `DatasetDict`:\r\n\r\n```python \r\nprint(ds)\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 60000\r\n })\r\n test: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 10000\r\n })\r\n})\r\n```\r\n\r\nWe can grab the features from a split by indexing into `train`:\r\n```python\r\nds['train'].features\r\n{'image': Image(decode=True, id=None),\r\n 'label': ClassLabel(num_classes=10, names=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], id=None)}\r\n```\r\n\r\nHope that helps ", "Yes, Thanks for that clarification," ]
"2022-03-15T10:46:59"
"2022-03-17T04:16:14"
"2022-03-17T04:16:14"
NONE
null
## Describe the bug Receiving an AttributeError when trying to check the Dataset features ## Steps to reproduce the bug from datasets import Dataset dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']]) dataset.features ## Expected results The features of the dataset should be listed. ## Actual results Getting the following error AttributeError: 'DatasetDict' object has no attribute 'features' ## Environment info - `datasets` version: 1.18.4 - Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9 - Python version: 3.6.13 - PyArrow version: 6.0.1
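A minimal sketch (the frame below is hypothetical) confirming that `Dataset.from_pandas` returns a `Dataset` whose `.features` works, so the traceback above suggests a `DatasetDict` was inspected instead:

```python
import pandas as pd
from datasets import Dataset

# hypothetical frame standing in for the user's df
df = pd.DataFrame({"id": [0], "words": [["hello"]]})
ds = Dataset.from_pandas(df)
print(type(ds))     # <class 'datasets.arrow_dataset.Dataset'>
print(ds.features)  # works on a Dataset; a DatasetDict has no .features
```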
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3919/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3919/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3918/comments
https://api.github.com/repos/huggingface/datasets/issues/3918/events
https://github.com/huggingface/datasets/issues/3918
1,169,366,117
I_kwDODunzps5Fsxxl
3,918
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
{ "login": "willowdong", "id": 51409295, "node_id": "MDQ6VXNlcjUxNDA5Mjk1", "avatar_url": "https://avatars.githubusercontent.com/u/51409295?v=4", "gravatar_id": "", "url": "https://api.github.com/users/willowdong", "html_url": "https://github.com/willowdong", "followers_url": "https://api.github.com/users/willowdong/followers", "following_url": "https://api.github.com/users/willowdong/following{/other_user}", "gists_url": "https://api.github.com/users/willowdong/gists{/gist_id}", "starred_url": "https://api.github.com/users/willowdong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/willowdong/subscriptions", "organizations_url": "https://api.github.com/users/willowdong/orgs", "repos_url": "https://api.github.com/users/willowdong/repos", "events_url": "https://api.github.com/users/willowdong/events{/privacy}", "received_events_url": "https://api.github.com/users/willowdong/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
[ "Hi @willowdong! These issues were fixed on master. We will have a new release of `datasets` later today. In the meantime, you can avoid these issues by installing `datasets` from master as follows:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets.git\r\n```", "You should force redownload:\r\n```python\r\ndataset = load_dataset(\"multi_news\", download_mode=\"force_redownload\")\r\ndataset_2 = load_dataset(\"reddit_tifu\", \"long\", download_mode=\"force_redownload\")", "Fixed by:\r\n- #3787 \r\n- #3843" ]
"2022-03-15T08:53:45"
"2022-03-16T15:36:58"
"2022-03-15T14:01:25"
NONE
null
## Describe the bug Can't load the dataset ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset dataset = load_dataset('multi_news') dataset_2 = load_dataset("reddit_tifu", "long") ``` ## Actual results raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF'] ## Environment info - `datasets` version: 1.18.4 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.0 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3918/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3918/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3917/comments
https://api.github.com/repos/huggingface/datasets/issues/3917/events
https://github.com/huggingface/datasets/pull/3917
1,168,906,154
PR_kwDODunzps40bGZA
3,917
Create README.md
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3917). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-14T21:08:10"
"2022-03-17T17:45:39"
"2022-03-17T17:45:39"
NONE
null
This follows the same structure as the GLUE metric card, hope that works for everyone :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3917/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3917", "html_url": "https://github.com/huggingface/datasets/pull/3917", "diff_url": "https://github.com/huggingface/datasets/pull/3917.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3917.patch", "merged_at": "2022-03-17T17:45:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/3916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3916/comments
https://api.github.com/repos/huggingface/datasets/issues/3916/events
https://github.com/huggingface/datasets/pull/3916
1,168,869,191
PR_kwDODunzps40a-cR
3,916
Create README.md for GLUE
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3916). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-14T20:27:22"
"2022-03-15T17:06:57"
"2022-03-15T17:06:56"
NONE
null
I'm still hesitant about the format of inputs -- whether it's a list or a list of lists? -- hopefully @lhoestq will be able to clarify. Also tagging @yjernite for the Limitations section. Happy to hear your thoughts!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3916/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3916", "html_url": "https://github.com/huggingface/datasets/pull/3916", "diff_url": "https://github.com/huggingface/datasets/pull/3916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3916.patch", "merged_at": "2022-03-15T17:06:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/3915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3915/comments
https://api.github.com/repos/huggingface/datasets/issues/3915/events
https://github.com/huggingface/datasets/pull/3915
1,168,848,101
PR_kwDODunzps40a54e
3,915
Metric card template
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Looks like a great start! I have a general comment and a few specific comments.\r\n\r\nMy general comment is I wonder if we need a post for this template and the data and model card templates (or a combined one?) explaining why this documentation is needed and how it serves both the writer and the audience.\r\n\r\nSpecific comments:\r\n- Maybe we can add some more desiderata to the overview instructions like: what task was the metric originally developed for, what tasks is it used for now, what is the range of possible outputs?\r\n- In the data card, we call the data instances inputs `fields`. It might be good to synchronize on that across the templates and change `input_name` to `input_field`? Also are the instructions for the `input_name` complete? It ends with 'In the *' and I'm not sure what that refers to.\r\n- 'Values' seems ambiguous to me, maybe 'scores' would be more explicit? Also could add a request for the range of possible outputs.\r\n- We could add a reference in the examples section to the overview section if that's where further explanation should go. Suggestion to add: 'Provide a range of examples that show both typical and atypical results' or something similar.\r\n- I'm not sure if we'd want to add this to the example section or make a new section, but it would be good to prompt somewhere for links to specific use cases in HF\r\n- In the limitations and bias section, add 'with links'\r\n", "Looks like a great start! I have a general comment and a few specific comments.\r\n\r\nMy general comment is I wonder if we need a post for this template and the data and model card templates (or a combined one?) explaining why this documentation is needed and how it serves both the writer and the audience.\r\n\r\nSpecific comments:\r\n- Maybe we can add some more desiderata to the overview instructions like: what task was the metric originally developed for, what tasks is it used for now, what is the range of possible outputs?\r\n- In the data card, we call the data instances `fields`. It might be good to synchronize on that across the templates and change `input_name` to `input_field`? Also are the instructions for the `input_name` complete? It ends with 'In the *' and I'm not sure what that refers to.\r\n- 'Values' seems ambiguous to me, maybe 'scores' would be more explicit? Also could add a request for the range of possible outputs.\r\n- We could add a reference to the examples section to the overview section if that's where further explanation should go. Suggestion to add: 'Provide a range of examples that show both typical and atypical results' or something similar.\r\n- I'm not sure if we'd want to add this to the example section or make a new section, but it would be good to prompt somewhere for links to specific use cases in HF\r\n- In the limitations and bias section, add 'with links'\r\n", "Thanks for your feedback, @mcmillanmajora ! I totally agree that we should write a post -- we were going to write one up when we are done with a good chunk of the metric cards, but we can also do that earlier :smile: \r\n\r\nWith regards to your more specific comments:\r\n\r\n- It is our intention to put what the metric was developed for (whether it is a specific task or dataset, for example). 
You can see the [WER](https://github.com/huggingface/datasets/tree/master/metrics/wer) metric card for that.\r\n- `input_field` works for me!\r\n- the values aren't always scores, it's more like the values the metric can take. And it does include the range of possible values, including the max and min, that are outputted.\r\n- I like the suggestion to add: 'Provide a range of examples that show both typical and atypical results' :hugs: \r\n- I have been putting specific use cases in 'Further references', just because there isn't always something to put there, especially for less popular metrics", "Oh cool! I was just looking at the template, it definitely helps seeing an example metric card. Based on just the instructions, I had assumed that examples meant research papers where the metric was used to evaluate a model, but I like the explicit coding examples! ", "Oh cool! I was just looking at the template, it definitely helps seeing an example metric card. Based on just the instructions, I had assumed that examples meant research papers where the metric was used to evaluate a model, but I like the explicit coding examples! " ]
"2022-03-14T20:07:08"
"2022-05-04T10:44:09"
"2022-05-04T10:37:06"
CONTRIBUTOR
null
Adding a metric card template, based on ideas and edits from @sashavor and me, as well as on comments from @lhoestq and others (thank you!). All feedback is welcome, but I am especially curious about feedback in terms of: - things that should be included but aren't - things that are included but should be changed or removed - the instructions I included, and whether they should be added to, clarified, or deleted altogether
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3915/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3915", "html_url": "https://github.com/huggingface/datasets/pull/3915", "diff_url": "https://github.com/huggingface/datasets/pull/3915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3915.patch", "merged_at": "2022-05-04T10:37:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/3914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3914/comments
https://api.github.com/repos/huggingface/datasets/issues/3914/events
https://github.com/huggingface/datasets/pull/3914
1,168,777,880
PR_kwDODunzps40aq2r
3,914
Use templates for doc-building jobs
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3914). All of your documentation changes will be reflected on that endpoint.", "You can ignore the CI failures btw, they're unrelated to this PR" ]
"2022-03-14T18:53:06"
"2022-03-17T15:02:59"
"2022-03-17T15:02:58"
CONTRIBUTOR
null
This PR updates the jobs for all doc-building related things by using the templates introduced in `doc-builder`. By putting those there once, we make sure every repo gets the latest fixes on the doc-building github actions :-) Note: all libraries must share the same docker image for those doc-building jobs. For now, the one used (`huggingface/transformers-doc-builder`) contains all the extra steps of the datasets install for doc-building (mainly libsndfile), but if in the future some additional steps are necessary on top of `pip install -e .[dev]`, this docker image will need to be updated with the extra deps.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3914/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3914", "html_url": "https://github.com/huggingface/datasets/pull/3914", "diff_url": "https://github.com/huggingface/datasets/pull/3914.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3914.patch", "merged_at": "2022-03-17T15:02:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/3913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3913/comments
https://api.github.com/repos/huggingface/datasets/issues/3913/events
https://github.com/huggingface/datasets/pull/3913
1,168,723,950
PR_kwDODunzps40afYJ
3,913
Deterministic split order in DatasetDict.map
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3913). All of your documentation changes will be reflected on that endpoint.", "I'm surprised this is needed because the order of the `dict` keys is deterministic as of Python 3.6 (documented in 3.7). Is there a reproducer for this behavior? I wouldn't make this change unless it's absolutely needed because `sorted` modifies the initial order of the keys.", "Indeed this doesn't fix the issue apparently. Actually this is probably because the tokenizer used to process the second split is in a state that has been modified by the first split.\r\n\r\nTherefore after reloading the first split from the cache, then the second split can't be reloaded since the tokenizer hasn't seen the first split (and therefore is considered a different tokenizer)." ]
"2022-03-14T17:58:37"
"2023-09-24T09:55:10"
"2022-03-15T10:45:15"
MEMBER
null
The order in which the splits are processed by `map` is not deterministic in `DatasetDict.map`. This can cause caching issues when the processing function is stateful and sensitive to the order in which examples are processed. Close https://github.com/huggingface/datasets/issues/3847
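A minimal sketch of the idea behind the fix (sorting the split keys is this PR's proposal; the later discussion notes the real culprit may be tokenizer state):

```python
# Processing splits in sorted key order so a stateful map function
# sees them in the same order on every run.
from datasets import Dataset, DatasetDict

ddict = DatasetDict({
    "validation": Dataset.from_dict({"x": [1, 2]}),
    "train": Dataset.from_dict({"x": [3, 4]}),
})

processed = DatasetDict({
    name: ddict[name].map(lambda example: {"x": example["x"] + 1})
    for name in sorted(ddict)
})
print(processed)
```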
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3913/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3913", "html_url": "https://github.com/huggingface/datasets/pull/3913", "diff_url": "https://github.com/huggingface/datasets/pull/3913.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3913.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3912/comments
https://api.github.com/repos/huggingface/datasets/issues/3912/events
https://github.com/huggingface/datasets/pull/3912
1,168,720,098
PR_kwDODunzps40aekr
3,912
add draft of registering function for pandas
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3912). All of your documentation changes will be reflected on that endpoint.", "That's cool ! Though I would expect such an integration to only require `huggingface_hub`, not the full `datasets` library. \r\n Indeed if users want to use the `datasets` lib they could just to `Dataset.from_pandas(df).push_to_hub()` already. Therefore I would explore something that doesn't not necessarily requires `datasets`.\r\n\r\nFor other could storage solutions (S3, GCS, etc.), pandas allows users to pass URIs like `s3://bucket-name/path/data.csv` to the `read_xxx` and `to_xxx` (for csv, parquet, json, etc). It also support passing the **root directory** like `s3://bucket-name/dataset-dir` instead of a single file name.\r\n\r\nIn the Hugging Face Hub case, we have one dataset = one repository. We can enter pandas' paradigm by saying one dataset = one repository = one root directory. Here is what we could have:\r\n\r\n### push to Hub:\r\n```python\r\n\"\"\"\r\nDemo script for writing a pandas data frame to a CSV file on HF using fsspec-supported pandas APIs\r\n\"\"\"\r\nimport pandas as pd\r\n\r\nHF_USER = os.getenv(\"HF_USER\")\r\nHF_TOKEN = os.getenv(\"HF_TOKEN\")\r\n\r\nbooks_df = pd.DataFrame(\r\n data={\"Title\": [\"Book I\", \"Book II\", \"Book III\"], \"Price\": [56.6, 59.87, 74.54]},\r\n columns=[\"Title\", \"Price\"],\r\n)\r\n\r\ndataset_name = \"books1\"\r\n\r\nbooks_df.to_csv(\r\n f\"hf://{HF_USER}/{dataset_name}\",\r\n index=False,\r\n storage_options={\r\n \"repo_type\": \"dataset\",\r\n \"token\": HF_TOKEN,\r\n },\r\n)\r\n\r\n```\r\n\r\n### load from Hub:\r\n```python\r\n\"\"\"\r\nDemo script for reading a CSV file from HF into a pandas data frame using fsspec-supported pandas\r\nAPIs\r\n\"\"\"\r\nimport pandas as pd\r\n\r\nHF_USER = os.getenv(\"HF_USER\")\r\nHF_TOKEN = os.getenv(\"HF_TOKEN\")\r\n\r\ndataset_name = \"books1\"\r\n\r\nbooks_df = pd.read_csv(\r\n f\"hf://{HF_USER}/{dataset_name}\",\r\n storage_options={\r\n \"repo_type\": \"dataset\",\r\n \"token\": HF_TOKEN,\r\n },\r\n)\r\n\r\nprint(books_df)\r\n```\r\n\r\nAnd you could do the same with Parquet data using `read/to_parquet` or other formats. Formats like CSV, Parquet or JSON Lines would work out of the box with `datasets`. This API would also allow anyone to use Dask with the Hugging Face Hub for example.\r\n\r\nWhat do you think ?", "I'm closing this PR as [`hffs`](https://github.com/huggingface/hffs) can now be used for reading/writing data frames from/to the Hub." ]
"2022-03-14T17:54:29"
"2023-09-24T09:55:01"
"2023-01-24T12:57:10"
MEMBER
null
This PR adds a register function for `pandas`. It allows users to push `DataFrame` objects directly to the hub and, conversely, to load datasets from the hub into a `DataFrame`. The motivation for this integration is to enable the vast number of `pandas` users to be able to easily push `DataFrames` to the hub. Here is an example: ```python import pandas as pd from datasets import register_pandas register_pandas() # push to hub df = pd.DataFrame.from_dict({"test": [1,2,3]}) df.push_to_hub("my_test") # load from hub df_retrieved = pd.DataFrame.load_from_hub("lvwerra/my_test") ``` It follows a similar philosophy as the `tqdm` [integration](https://github.com/tqdm/tqdm#pandas-integration). Also see [this issue](https://github.com/pandas-dev/pandas/issues/46000) on the `pandas` repository. This is just a rough draft of what such an integration could look like, but I would appreciate some feedback on this: is this something you would like to add to the library, and is this the way to go? cc @lhoestq @albertvillanova @julien-c
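For readers curious how such a registration could be wired up, here is a rough sketch (the monkey-patching body is an assumption for illustration, not the PR's actual diff; `Dataset.push_to_hub` and `load_dataset` are existing `datasets` APIs):

```python
import pandas as pd
from datasets import Dataset, load_dataset


def register_pandas():
    """Attach hypothetical push_to_hub/load_from_hub helpers to DataFrame."""

    def push_to_hub(self, repo_id: str):
        # delegate to datasets' existing Hub integration
        Dataset.from_pandas(self).push_to_hub(repo_id)

    def load_from_hub(repo_id: str) -> pd.DataFrame:
        return load_dataset(repo_id, split="train").to_pandas()

    pd.DataFrame.push_to_hub = push_to_hub
    pd.DataFrame.load_from_hub = staticmethod(load_from_hub)
```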
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3912/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3912", "html_url": "https://github.com/huggingface/datasets/pull/3912", "diff_url": "https://github.com/huggingface/datasets/pull/3912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3912.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3911/comments
https://api.github.com/repos/huggingface/datasets/issues/3911/events
https://github.com/huggingface/datasets/pull/3911
1,168,652,374
PR_kwDODunzps40aQHz
3,911
Create README.md for CER metric
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-14T16:54:51"
"2022-03-17T17:49:40"
"2022-03-17T17:45:54"
NONE
null
Initial proposal for a CER metric card cc @patrickvonplaten - wdyt this time around? :smile:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3911/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3911", "html_url": "https://github.com/huggingface/datasets/pull/3911", "diff_url": "https://github.com/huggingface/datasets/pull/3911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3911.patch", "merged_at": "2022-03-17T17:45:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/3910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3910/comments
https://api.github.com/repos/huggingface/datasets/issues/3910/events
https://github.com/huggingface/datasets/pull/3910
1,168,579,694
PR_kwDODunzps40aAiX
3,910
Fix text loader to split only on universal newlines
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3910). All of your documentation changes will be reflected on that endpoint.", "Looks like the test needs to be updated for windows ^^'", "I don't think this is the same issue as in https://github.com/oscar-corpus/corpus/issues/18, where the OSCAR metadata has line offsets that use only `\\n` as the newline marker to count lines, not `\\r\\n` or `\\r`.\r\n\r\nIt looks like the OSCAR data loader is opening the data files with `gzip.open` directly and I don't think this text loader is used, but I'm not familiar with a lot of `datasets` internals so I could be mistaken?", "You are right @adrianeboyd.\r\n\r\nThis PR fixes #3729.\r\n\r\nAdditionally, this PR is somehow related to the OSCAR issue. However, the OSCAR issue have multiple root causes: one is the offset initialization (as you pointed out); other is similar to this case: Unicode newlines are not properly handled.\r\n\r\nI will make a change proposal for OSCAR this afternoon.", "@lhoestq I'm working on fixing the Windows tests on my Windows machine...", "I finally changed the approach in order to avoid having \"\\r\\n\" and \"\\r\" line breaks in Python `str` read from files on Windows/old Macintosh machines." ]
"2022-03-14T15:54:58"
"2022-03-15T16:16:11"
"2022-03-15T16:16:09"
MEMBER
null
Currently, the `text` loader splits on a superset of universal newlines, which also contains Unicode line boundaries. See: https://docs.python.org/3/library/stdtypes.html#str.splitlines However, the expected behavior is to get the lines split only on universal newlines: "\n", "\r\n" and "\r". See: oscar-corpus/corpus#18 Fix #3729.
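To make the distinction this PR relies on concrete, here is a small self-contained demonstration; it uses only standard Python string semantics, no `datasets` code.

```python
# str.splitlines() breaks on Unicode line boundaries such as U+2028,
# while universal newlines cover only "\n", "\r\n" and "\r".
text = "line1\u2028line2\nline3"  # U+2028 is LINE SEPARATOR

print(text.splitlines())  # ['line1', 'line2', 'line3'] -- U+2028 splits too
print(text.split("\n"))   # ['line1\u2028line2', 'line3'] -- U+2028 kept intact
```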
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3910/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3910/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3910", "html_url": "https://github.com/huggingface/datasets/pull/3910", "diff_url": "https://github.com/huggingface/datasets/pull/3910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3910.patch", "merged_at": "2022-03-15T16:16:09" }
true
https://api.github.com/repos/huggingface/datasets/issues/3909
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3909/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3909/comments
https://api.github.com/repos/huggingface/datasets/issues/3909/events
https://github.com/huggingface/datasets/issues/3909
1,168,578,058
I_kwDODunzps5FpxYK
3,909
Error loading file audio when downloading the Common Voice dataset directly from the Hub
{ "login": "aliceinland", "id": 30385910, "node_id": "MDQ6VXNlcjMwMzg1OTEw", "avatar_url": "https://avatars.githubusercontent.com/u/30385910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aliceinland", "html_url": "https://github.com/aliceinland", "followers_url": "https://api.github.com/users/aliceinland/followers", "following_url": "https://api.github.com/users/aliceinland/following{/other_user}", "gists_url": "https://api.github.com/users/aliceinland/gists{/gist_id}", "starred_url": "https://api.github.com/users/aliceinland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aliceinland/subscriptions", "organizations_url": "https://api.github.com/users/aliceinland/orgs", "repos_url": "https://api.github.com/users/aliceinland/repos", "events_url": "https://api.github.com/users/aliceinland/events{/privacy}", "received_events_url": "https://api.github.com/users/aliceinland/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! It could an issue with torchaudio, which version of torchaudio are you using ? Can you also try updating `datasets` to 2.0.0 and see if it works ?", "I _might_ have a similar issue. I'm trying to use the librispeech_asr dataset and read it with soundfile.\r\n\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor\r\nimport soundfile as sf\r\n\r\nlibrispeech_eval = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\") # change to \"other\" for other test dataset\r\nwer = load_metric(\"wer\")\r\n\r\nmodel = Speech2TextForConditionalGeneration.from_pretrained(\"facebook/s2t-small-librispeech-asr\").to(\"cuda\")\r\nprocessor = Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\", do_upper_case=True)\r\n\r\ndef map_to_array(batch):\r\n speech, _ = sf.read(batch[\"file\"])\r\n batch[\"speech\"] = speech\r\n return batch\r\n\r\nlibrispeech_eval = librispeech_eval.map(map_to_array)\r\n\r\ndef map_to_pred(batch):\r\n features = processor(batch[\"speech\"], sampling_rate=16000, padding=True, return_tensors=\"pt\")\r\n input_features = features.input_features.to(\"cuda\")\r\n attention_mask = features.attention_mask.to(\"cuda\")\r\n\r\n gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)\r\n batch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)\r\n return batch\r\n\r\nresult = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=[\"speech\"])\r\n\r\nprint(\"WER:\", wer(predictions=result[\"transcription\"], references=result[\"text\"]))\r\n```\r\n\r\nThe code is taken directly from \"https://huggingface.co/facebook/s2t-small-librispeech-asr\".\r\n\r\nThe short error code is \"RuntimeError: Error opening '6930-75918-0000.flac': System error.\" (it can't find the first file), and I agree, I can't find the file either. 
The dataset has downloaded correctly (it says), but on the location, there are only \".arrow\" files, no \".flac\" files.\r\n\r\n**Error message:**\r\n\r\n```python\r\nRuntimeError Traceback (most recent call last)\r\nInput In [15], in <cell line: 16>()\r\n 13 batch[\"speech\"] = speech\r\n 14 return batch\r\n---> 16 librispeech_eval = librispeech_eval.map(map_to_array)\r\n 18 def map_to_pred(batch):\r\n 19 features = processor(batch[\"speech\"], sampling_rate=16000, padding=True, return_tensors=\"pt\")\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:1953, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 1950 disable_tqdm = not logging.is_progress_bar_enabled()\r\n 1952 if num_proc is None or num_proc == 1:\r\n-> 1953 return self._map_single(\r\n 1954 function=function,\r\n 1955 with_indices=with_indices,\r\n 1956 with_rank=with_rank,\r\n 1957 input_columns=input_columns,\r\n 1958 batched=batched,\r\n 1959 batch_size=batch_size,\r\n 1960 drop_last_batch=drop_last_batch,\r\n 1961 remove_columns=remove_columns,\r\n 1962 keep_in_memory=keep_in_memory,\r\n 1963 load_from_cache_file=load_from_cache_file,\r\n 1964 cache_file_name=cache_file_name,\r\n 1965 writer_batch_size=writer_batch_size,\r\n 1966 features=features,\r\n 1967 disable_nullable=disable_nullable,\r\n 1968 fn_kwargs=fn_kwargs,\r\n 1969 new_fingerprint=new_fingerprint,\r\n 1970 disable_tqdm=disable_tqdm,\r\n 1971 desc=desc,\r\n 1972 )\r\n 1973 else:\r\n 1975 def format_cache_file_name(cache_file_name, rank):\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:519, in transmit_tasks.<locals>.wrapper(*args, **kwargs)\r\n 517 self: \"Dataset\" = kwargs.pop(\"self\")\r\n 518 # apply actual function\r\n--> 519 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 520 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 521 for dataset in datasets:\r\n 522 # Remove task templates if a column mapping of the template is no longer valid\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:486, in transmit_format.<locals>.wrapper(*args, **kwargs)\r\n 479 self_format = {\r\n 480 \"type\": self._format_type,\r\n 481 \"format_kwargs\": self._format_kwargs,\r\n 482 \"columns\": self._format_columns,\r\n 483 \"output_all_columns\": self._output_all_columns,\r\n 484 }\r\n 485 # apply actual function\r\n--> 486 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 487 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 488 # re-apply format to the output\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\fingerprint.py:458, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)\r\n 452 kwargs[fingerprint_name] = update_fingerprint(\r\n 453 self._fingerprint, transform, kwargs_for_fingerprint\r\n 454 )\r\n 456 # Call actual function\r\n--> 458 out = func(self, *args, **kwargs)\r\n 460 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n 462 if inplace: # update after calling func so that the fingerprint doesn't change if the 
function fails\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:2318, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)\r\n 2316 if not batched:\r\n 2317 for i, example in enumerate(pbar):\r\n-> 2318 example = apply_function_on_filtered_inputs(example, i, offset=offset)\r\n 2319 if update_data:\r\n 2320 if i == 0:\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:2218, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)\r\n 2216 if with_rank:\r\n 2217 additional_args += (rank,)\r\n-> 2218 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n 2219 if update_data is None:\r\n 2220 # Check if the function returns updated examples\r\n 2221 update_data = isinstance(processed_inputs, (Mapping, pa.Table))\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:1913, in Dataset.map.<locals>.decorate.<locals>.decorated(item, *args, **kwargs)\r\n 1909 decorated_item = (\r\n 1910 Example(item, features=self.features) if not batched else Batch(item, features=self.features)\r\n 1911 )\r\n 1912 # Use the LazyDict internally, while mapping the function\r\n-> 1913 result = f(decorated_item, *args, **kwargs)\r\n 1914 # Return a standard dict\r\n 1915 return result.data if isinstance(result, LazyDict) else result\r\n\r\nInput In [15], in map_to_array(batch)\r\n 11 def map_to_array(batch):\r\n---> 12 speech, _ = sf.read(batch[\"file\"])\r\n 13 batch[\"speech\"] = speech\r\n 14 return batch\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:256, in read(file, frames, start, stop, dtype, always_2d, fill_value, out, samplerate, channels, format, subtype, endian, closefd)\r\n 170 def read(file, frames=-1, start=0, stop=None, dtype='float64', always_2d=False,\r\n 171 fill_value=None, out=None, samplerate=None, channels=None,\r\n 172 format=None, subtype=None, endian=None, closefd=True):\r\n 173 \"\"\"Provide audio data from a sound file as NumPy array.\r\n 174 \r\n 175 By default, the whole file is read from the beginning, but the\r\n (...)\r\n 254 \r\n 255 \"\"\"\r\n--> 256 with SoundFile(file, 'r', samplerate, channels,\r\n 257 subtype, endian, format, closefd) as f:\r\n 258 frames = f._prepare_read(start, stop, frames)\r\n 259 data = f.read(frames, dtype, always_2d, fill_value, out)\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)\r\n 626 self._mode = mode\r\n 627 self._info = _create_info_struct(file, mode, samplerate, channels,\r\n 628 format, subtype, endian)\r\n--> 629 self._file = self._open(file, mode_int, closefd)\r\n 630 if set(mode).issuperset('r+') and self.seekable():\r\n 631 # Move write position to 0 (like in Python file objects)\r\n 632 self.seek(0)\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:1183, in SoundFile._open(self, file, mode_int, closefd)\r\n 1181 else:\r\n 1182 raise TypeError(\"Invalid file: {0!r}\".format(self.name))\r\n-> 1183 
_error_check(_snd.sf_error(file_ptr),\r\n 1184 \"Error opening {0!r}: \".format(self.name))\r\n 1185 if mode_int == _snd.SFM_WRITE:\r\n 1186 # Due to a bug in libsndfile version <= 1.0.25, frames != 0\r\n 1187 # when opening a named pipe in SFM_WRITE mode.\r\n 1188 # See http://github.com/erikd/libsndfile/issues/77.\r\n 1189 self._info.frames = 0\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:1357, in _error_check(err, prefix)\r\n 1355 if err != 0:\r\n 1356 err_str = _snd.sf_error_number(err)\r\n-> 1357 raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))\r\n\r\nRuntimeError: Error opening '6930-75918-0000.flac': System error.\r\n```\r\n\r\n**Package versions:**\r\n```python\r\npython: 3.9\r\ntransformers: 4.17.0\r\ndatasets: 2.0.0\r\nSoundFile: 0.10.3.post1\r\n```\r\n", "Hi ! In `datasets` 2.0 can access the audio array with `librispeech_eval[0][\"audio\"][\"array\"]` already, no need to use `map_to_array`. See our documentation on [how to process audio data](https://huggingface.co/docs/datasets/audio_process) :)\r\n\r\ncc @patrickvonplaten we will need to update the readme at [facebook/s2t-small-librispeech-asr](https://huggingface.co/facebook/s2t-small-librispeech-asr) as well as https://huggingface.co/docs/transformers/model_doc/speech_to_text", "Thanks!\r\n\r\nAnd sorry for posting this problem in what turned on to be an unrelated thread.\r\n\r\nI rewrote the code, and the model works. The WER is 0.137 however, so I'm not sure if I have missed a step. I will look further into that at a later point. The transcriptions look good through manual inspection.\r\n\r\nThe rewritten code:\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor, Wav2Vec2Processor\r\n\r\nlibrispeech_eval = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\") # change to \"other\" for other test dataset\r\nwer = load_metric(\"wer\")\r\n\r\nmodel = Speech2TextForConditionalGeneration.from_pretrained(\"facebook/s2t-small-librispeech-asr\").to(\"cuda\")\r\nprocessor = Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\", do_upper_case=True)\r\n\r\ndef map_to_pred(batch):\r\n audio = batch[\"audio\"]\r\n features = processor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"], padding=True, return_tensors=\"pt\")\r\n input_features = features.input_features.to(\"cuda\")\r\n attention_mask = features.attention_mask.to(\"cuda\")\r\n\r\n gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)\r\n batch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)\r\n return batch\r\n\r\nresult = librispeech_eval.map(map_to_pred)#, batched=True, batch_size=8)\r\n\r\nprint(\"WER:\", wer.compute(predictions=result[\"transcription\"], references=result[\"text\"]))\r\n```", "I think the issue comes from the fact that you set `batched=False` while `map_to_pred` still returns a list of strings for \"transcription\". You can fix it by adding `[0]` at the end of this line to get the string:\r\n```python\r\nbatch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]\r\n```", "Updating as many model cards now as I can find", "https://github.com/huggingface/transformers/pull/16611", "We no longer use `torchaudio` for decoding MP3 files, and the problem with model cards has been addressed, so I'm closing this issue." ]
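Editor's note on the exchange above: with an `Audio` feature, the decoded samples are exposed directly on access, so the `sf.read`/`map_to_array` step is unnecessary. A minimal sketch, assuming `datasets>=2.0` with the audio extra installed:

```python
# The Audio feature decodes lazily on access; "array" and "sampling_rate"
# come back ready to feed a processor, with no soundfile call required.
from datasets import load_dataset

ds = load_dataset("librispeech_asr", "clean", split="test")
sample = ds[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)
```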
"2022-03-14T15:53:50"
"2023-03-02T15:31:27"
"2023-03-02T15:31:26"
NONE
null
## Describe the bug When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files cannot be opened. ## Steps to reproduce the bug ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "it", split="test") #test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'}) wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\β€œ\'\οΏ½]' resampler = torchaudio.transforms.Resample(48_000, 16_000) ``` ## Expected results The Common Voice dataset is downloaded and correctly loaded with the use of the Hugging Face datasets library. ## Actual results The error is: ```python 0ex [00:00, ?ex/s] --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-48-ef87f4129e6e> in <module> 7 return batch 8 ----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn) /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2107 2108 if num_proc is None or num_proc == 1: -> 2109 return self._map_single( 2110 function=function, 2111 with_indices=with_indices, /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 516 self: "Dataset" = kwargs.pop("self") 517 # apply actual function --> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 520 for dataset in datasets: /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 483 } 484 # apply actual function --> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 487 # re-apply format to the output /opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 411 # Call actual function 412 --> 413 out = func(self, *args, **kwargs) 414 415 # Update fingerprint of in-place transforms + update in-place history of transforms /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2465 if not batched: 2466 for i, example in enumerate(pbar): -> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset) 2468 if update_data: 2469 if i == 0: /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset) 2372 if with_rank: 2373 additional_args += (rank,) -> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) 2375 if update_data is None: 2376 # Check if the function returns updated examples /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs) 2067 ) 2068 # Use the LazyDict internally, while mapping the function -> 2069 result = f(decorated_item, *args, **kwargs) 2070 # Return a standard dict 2071 return result.data if isinstance(result, LazyDict) else result <ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch) 3 def speech_file_to_array_fn(batch): 4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() ----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"]) 6 batch["speech"] = resampler(speech_array).squeeze().numpy() 7 return batch /opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 150 filepath, frame_offset, num_frames, normalize, channels_first, format) 151 filepath = os.fspath(filepath) --> 152 return torch.ops.torchaudio.sox_io_load_audio_file( 153 filepath, frame_offset, num_frames, normalize, channels_first, format) 154 RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3 ``` ## Environment info - `datasets` version: 1.18.4 - Platform: Linux-5.4.0-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 7.0.0
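A hedged alternative to the `torchaudio.load` + `Resample` pipeline in this report: newer `datasets` versions can decode and resample through the `Audio` feature itself, which sidesteps the torchaudio MP3 backend entirely. A sketch, assuming `datasets>=1.18` with the audio extra installed:

```python
# Casting the "audio" column resamples on access, replacing the manual
# torchaudio.transforms.Resample(48_000, 16_000) step from the report.
from datasets import load_dataset, Audio

test_dataset = load_dataset("common_voice", "it", split="test")
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))
array = test_dataset[0]["audio"]["array"]  # decoded and resampled to 16 kHz
```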
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3909/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3909/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3908/comments
https://api.github.com/repos/huggingface/datasets/issues/3908/events
https://github.com/huggingface/datasets/pull/3908
1,168,576,963
PR_kwDODunzps40Z_9F
3,908
Update README.md for SQuAD v2 metric
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3908). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-14T15:53:10"
"2022-03-15T17:04:11"
"2022-03-15T17:04:11"
NONE
null
Putting "Values from popular papers" as a subsection of "Output values"
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3908/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3908/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3908", "html_url": "https://github.com/huggingface/datasets/pull/3908", "diff_url": "https://github.com/huggingface/datasets/pull/3908.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3908.patch", "merged_at": "2022-03-15T17:04:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/3907
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3907/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3907/comments
https://api.github.com/repos/huggingface/datasets/issues/3907/events
https://github.com/huggingface/datasets/pull/3907
1,168,575,998
PR_kwDODunzps40Z_vd
3,907
Update README.md for SQuAD metric
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3907). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-14T15:52:31"
"2022-03-15T17:04:20"
"2022-03-15T17:04:19"
NONE
null
Putting "Values from popular papers" as a subsection of "Output values"
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3907/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3907/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3907", "html_url": "https://github.com/huggingface/datasets/pull/3907", "diff_url": "https://github.com/huggingface/datasets/pull/3907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3907.patch", "merged_at": "2022-03-15T17:04:19" }
true
https://api.github.com/repos/huggingface/datasets/issues/3906
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3906/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3906/comments
https://api.github.com/repos/huggingface/datasets/issues/3906/events
https://github.com/huggingface/datasets/issues/3906
1,168,496,328
I_kwDODunzps5FpdbI
3,906
NonMatchingChecksumError on Spider dataset
{ "login": "kolk", "id": 9049591, "node_id": "MDQ6VXNlcjkwNDk1OTE=", "avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kolk", "html_url": "https://github.com/kolk", "followers_url": "https://api.github.com/users/kolk/followers", "following_url": "https://api.github.com/users/kolk/following{/other_user}", "gists_url": "https://api.github.com/users/kolk/gists{/gist_id}", "starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kolk/subscriptions", "organizations_url": "https://api.github.com/users/kolk/orgs", "repos_url": "https://api.github.com/users/kolk/repos", "events_url": "https://api.github.com/users/kolk/events{/privacy}", "received_events_url": "https://api.github.com/users/kolk/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @kolk, thanks for reporting.\r\n\r\nIndeed, Google Drive service recently changed their service and we had to add a fix to our library to cope with that change:\r\n- #3787 \r\n\r\nWe just made patch release last week: 1.18.4 https://github.com/huggingface/datasets/releases/tag/1.18.4\r\n\r\nPlease, feel free to update your local `datasets` version, so that you get the fix:\r\n```shell\r\npip install -U datasets\r\n```" ]
"2022-03-14T14:54:53"
"2022-03-15T07:09:51"
"2022-03-15T07:09:51"
NONE
null
## Describe the bug Failure to generate the dataset ```spider``` because of a checksum error for the dataset source files. ## Steps to reproduce the bug ``` from datasets import load_dataset spider = load_dataset("spider") ``` ## Expected results Checksums should match for files from the URL ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0'] ## Actual results ``` >>> load_dataset("spider") load_dataset("spider") Downloading and preparing dataset spider/spider (download: 95.12 MiB, generated: 5.17 MiB, post-processed: Unknown size, total: 100.29 MiB) to /home/user/.cache/huggingface/datasets/spider/spider/1.0.0/79778ebea87c59b19411f1eb3eda317e9dd5f7788a556d837ef25c3ae6e5e8b7... Traceback (most recent call last): File "/home/user/py3_env/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-5-d4cb54197348>", line 1, in <module> load_dataset("spider") File "/home/user/py3_env/lib/python3.8/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 665, in _download_and_prepare verify_checksums( File "/home/user/py3_env/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0'] ``` ## Environment info datasets version: 1.18.3 Platform: Ubuntu 20 LTS Python version: 3.8.10 PyArrow version: 6.0.1
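Editor's note: per the maintainer reply above, the mismatch stems from the Google Drive change fixed in `datasets` 1.18.4, so the first debugging step is simply confirming the installed version. A minimal check:

```python
# The Google Drive fix shipped in datasets 1.18.4; older versions keep
# raising NonMatchingChecksumError for Drive-hosted sources like spider.
import datasets

print(datasets.__version__)  # expect >= 1.18.4 before retrying load_dataset("spider")
```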
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3906/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3906/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3905
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3905/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3905/comments
https://api.github.com/repos/huggingface/datasets/issues/3905/events
https://github.com/huggingface/datasets/pull/3905
1,168,320,568
PR_kwDODunzps40ZJQJ
3,905
Perplexity Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3905). All of your documentation changes will be reflected on that endpoint.", "I'm wondering if we should add that perplexity can be used for analyzing datasets as well", "Otherwise, looks good! Good job, @emibaylor !" ]
"2022-03-14T12:39:40"
"2022-03-16T19:38:56"
"2022-03-16T19:38:56"
CONTRIBUTOR
null
Add Perplexity metric card Note that it is currently still missing the citation, but I plan to add it later today.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3905/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3905/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3905", "html_url": "https://github.com/huggingface/datasets/pull/3905", "diff_url": "https://github.com/huggingface/datasets/pull/3905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3905.patch", "merged_at": "2022-03-16T19:38:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/3904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3904/comments
https://api.github.com/repos/huggingface/datasets/issues/3904/events
https://github.com/huggingface/datasets/issues/3904
1,167,730,095
I_kwDODunzps5FmiWv
3,904
CONLL2003 Dataset not available
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @omarespejel.\r\n\r\nI'm sorry but I can't reproduce the issue: the loading of the dataset works perfecto for me and I can reach the data URL: https://data.deepai.org/conll2003.zip\r\n\r\nMight it be due to a temporary problem in the data owner site (https://data.deepai.org/) that is fixed now?\r\nCould you please try loading the dataset again and tell if the problem persists?", "@omarespejel I'm closing this issue. Feel free to reopen it if the problem persists.", "getting same issue. Can't find any solution.", "I am getting the same issue. I use google colab with CPU.\r\nThe code I used is exactly the same as described above.\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"conll2003\")\r\n```\r\n\r\nThe produced error:\r\n![image](https://github.com/huggingface/datasets/assets/9371628/d87f7fb0-ef58-4755-abb5-f8f92c51fe02)\r\n\r\nNote: This error is different from what was initially described in this thread. This is because I use CPU. When I use GPU I reproduce the same initial error of the thread.\r\n\r\nMoreover, I receive the following warning:\r\n```\r\nWARNING:urllib3.connection:Certificate did not match expected hostname: data.deepai.org. Certificate: {'subject': ((('commonName', '*.b-cdn.net'),),), 'issuer': ((('countryName', 'GB'),), (('stateOrProvinceName', 'Greater Manchester'),), (('localityName', 'Salford'),), (('organizationName', 'Sectigo Limited'),), (('commonName', 'Sectigo RSA Domain Validation Secure Server CA'),)), 'version': 3, 'serialNumber': 'DDED48B13E1EA03983E833AB2C35EF07', 'notBefore': 'Nov 7 00:00:00 2022 GMT', 'notAfter': 'Nov 11 23:59:59 2023 GMT', 'subjectAltName': (('DNS', '*.b-cdn.net'), ('DNS', 'b-cdn.net')), 'OCSP': ('http://ocsp.sectigo.com/',), 'caIssuers': ('http://crt.sectigo.com/SectigoRSADomainValidationSecureServerCA.crt',)}\r\nDownloading and preparing dataset conll2003/conll2003 to /root/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/9a4d16a94f8674ba3466315300359b0acd891b68b6c8743ddf60b9c702adce98...\r\nWARNING:urllib3.connection:Certificate did not match expected hostname: data.deepai.org. Certificate: {'subject': ((('commonName', '*.b-cdn.net'),),), 'issuer': ((('countryName', 'GB'),), (('stateOrProvinceName', 'Greater Manchester'),), (('localityName', 'Salford'),), (('organizationName', 'Sectigo Limited'),), (('commonName', 'Sectigo RSA Domain Validation Secure Server CA'),)), 'version': 3, 'serialNumber': 'DDED48B13E1EA03983E833AB2C35EF07', 'notBefore': 'Nov 7 00:00:00 2022 GMT', 'notAfter': 'Nov 11 23:59:59 2023 GMT', 'subjectAltName': (('DNS', '*.b-cdn.net'), ('DNS', 'b-cdn.net')), 'OCSP': ('http://ocsp.sectigo.com/',), 'caIssuers': ('http://crt.sectigo.com/SectigoRSADomainValidationSecureServerCA.crt',)}\r\n```\r\n" ]
"2022-03-13T23:46:15"
"2023-06-28T18:08:16"
"2022-03-17T08:21:32"
NONE
null
## Describe the bug The [CONLL2003](https://huggingface.co/datasets/conll2003) dataset loader can no longer reach 'https://data.deepai.org/conll2003.zip' ![image](https://user-images.githubusercontent.com/4755430/158084483-ff83631c-5154-4823-892d-577bf1166db0.png) ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset("conll2003") ``` ## Expected results Download the conll2003 dataset. ## Actual results Error: `ConnectionError: Couldn't reach https://data.deepai.org/conll2003.zip (error 502)`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3904/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3903
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3903/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3903/comments
https://api.github.com/repos/huggingface/datasets/issues/3903/events
https://github.com/huggingface/datasets/pull/3903
1,167,521,627
PR_kwDODunzps40WnkI
3,903
Add Biwi Kinect Head Pose dataset.
{ "login": "dnaveenr", "id": 17746528, "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dnaveenr", "html_url": "https://github.com/dnaveenr", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "repos_url": "https://api.github.com/users/dnaveenr/repos", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the detailed explanation of the structure!\r\n\r\n1. IMO it makes the most sense to yield one example for each person (so the total of 24 examples), so the features dict should be similar to this:\r\n \r\n ```python\r\n features = Features({\r\n \"rgb\": Sequence(Image()), # for the png frames\r\n \"rgb_cal\": {\"intrisic_mat\": Array2D(shape=(3, 3), dtype=\"float32\"), \"extrinsic_mat\": {\"rotation\": Array2D(shape=(3, 3), dtype=\"float32\"), \"translation\": Sequence(Value(\"float32\", length=3)}},\r\n \"depth\": Sequence(Value(\"string\")), # for the depth frames\r\n \"depth_cal\": the same as \"rgb_cal\",\r\n \"head_pose_gt\": Sequence({\"center\": Sequence(Value(\"float32\", length=3), \"rotation\": Array2D(shape=(3, 3), dtype=\"float32\")}),\r\n \"head_template\": Value(\"string\"), # for the person's obj file\r\n\r\n })\r\n ```\r\n We can add a \"Data Processing\" section to the card to explain how to parse the files.\r\n\r\n\r\n2. Yes, it's ok to parse the files as long as it doesn't take too much time/memory (e.g., it's ok to parse the `*_pose.txt` or `*.cal` files, but it's better to leave the `*_depth.bin` or `*.obj` files unprocessed and yield the paths to them)", "Thanks for the suggestions @mariosasko, yielding one example for each person would make things much easier.\r\nOkay. I'll look at parsing the files and then displaying the information.", "Added the following : \r\n- Features, I have included sequence_number and subject_id along with the features you had suggested.\r\n- Tested loading of the dataset along with dummy_data and full_data tests.\r\n- Created the dataset_infos.json file.\r\n\r\nTo-Do :\r\n- [x] Update Dataset Cards with more details.\r\n- [x] \"Data Processing\" section\r\n\r\nAny inputs on what to include in the \"Data Processing\" section ?\r\n", "@mariosasko Please could you review this when you get time. Thank you.", "In the Data Processing section, I've added example code for a compressed binary depth image file. Updated the Readme as well. ", "@mariosasko / @lhoestq , Please could you review this when you get time. Thank you.", "Created an issue here: https://github.com/huggingface/datasets/issues/4152", "Got it. Thanks for the comments. I've collapsed the C++ code in the readme and added the suggestions.", "Hi ! The `AttributeError ` bug has been fixed, feel free to merge `master` into your branch ;)", "I haven't been able to figure out why CI is failing, the error shown is : \r\n\r\n```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Parsing:\r\nE list index out of range\r\nE The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE list index out of range\r\n```\r\n\r\nAny inputs would be helpful.", "I think it's because there are tabulations in the c++ code, can you replace them with regular spaces please ?\r\n\r\n(then in another PR we can maybe fix the Readme parser to support text indented with tabulations)", "@lhoestq , initially the idea was to have one example = one image with an additional field mentioning the frame_number. But each subject, we had a head template, calibration information for the depth and the color camera which was common to all the examples for that subject. 
Also, the images were continuous frames.\r\n@mariosasko suggested this structure and it made sense to group the images together for a particular subject.", "> Don't you think it would be more practical to have one example = one image in this dataset ?\r\n\r\nHaving one example = one image would be good but since we have a head template, calibration information for the depth and the color camera which is common to all the images for that subject and the images being continuous frames, I think it makes sense to group the images together for each subject. This will make the feature representation easier.\r\n\r\n", "Ok I see, sounds good then. Users can still separate the images if they want to", "The CI fails are unrelated to this PR and fixed on master, merging !", "Great. Thanks @lhoestq , I think we can close this issue now. ( #3822 )" ]
"2022-03-13T08:59:21"
"2022-05-31T17:02:19"
"2022-05-31T12:15:58"
CONTRIBUTOR
null
This PR adds the Biwi Kinect Head Pose dataset. Dataset Request : Add Biwi Kinect Head Pose Database [#3822](https://github.com/huggingface/datasets/issues/3822) The Biwi Kinect Head Pose Database is acquired with the Microsoft Kinect sensor, a structured IR light device. It contains 15K images of 20 people (6 females and 14 males), where 4 people were recorded twice. For each frame, there is : - a depth image (.bin file) - a corresponding rgb image (both 640x480 pixels), - annotation (present inside a .txt file) The ground truth is the 3D location of the head and its rotation. The dataset structure is as follows : ``` - 01.obj - 01 - frame_00003_depth.bin - frame_00003_pose.txt - frame_00003_rgb.png . . . - 02.obj - 02 - frame_00003_depth.bin - frame_00003_pose.txt - frame_00003_rgb.png . . . ``` Preview of frame_00003_pose.txt : ``` 0.988397 0.0731349 0.133128 -0.0441539 0.976945 -0.208876 -0.145334 0.200575 0.968838 126.665 40.4515 876.198 ``` I have used the following dataset features : ``` features=datasets.Features( { "person_id": datasets.Value("string"), "frame_number": datasets.Value("string"), "depth_image": datasets.Value("string"), "rgb_image": datasets.Image(), "3D_head_center": datasets.Array2D(shape=(3, 3), dtype="float"), "3D_head_rotation": datasets.Value("float"), } ``` I am giving the path to the depth_image here. I need some inputs for the following : 1. For each person, the dataset has the following additional information : ``` For each sequence, the corresponding .obj file represents a head template deformed to match the neutral face of that specific person. [*.obj file] In each folder, two .cal files contain calibration information for the depth and the color camera, e.g., the intrinsic camera matrix of the depth camera and the global rotation and translation to the rgb camera. ``` Wanted to know how we can represent these features ? 2. For _generate_examples , do I parse the directories and fetch the required information ? This would mean reading the .txt file to obtain the "3D_head_center" and "3D_head_rotation" details. We could precompute the features information and have a metadata file and use the metadata file to yield information in _generate_examples ? Wanted your thoughts on the best approach for this ?
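For readers who want the annotation parsing spelled out, a minimal sketch follows. The layout of `*_pose.txt` (three rotation rows, then the head center) is inferred from the preview above; the function name is an illustrative assumption.

```python
# Hedged sketch: parse a Biwi *_pose.txt as previewed above. The first
# three non-empty lines form the 3x3 rotation, the last one the 3D center.
import numpy as np


def parse_pose(path):
    with open(path) as f:
        rows = [line.split() for line in f if line.strip()]
    rotation = np.array(rows[:3], dtype="float32")  # 3x3 head rotation
    center = np.array(rows[3], dtype="float32")     # 3D head center (x, y, z)
    return rotation, center
```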
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3903/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3903/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3903", "html_url": "https://github.com/huggingface/datasets/pull/3903", "diff_url": "https://github.com/huggingface/datasets/pull/3903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3903.patch", "merged_at": "2022-05-31T12:15:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/3902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3902/comments
https://api.github.com/repos/huggingface/datasets/issues/3902/events
https://github.com/huggingface/datasets/issues/3902
1,167,403,377
I_kwDODunzps5FlSlx
3,902
Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils'
{ "login": "arunasank", "id": 3166852, "node_id": "MDQ6VXNlcjMxNjY4NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3166852?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arunasank", "html_url": "https://github.com/arunasank", "followers_url": "https://api.github.com/users/arunasank/followers", "following_url": "https://api.github.com/users/arunasank/following{/other_user}", "gists_url": "https://api.github.com/users/arunasank/gists{/gist_id}", "starred_url": "https://api.github.com/users/arunasank/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arunasank/subscriptions", "organizations_url": "https://api.github.com/users/arunasank/orgs", "repos_url": "https://api.github.com/users/arunasank/repos", "events_url": "https://api.github.com/users/arunasank/events{/privacy}", "received_events_url": "https://api.github.com/users/arunasank/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Update: `\"python3 -c \"from from datasets import Dataset, DatasetDict\"` works, but not if I import without the `python3 -c`", "Hi @arunasank, thanks for reporting.\r\n\r\nIt seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be that `python3` runs in a Python virtual env (with an updated version of `fsspec`); whereas the error arises when you run the import from other Python virtual env (with an old version of `fsspec`).\r\n\r\nIn order to fix this, you should update `fsspec` from within the \"problematic\" Python virtual env:\r\n```\r\npip install -U \"fsspec[http]>=2021.05.0\"", "I'm closing this issue, @arunasank.\r\n\r\nFeel free to re-open it if the problem persists. ", "from lightgbm import LGBMModel,LGBMClassifier, plot_importance\r\nafter importing lib getting (partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import) error, can help me", "@deepakmahtha I think you are not using `datasets`: this is the GitHub repository of Hugging Face Datasets.\r\n\r\nIf you are using `lightgbm`, you should report the issue to their repository instead.\r\n\r\nAnyway, we have proposed a possible fix just in a comment above: to update fsspec.\r\nhttps://github.com/huggingface/datasets/issues/3902#issuecomment-1066517824" ]
"2022-03-12T21:22:03"
"2023-02-09T14:53:49"
"2022-03-22T07:10:41"
NONE
null
## Describe the bug Unable to import datasets ## Steps to reproduce the bug ```python from datasets import Dataset, DatasetDict ``` ## Expected results The import works without errors ## Actual results ``` AttributeError Traceback (most recent call last) <ipython-input-37-c8cfcbe62127> in <module> 11 # from tqdm import tqdm 12 # import torch ---> 13 from datasets import Dataset 14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling 15 # from sentence_transformers import SentenceTransformer ~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module> 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module> 46 ) 47 ---> 48 import fsspec 49 import numpy as np 50 import pandas as pd ~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module> 10 from . import _version, caching 11 from .callbacks import Callback ---> 12 from .core import get_fs_token_paths, open, open_files, open_local 13 from .exceptions import FSTimeoutError 14 from .mapping import FSMap, get_mapper ~/.local/lib/python3.8/site-packages/fsspec/core.py in <module> 16 caches, 17 ) ---> 18 from .compression import compr 19 from .registry import filesystem, get_filesystem_class 20 from .utils import ( ~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module> 68 69 ---> 70 register_compression("zip", unzip, "zip") 71 register_compression("bz2", BZ2File, "bz2") 72 ~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force) 44 45 for ext in extensions: ---> 46 if ext in fsspec.utils.compressions and not force: 47 raise ValueError( 48 "Duplicate compression file extension: %s (%s)" % (ext, name) AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4 - Platform: Jupyter notebook - Python version: 3.8.10 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3902/timeline
null
completed
null
null
false
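An aside on the record above: the suggested fix is to upgrade `fsspec` inside the same virtual env that fails to import `datasets`. Below is a minimal sketch of a version guard, assuming `fsspec`'s calendar-style versioning; the helper function is hypothetical and not part of either library:

```python
# Guard against the stale-fsspec circular import described in issue 3902.
# Assumes fsspec uses calendar versioning such as "2021.05.0" (hypothetical helper).
import importlib.metadata

MIN_FSSPEC = (2021, 5, 0)

def fsspec_is_recent_enough() -> bool:
    raw = importlib.metadata.version("fsspec")  # e.g. "2021.05.0"
    parts = tuple(int(p) for p in raw.split(".")[:3])
    return parts >= MIN_FSSPEC

if not fsspec_is_recent_enough():
    raise RuntimeError(
        'fsspec is too old; run: pip install -U "fsspec[http]>=2021.05.0" '
        "inside the virtual env you import datasets from"
    )

from datasets import Dataset, DatasetDict  # noqa: E402  # should now import cleanly
```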
https://api.github.com/repos/huggingface/datasets/issues/3901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3901/comments
https://api.github.com/repos/huggingface/datasets/issues/3901/events
https://github.com/huggingface/datasets/issues/3901
1,167,339,773
I_kwDODunzps5FlDD9
3,901
Dataset viewer issue for IndicParaphrase- the preview doesn't show
{ "login": "ratishsp", "id": 3006607, "node_id": "MDQ6VXNlcjMwMDY2MDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3006607?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ratishsp", "html_url": "https://github.com/ratishsp", "followers_url": "https://api.github.com/users/ratishsp/followers", "following_url": "https://api.github.com/users/ratishsp/following{/other_user}", "gists_url": "https://api.github.com/users/ratishsp/gists{/gist_id}", "starred_url": "https://api.github.com/users/ratishsp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ratishsp/subscriptions", "organizations_url": "https://api.github.com/users/ratishsp/orgs", "repos_url": "https://api.github.com/users/ratishsp/repos", "events_url": "https://api.github.com/users/ratishsp/events{/privacy}", "received_events_url": "https://api.github.com/users/ratishsp/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "It seems to have been fixed:\r\n\r\n<img width=\"1534\" alt=\"Capture d’écran 2022-04-12 aΜ€ 14 10 07\" src=\"https://user-images.githubusercontent.com/1676121/162959599-6b7fef7c-8411-4e03-8f00-90040a658079.png\">\r\n" ]
"2022-03-12T16:56:05"
"2022-04-12T12:10:50"
"2022-04-12T12:10:49"
NONE
null
## Dataset viewer issue for '*IndicParaphrase*' **Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)* *The preview of the dataset doesn't come up. The error on the console is: Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/hi_IndicParaphrase_v1.0.tar'* Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3901/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3900
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3900/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3900/comments
https://api.github.com/repos/huggingface/datasets/issues/3900/events
https://github.com/huggingface/datasets/pull/3900
1,167,224,903
PR_kwDODunzps40VxRh
3,900
Add MetaShift dataset
{ "login": "dnaveenr", "id": 17746528, "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dnaveenr", "html_url": "https://github.com/dnaveenr", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "repos_url": "https://api.github.com/users/dnaveenr/repos", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq Please could you review this when you get time. Thank you.", "Thanks a lot for your inputs @mariosasko .\r\n> Maybe we can add the generated meta-graphs to the card as images (with attributions)?\r\n\r\nYes. We can do this for the default set of classes. Will add this.\r\n\r\n> Would be cool if we could have them as additional configs. Also, maybe we could have configs that expose [image metadata](https://github.com/Weixin-Liang/MetaShift/tree/main/dataset/meta_data) from the https://nlp.stanford.edu/data/gqa/sceneGraphs.zip file (this file is downloaded in the script but not used).\r\n\r\nI'll try adding the bonus section as additional config. \r\nRegarding exposing the image metadata with a config parameter, how will we showcase/display this information ?\r\n", "> Regarding exposing the image metadata with a config parameter, how will we showcase/display this information ?\r\n\r\nOh, I forgot to mention that. Let's add a `Dataset Usage` section to the card to document the params (similar to this: https://huggingface.co/datasets/electricity_load_diagrams#dataset-usage). Also, feel free to add the constants that can be tuned as config params (e.g. `IMAGE_SUBSET_SIZE_THRESHOLD` or the `5` in `len(subject_data) <= 5`).", "Okay. Got it. Will add these and constants as config parameters.\r\n\r\nThe image metadata from scene graphs looks like this : \r\n```json\r\n{\r\n \"2407890\": {\r\n \"width\": 640,\r\n \"height\": 480,\r\n \"location\": \"living room\",\r\n \"weather\": none,\r\n \"objects\": {\r\n \"271881\": {\r\n \"name\": \"chair\",\r\n \"x\": 220,\r\n \"y\": 310,\r\n \"w\": 50,\r\n \"h\": 80,\r\n \"attributes\": [\"brown\", \"wooden\", \"small\"],\r\n \"relations\": {\r\n \"32452\": {\r\n \"name\": \"on\",\r\n \"object\": \"275312\"\r\n },\r\n \"32452\": {\r\n \"name\": \"near\",\r\n \"object\": \"279472\"\r\n } \r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n``load_dataset(\"metashift\", selected_classes=[\"cat\", \"dog\", ...], image_metadata=True)``\r\nHow do we showcase/display the image metadata(json) information ?\r\n", "> How do we showcase/display the image metadata(json) information ?\r\n\r\nWe can add the JSON fields as keys to the features dict:\r\n```python\r\n if self.config.image_metadata:\r\n features.update({\"width\": Value(\"int\"), \"height\": Value(\"int\"), \"location\": Value(\"string\"), ...}) \r\n```\r\n\r\nP.S. Would rename `image_metadata` to `with_image_metadata` ", "I have added the following : \r\n- Added the meta-graphs to the card as images under the Section \"Dataset Meta-Graphs\".\r\n- Generate the Attributes-Dataset using config parameter. [ [Link](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) ]\r\n- Expose image metadata using config parameter.\r\nFormat of the image metadata is as follows : [Link](https://cs.stanford.edu/people/dorarad/gqa/download.html)\r\nI have modified the \"Objects\" which is dict to a list of dicts with an additional parameter named object_id. 
\r\nI have defined the structure as follows : \r\n```\r\n{\r\n \"width\": datasets.Value(\"int64\"),\r\n \"height\": datasets.Value(\"int64\"),\r\n \"location\": datasets.Value(\"string\"),\r\n \"weather\": datasets.Value(\"string\"),\r\n \"objects\": datasets.Sequence(\r\n {\r\n \"object_id\": datasets.Value(\"string\"),\r\n \"name\": datasets.Value(\"string\"),\r\n \"x\": datasets.Value(\"int64\"),\r\n \"y\": datasets.Value(\"int64\"),\r\n \"w\": datasets.Value(\"int64\"),\r\n \"h\": datasets.Value(\"int64\"),\r\n \"attributes\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"relations\": datasets.Sequence(\r\n {\r\n \"name\": datasets.Value(\"string\"),\r\n \"object\": datasets.Value(\"string\"),\r\n }\r\n ),\r\n }\r\n ),\r\n}\r\n```\r\nProblem is that objects is not being shown as list of dicts. The output looks as follows : \r\n\r\n> metashift_dataset['train'][0]\r\n\r\n```json \r\n{'image_id': '2338755', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x281 at 0x7F066C5A49D0>, 'label': 0, 'context': 'ground', 'width': 500, 'height': 281, 'location': None, 'weather': None, 'objects': {'object_id': ['3070704', '3070705', '3070706', '2416713', '3070702', '2790660', '3063157', '2354960', '2037127', '2392939', '2912743', '2125407', '2735257', '3260906', '2351018', '3288269', '3699852', '2734378', '3421201', '2863115'], 'name': ['bicycle', 'bicycle', 'bicycle', 'boot', 'bicycle', 'motorcycle', 'pepperoni', 'head', 'building', 'wall', 'shorts', 'people', 'wheel', 'bricks', 'man', 'cat', 'boot', 'door', 'ground', 'building'], 'x': [137, 371, 458, 215, 468, 399, 368, 245, 0, 140, 260, 284, 138, 451, 339, 187, 210, 26, 0, 313], 'y': [116, 86, 94, 150, 91, 80, 107, 22, 0, 44, 109, 69, 145, 226, 69, 22, 230, 0, 119, 0], 'w': [197, 27, 15, 73, 24, 53, 9, 37, 289, 46, 43, 30, 74, 28, 35, 116, 53, 107, 500, 55], 'h': [126, 25, 38, 128, 43, 50, 16, 44, 158, 73, 51, 52, 97, 15, 73, 252, 46, 147, 162, 77], 'attributes': [[], [], [], ['white'], [], [], [], [], [], [], [], [], [], [], [], ['white'], ['white'], ['large', 'black'], ['brick'], []], 'relations': [{'name': ['to the left of'], 'object': ['3260906']}, {'name': ['to the left of', 'to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the right of'], 'object': ['3070706', '2351018', '2125407', '2790660', '2037127', '3070702', '3288269']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the right of'], 'object': ['2351018', '3070705', '3070702', '2790660', '3063157']}, {'name': ['to the right of'], 'object': ['2735257']}, {'name': ['to the right of', 'to the right of', 'to the right of', 'to the right of', 'to the right of'], 'object': ['2351018', '2790660', '3070706', '3070705', '3063157']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the right of', 'to the right of', 'to the right of'], 'object': ['3070705', '2351018', '3070702', '3070706', '3063157', '2125407', '2037127', '3288269']}, {'name': ['to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the right of', 'to the left of', 'to the right of'], 'object': ['2037127', '3070706', '3070702', '2912743', '3288269', '2790660', '2125407']}, {'name': ['to the left of', 'to the right of'], 'object': ['2863115', '2734378']}, {'name': ['to the left of', 'to the left of', 'to the left of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['3070705', '2351018', '3063157', '2125407', '2790660', 
'2863115']}, {'name': ['to the left of', 'to the right of', 'to the left of'], 'object': ['2125407', '2734378', '3288269']}, {'name': ['to the left of', 'on', 'to the left of'], 'object': ['2351018', '3288269', '3063157']}, {'name': ['to the left of', 'to the left of', 'to the right of', 'to the left of', 'to the right of', 'to the left of'], 'object': ['3063157', '2351018', '2037127', '3070705', '2392939', '2790660']}, {'name': ['to the left of', 'to the left of'], 'object': ['2416713', '3288269']}, {'name': ['to the right of'], 'object': ['3070704']}, {'name': ['to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the left of', 'to the right of', 'walking down'], 'object': ['2037127', '2790660', '2125407', '3070705', '3070706', '2912743', '3070702', '3288269', '3421201']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['2392939', '2734378', '2790660', '2735257', '3063157', '3070705', '2351018', '2863115']}, {'name': [], 'object': []}, {'name': ['of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['2037127', '2354960', '3288269', '2392939']}, {'name': [], 'object': []}, {'name': ['to the right of', 'to the right of', 'to the right of'], 'object': ['2037127', '3288269', '2354960']}]}}\r\n```\r\nExpected output of image_metadata would be : \r\n```\r\n{'height': 281,\r\n 'location': None,\r\n 'objects': [{'attributes': [],\r\n 'h': 126,\r\n 'name': 'bicycle',\r\n 'object_id': '3070704',\r\n 'relations': [{'name': 'to the left of', 'object': '3260906'}],\r\n 'w': 197,\r\n 'x': 137,\r\n 'y': 116},\r\n {'attributes': [],\r\n 'h': 25,\r\n 'name': 'bicycle',\r\n 'object_id': '3070705',\r\n 'relations': [{'name': 'to the left of', 'object': '3070706'},\r\n {'name': 'to the right of', 'object': '2351018'},\r\n {'name': 'to the right of', 'object': '2125407'},\r\n {'name': 'to the left of', 'object': '2790660'},\r\n {'name': 'to the right of', 'object': '2037127'},\r\n {'name': 'to the left of', 'object': '3070702'},\r\n {'name': 'to the right of', 'object': '3288269'}],\r\n 'w': 27,\r\n 'x': 371,\r\n 'y': 86},\r\n {'attributes': ['white'],\r\n 'h': 252,\r\n 'name': 'cat',\r\n 'object_id': '3288269',\r\n 'relations': [{'name': 'to the right of', 'object': '2392939'},\r\n {'name': 'to the right of', 'object': '2734378'},\r\n {'name': 'to the left of', 'object': '2790660'},\r\n {'name': 'to the right of', 'object': '2735257'},\r\n {'name': 'to the left of', 'object': '3063157'},\r\n {'name': 'to the left of', 'object': '3070705'},\r\n {'name': 'to the left of', 'object': '2351018'},\r\n {'name': 'to the left of', 'object': '2863115'}],\r\n 'w': 116,\r\n 'x': 187,\r\n 'y': 22},\r\n {'attributes': ['white'],\r\n 'h': 46,\r\n 'name': 'boot',\r\n 'object_id': '3699852',\r\n 'relations': [],\r\n 'w': 53,\r\n 'x': 210,\r\n 'y': 230},\r\n .\r\n .\r\n .\r\n {'attributes': ['large', 'black'],\r\n 'h': 147,\r\n 'name': 'door',\r\n 'object_id': '2734378',\r\n 'relations': [{'name': 'of', 'object': '2037127'},\r\n {'name': 'to the left of', 'object': '2354960'},\r\n {'name': 'to the left of', 'object': '3288269'},\r\n {'name': 'to the left of', 'object': '2392939'}],\r\n 'w': 107,\r\n 'x': 26,\r\n 'y': 0},\r\n {'attributes': ['brick'],\r\n 'h': 162,\r\n 'name': 'ground',\r\n 'object_id': '3421201',\r\n 'relations': [],\r\n 'w': 500,\r\n 'x': 0,\r\n 'y': 119},\r\n {'attributes': [],\r\n 'h': 77,\r\n 'name': 
'building',\r\n 'object_id': '2863115',\r\n 'relations': [{'name': 'to the right of', 'object': '2037127'},\r\n {'name': 'to the right of', 'object': '3288269'},\r\n {'name': 'to the right of', 'object': '2354960'}],\r\n 'w': 55,\r\n 'x': 313,\r\n 'y': 0}],\r\n 'weather': None,\r\n 'width': 500}\r\n\r\n```\r\n\r\nMay I know how to get the list of dicts representation correctly ?\r\n\r\n---\r\nTo-Do : \r\n\r\n- [x] Generate dataset_infos.json file.\r\n- [x] Add β€œDataset Usage” section in the cards and write about the config parameters. \r\n- [x] Add the constants that can be tuned as config params.\r\n", "> Problem is that objects is not being shown as list of dicts. The output looks as follows :\r\n\r\nThat's expected. We convert a sequence of dictionaries to a dictionary of sequences to keep the formatting aligned with Tensorflow Datasets. You could disable this behavior by replacing `\"objects\": datasets.Sequence(object_fields_dict)` with `\"objects\": [object_fields_dict]`, but that's not what we usually do, so let's keep it like that. \r\n\r\nAlso, to limit the size of the dataset repo, please remove the pushed images and pass URLs to the images instead under the `src` attribute (and specify `alt` in case the URLs go down).\r\n\r\nI'll do a proper review again after you are finished with the dummy data.", "> That's expected.\r\n\r\nOkay. Got it. Thanks. I thought I was doing something wrong.\r\n\r\n> Also, to limit the size of the dataset repo, please remove the pushed images and pass URLs to the images instead under the src attribute (and specify alt in case the URLs go down).\r\n\r\nSure. Where do we host these images ? Can I upload them to any free image hosting platform or is there any particular website you use ?\r\n\r\n> I'll do a proper review again after you are finished with the dummy data.\r\n\r\nSure. Thanks. I'm working on this part. Will update you.\r\n", "Update : \r\n- I have generated the dataset_infos.json file.\r\n\r\n> I suggest you try to generate the dataset_infos.json file first, and then I can help with the dummy data.\r\n\r\nI am having issues creating the dummy data. I get the following which I use the command : \r\n\r\n`datasets-cli dummy_data datasets/metashift`\r\n\r\n```\r\nDataset metashift with config MetashiftConfig(name='metashift', version=1.0.0, data_dir=None, data_files=None, description=None) seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data/full-candidate-subsets.pkl.\r\nTraceback (most recent call last):\r\n File \"datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/datasets/commands/dummy_data.py\", line 324, in run\r\n dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager\r\n File \"/datasets/commands/dummy_data.py\", line 407, in _print_dummy_data_instructions\r\n for split in generator_splits:\r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```", "> Feel free to host the images online (on imgur for example) :)\r\n\r\nSure. Will do that.\r\n\r\nThanks for the explanation regarding the dummy data zip files. 
I will try it out and let you know.", "Instead of uploading the images to a hosting service, you can directly reference their GitHub URLs (open the image in the MetaShift repo -> click Download -> copy the image URL). For instance, this is the URL of one of the images:`https://raw.githubusercontent.com/Weixin-Liang/MetaShift/main/docs/figures/Cat-MetaGraph.jpg`. Also, feel free to replace `main` with the most recent commit hash in the copied URLs to make them more robust.", "@mariosasko I've actually created metagraphs for all the default classes other than those present in the GitHub Repo and included all of them. :) The Repo has them only for two classes.\r\n\r\nIn case we want to limit the no.of meta graphs included, we can stick to the github URLs from the repo itself.\r\n", "Update : \r\n- I could add the dummy data and get the dummy data test to work. Since we have a preprocessing step on the dataset, one of the .pkl file size is on the higher side. This was done for the tests to pass. I hope that is okay. The dummy.zip file size is about 273K.\r\n\r\nTo-Do :\r\n- [x] Update Dataset Structure in the data cards to include Data Instances when config parameters are used.\r\n\r\nPlease could you review when you get time. Thank you.", "Thanks a lot for your suggestions, Mario. The thing I learnt from the review is that I need to make better sentence formations. I will keep this in mind. :) ", "Thanks a lot for your support. @mariosasko and @lhoestq .\r\n\r\n> Super impressed by your work on this, congrats :)\r\n\r\nIts my first dataset contribution to the πŸ€— Datasets library, I'm super excited. Thank you. :)\r\n\r\nAlso, I think we can close this request issue now, [#3813](https://github.com/huggingface/datasets/issues/3813)" ]
"2022-03-12T08:44:18"
"2022-04-01T16:59:48"
"2022-04-01T15:16:30"
CONTRIBUTOR
null
This PR adds the MetaShift dataset. Dataset Request : Add MetaShift dataset [#3813](https://github.com/huggingface/datasets/issues/3813) @lhoestq As discussed, - I have copied the preprocessing script and modified it as required to not create new directories and folders and instead yield the images. - I do the preprocessing in _split_generators to get the required data, which is then passed to _generate_examples. - Beyond the generated MetaShift dataset, the original preprocessing script also generates the meta-graphs for each class; I have currently not included this part. [ Ref : [Link](https://github.com/Weixin-Liang/MetaShift#generate-full-metashift) ] - There is a Bonus section that the authors share. I have currently not included this part. [ Ref : [Link](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) ] - I had a basic test script which downloaded the dataset and tested the basic functionality. Things seem fine. For real data, I performed the following test : ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_metashift ============================================== test session starts =============================================== platform linux -- Python 3.7.11, pytest-7.0.1, pluggy-1.0.0 rootdir: ./datasets plugins: hydra-core-1.1.1, datadir-1.3.1, forked-1.4.0, xdist-2.5.0 collected 1 item tests/test_dataset_common.py . [100%] ========================================= 1 passed in 4821.25s (1:20:21) ========================================= ``` - I couldn't get the dummy dataset to work. Need some inputs here. Error as follows : ``` Using custom data configuration default Dataset metashift with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data/full-candidate-subsets.pkl. for split in generator_splits: UnboundLocalError: local variable 'generator_splits' referenced before assignment ``` To-Do : - [x] Currently I am using the default _SELECTED_CLASSES. I need to use a config option here as suggested - [x] Complete fields in the Dataset Card. - [x] Tagging the dataset using the Datasets Tagging app. Need your help and suggestions for improvement. Thank you
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3900/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3900/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3900", "html_url": "https://github.com/huggingface/datasets/pull/3900", "diff_url": "https://github.com/huggingface/datasets/pull/3900.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3900.patch", "merged_at": "2022-04-01T15:16:30" }
true
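Worth making concrete from the review thread above: `datasets` stores a `Sequence` of dictionaries column-wise, as a dictionary of lists (the TFDS-aligned behavior Mario describes). Here is a self-contained sketch with a toy schema; the feature names are illustrative, not MetaShift's actual ones:

```python
from datasets import Dataset, Features, Sequence, Value

# Toy two-field schema standing in for MetaShift's "objects" feature.
features = Features(
    {"objects": Sequence({"name": Value("string"), "x": Value("int64")})}
)

ds = Dataset.from_dict(
    {"objects": [[{"name": "cat", "x": 1}, {"name": "dog", "x": 2}]]},  # one row
    features=features,
)

# The sequence of dicts comes back as a dict of lists:
print(ds[0]["objects"])  # {'name': ['cat', 'dog'], 'x': [1, 2]}
```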
https://api.github.com/repos/huggingface/datasets/issues/3899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3899/comments
https://api.github.com/repos/huggingface/datasets/issues/3899/events
https://github.com/huggingface/datasets/pull/3899
1,166,931,812
PR_kwDODunzps40UzR3
3,899
Add exact match metric
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
"2022-03-11T22:21:40"
"2022-03-21T16:10:03"
"2022-03-21T16:05:35"
CONTRIBUTOR
null
Adding the exact match metric and its metric card. Note: Some of the tests have failed, but I wanted to make a PR anyway so that the rest of the code can be reviewed if anyone has time. I'll look into + work on fixing the failed tests when I'm back online after the weekend
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3899/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3899", "html_url": "https://github.com/huggingface/datasets/pull/3899", "diff_url": "https://github.com/huggingface/datasets/pull/3899.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3899.patch", "merged_at": "2022-03-21T16:05:34" }
true
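For readers landing on the record above, a usage sketch of the metric once merged; `load_metric` was the API at the time (later superseded by the standalone `evaluate` library), and the example strings are mine:

```python
from datasets import load_metric  # deprecated in later releases in favor of `evaluate`

exact_match = load_metric("exact_match")
results = exact_match.compute(
    predictions=["the cat sat on the mat", "hello world"],
    references=["the cat sat on the mat", "hello there"],
)
# One of the two predictions matches its reference exactly.
print(results["exact_match"])
```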
https://api.github.com/repos/huggingface/datasets/issues/3898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3898/comments
https://api.github.com/repos/huggingface/datasets/issues/3898/events
https://github.com/huggingface/datasets/pull/3898
1,166,778,250
PR_kwDODunzps40UWG4
3,898
Create README.md for WER metric
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3898). All of your documentation changes will be reflected on that endpoint.", "For ASR you can probably ping @patrickvonplaten ", "Ah only noticed now that ` # Values from popular papers` is from a template. @lhoestq @sashavor - not really sure if this section is useful in general really. \r\n\r\nIMO, it's more confusing/misleading than it helps. E.g. a value of 0.03 WER on a fake read-out audio dataset is not better than a WER of 0.3 on a real-world noisy, conversational audio dataset. I think the same holds true for other metrics no? I can think of very little metrics where a metric value is not dataset dependent. E.g. perplexity is super dataset dependent, summarization metrics like ROUGE as well, ...\r\n\r\nAlso, I don't really see what this section tries to achieve - is the idea here to give the reader some papers that use this metric to better understand in which context it is used? Should we maybe rename the section to `Popular papers making use of this metric` or something? \r\n\r\n", "I put \"Values from popular papers\" as a subsection of \"Output values\" -- I hope that's a compromise that works for everyone :hugs: " ]
"2022-03-11T19:29:09"
"2022-03-15T17:05:00"
"2022-03-15T17:04:59"
NONE
null
Proposing a draft WER metric card, @lhoestq I'm not very certain about "Values from popular papers" -- I don't know ASR very well, what do you think of the examples I found?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3898/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3898", "html_url": "https://github.com/huggingface/datasets/pull/3898", "diff_url": "https://github.com/huggingface/datasets/pull/3898.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3898.patch", "merged_at": "2022-03-15T17:04:59" }
true
https://api.github.com/repos/huggingface/datasets/issues/3897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3897/comments
https://api.github.com/repos/huggingface/datasets/issues/3897/events
https://github.com/huggingface/datasets/pull/3897
1,166,715,104
PR_kwDODunzps40UJH4
3,897
Align tqdm control/cache control with Transformers
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3897). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-11T18:12:22"
"2022-03-14T15:01:10"
"2022-03-14T15:01:08"
CONTRIBUTOR
null
This PR: * aligns the `tqdm` logic with Transformers (follows https://github.com/huggingface/transformers/pull/15167) by moving the code to `utils/logging.py`, adding `enable_progress_bar`/`disable_progress_bar` and removing `set_progress_bar_enabled` (a note for @lhoestq: I'm not adding `logging.tqdm` to the public namespace in this PR to avoid the situation where `from datasets import *; tqdm` would overshadow the standard `tqdm`) * aligns the cache control with the new `tqdm` logic by adding `enable_caching`/`disable_caching` to the public namespace and deprecating `set_caching_enabled` (not fully removing it because it's used more often than `set_progress_bar_enabled` and has a dedicated example in the old docs) Fix #3586
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3897/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3897/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3897", "html_url": "https://github.com/huggingface/datasets/pull/3897", "diff_url": "https://github.com/huggingface/datasets/pull/3897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3897.patch", "merged_at": "2022-03-14T15:01:08" }
true
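To make the API change above concrete, a sketch of the new toggles; the dataset name is arbitrary, and exact import paths may vary slightly across `datasets` releases:

```python
import datasets
from datasets import load_dataset

datasets.disable_progress_bar()  # replaces the old set_progress_bar_enabled(False)
datasets.disable_caching()       # replaces the deprecated set_caching_enabled(False)

ds = load_dataset("rotten_tomatoes", split="train")
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])})  # runs without tqdm bars

datasets.enable_progress_bar()   # restore the defaults afterwards
datasets.enable_caching()
```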
https://api.github.com/repos/huggingface/datasets/issues/3896
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3896/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3896/comments
https://api.github.com/repos/huggingface/datasets/issues/3896/events
https://github.com/huggingface/datasets/issues/3896
1,166,628,270
I_kwDODunzps5FiVWu
3,896
Missing google file for `multi_news` dataset
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
null
[]
null
[ "reported by @abidlabs ", "related to https://github.com/huggingface/datasets/pull/3843?", "`datasets` 1.18.4 fixes the issue when you load the dataset with `load_dataset`.\r\n\r\nWhen loading in streaming mode, the fix is indeed on https://github.com/huggingface/datasets/pull/3843 which will be merged soon :)", "That is. The PR #3843 was just opened a bit later we had made our 1.18.4 patch release...\r\nOnce merged, that will fix this issue. ", "OK. Should fix the viewer for 50 datasets\r\n\r\n<img width=\"148\" alt=\"Capture d’écran 2022-03-14 aΜ€ 11 51 02\" src=\"https://user-images.githubusercontent.com/1676121/158157853-6c544a47-2d6d-4ac4-964a-6f10951ec36b.png\">\r\n" ]
"2022-03-11T16:38:10"
"2022-03-15T12:30:23"
"2022-03-15T12:30:23"
CONTRIBUTOR
null
## Dataset viewer issue for '*multi_news*' **Link:** https://huggingface.co/datasets/multi_news ``` Server error Status code: 400 Exception: FileNotFoundError Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src ``` Am I the one who added this dataset ? No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3896/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3896/timeline
null
completed
null
null
false
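Once the fixes referenced in the comments landed (`datasets` 1.18.4 for regular loading, PR #3843 for streaming), both loading modes should work; a quick sketch:

```python
from datasets import load_dataset

# Regular load, fixed in datasets 1.18.4 according to the thread above.
ds = load_dataset("multi_news", split="train")

# Streaming load, which needed the follow-up fix in PR #3843.
streamed = load_dataset("multi_news", split="train", streaming=True)
first = next(iter(streamed))
print(sorted(first))  # expect the 'document' and 'summary' columns
```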
https://api.github.com/repos/huggingface/datasets/issues/3895
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3895/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3895/comments
https://api.github.com/repos/huggingface/datasets/issues/3895/events
https://github.com/huggingface/datasets/pull/3895
1,166,619,182
PR_kwDODunzps40T1C8
3,895
Fix code examples indentation
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895). All of your documentation changes will be reflected on that endpoint.", "Still not rendered properly: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895/en/package_reference/main_classes#datasets.Dataset.align_labels_with_mapping", "My last commit should have fixed it, I don't know why the dev doc build is not showing my last changes", "Let me merge this and we can see on `master` how it renders, until the dev doc build is fixed" ]
"2022-03-11T16:29:04"
"2022-03-11T17:34:30"
"2022-03-11T17:34:29"
MEMBER
null
Some code examples are currently not rendered correctly. I think this is because they are over-indented cc @mariosasko
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3895/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3895/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3895", "html_url": "https://github.com/huggingface/datasets/pull/3895", "diff_url": "https://github.com/huggingface/datasets/pull/3895.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3895.patch", "merged_at": "2022-03-11T17:34:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/3894
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3894/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3894/comments
https://api.github.com/repos/huggingface/datasets/issues/3894/events
https://github.com/huggingface/datasets/pull/3894
1,166,611,270
PR_kwDODunzps40TzXW
3,894
[docs] make dummy data creation optional
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3894). All of your documentation changes will be reflected on that endpoint.", "The dev doc build rendering doesn't seem to be updated with my last commit for some reason", "Merging it anyway since I'd like to share this page with users πŸ™ƒ " ]
"2022-03-11T16:21:34"
"2022-03-11T17:27:56"
"2022-03-11T17:27:55"
MEMBER
null
Related to #3507 : dummy data for datasets created on the Hugging Face Hub is optional. We can discuss later whether to make it optional for datasets in this repository as well
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3894/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3894/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3894", "html_url": "https://github.com/huggingface/datasets/pull/3894", "diff_url": "https://github.com/huggingface/datasets/pull/3894.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3894.patch", "merged_at": "2022-03-11T17:27:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/3893
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3893/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3893/comments
https://api.github.com/repos/huggingface/datasets/issues/3893/events
https://github.com/huggingface/datasets/pull/3893
1,166,551,684
PR_kwDODunzps40TmxB
3,893
Add default branch for doc building
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3893). All of your documentation changes will be reflected on that endpoint.", "Yes! And when we discovered on the Transformers side that this check fails on the GitHub actions, we added a config attribute to have a default. Setting in Transformers fixed the issue of the doc being deployed to main, so porting the fix here too :-)" ]
"2022-03-11T15:24:27"
"2022-03-11T15:34:35"
"2022-03-11T15:34:34"
CONTRIBUTOR
null
Since other libraries use `main` as their default branch and it's now the standard default, you have to specify a different name in the doc config if you're using `master` like datasets (`doc-builder` tries to guess it, but in the job, we have weird checkouts of merge commits so it doesn't always manage to get it right). This PR makes sure it will always use master for the dev doc (until you decide to switch to `main`)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3893/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3893/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3893", "html_url": "https://github.com/huggingface/datasets/pull/3893", "diff_url": "https://github.com/huggingface/datasets/pull/3893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3893.patch", "merged_at": "2022-03-11T15:34:34" }
true
https://api.github.com/repos/huggingface/datasets/issues/3892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3892/comments
https://api.github.com/repos/huggingface/datasets/issues/3892/events
https://github.com/huggingface/datasets/pull/3892
1,166,227,003
PR_kwDODunzps40ShYB
3,892
Fix CLI test checksums
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3892). All of your documentation changes will be reflected on that endpoint.", "Feel free to merge if it's good for you :)", "I've added a test @lhoestq. Once all green, I'll merge. ", "Last failing tests do not have nothing to do with this PR." ]
"2022-03-11T10:04:04"
"2022-03-15T12:28:24"
"2022-03-15T12:28:23"
MEMBER
null
Previous PR: - #3796 introduced a side effect: `datasets-cli test` generates `dataset_infos.json` with `None` checksum values. See: - #3805 This PR introduces a way for `datasets-cli test` to force recording infos, even if `verify_infos=False` Close #3848. CC: @craffel
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3892/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3892/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3892", "html_url": "https://github.com/huggingface/datasets/pull/3892", "diff_url": "https://github.com/huggingface/datasets/pull/3892.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3892.patch", "merged_at": "2022-03-15T12:28:23" }
true
https://api.github.com/repos/huggingface/datasets/issues/3891
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3891/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3891/comments
https://api.github.com/repos/huggingface/datasets/issues/3891/events
https://github.com/huggingface/datasets/pull/3891
1,165,503,732
PR_kwDODunzps40QKIG
3,891
Fix race condition in doc build
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3891). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-10T17:17:10"
"2022-03-10T17:23:00"
"2022-03-10T17:17:30"
MEMBER
null
Following https://github.com/huggingface/datasets/runs/5499386744 it seems that race conditions create issues when updating the doc. I took the same approach as in `transformers` to fix these race conditions
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3891/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3891/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3891", "html_url": "https://github.com/huggingface/datasets/pull/3891", "diff_url": "https://github.com/huggingface/datasets/pull/3891.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3891.patch", "merged_at": "2022-03-10T17:17:30" }
true
https://api.github.com/repos/huggingface/datasets/issues/3890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3890/comments
https://api.github.com/repos/huggingface/datasets/issues/3890/events
https://github.com/huggingface/datasets/pull/3890
1,165,502,838
PR_kwDODunzps40QJ8V
3,890
Update beans download urls
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3890). All of your documentation changes will be reflected on that endpoint.", "@albertvillanova Thanks for investigating and fixing that issue. I regenerated the `dataset_infos.json` file." ]
"2022-03-10T17:16:16"
"2022-03-15T16:47:30"
"2022-03-15T15:26:48"
CONTRIBUTOR
null
Replace the old URLs with the Hub [URLs](https://huggingface.co/datasets/beans/tree/main/data). Also reported by @stevhliu. Fix #3889
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3890/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3890/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3890", "html_url": "https://github.com/huggingface/datasets/pull/3890", "diff_url": "https://github.com/huggingface/datasets/pull/3890.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3890.patch", "merged_at": "2022-03-15T15:26:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/3889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3889/comments
https://api.github.com/repos/huggingface/datasets/issues/3889/events
https://github.com/huggingface/datasets/issues/3889
1,165,456,083
I_kwDODunzps5Fd3LT
3,889
Cannot load beans dataset (Couldn't reach the dataset)
{ "login": "ivsanro1", "id": 30293331, "node_id": "MDQ6VXNlcjMwMjkzMzMx", "avatar_url": "https://avatars.githubusercontent.com/u/30293331?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ivsanro1", "html_url": "https://github.com/ivsanro1", "followers_url": "https://api.github.com/users/ivsanro1/followers", "following_url": "https://api.github.com/users/ivsanro1/following{/other_user}", "gists_url": "https://api.github.com/users/ivsanro1/gists{/gist_id}", "starred_url": "https://api.github.com/users/ivsanro1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ivsanro1/subscriptions", "organizations_url": "https://api.github.com/users/ivsanro1/orgs", "repos_url": "https://api.github.com/users/ivsanro1/repos", "events_url": "https://api.github.com/users/ivsanro1/events{/privacy}", "received_events_url": "https://api.github.com/users/ivsanro1/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Hi ! A pull request is open to fix the dataset, we'll release a patch soon with a new release of `datasets` :)" ]
"2022-03-10T16:34:08"
"2022-03-15T15:26:47"
"2022-03-15T15:26:47"
NONE
null
## Describe the bug The beans dataset is unavailable for download. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('beans') ``` ## Expected results The dataset would be downloaded with no issue. ## Actual results ``` ConnectionError: Couldn't reach https://storage.googleapis.com/ibeans/train.zip (error 403) ``` [It looks like the billing of this project has been disabled because it is associated with a delinquent account.](https://storage.googleapis.com/ibeans/train.zip ) ## Environment info Google Colab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3889/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3888/comments
https://api.github.com/repos/huggingface/datasets/issues/3888/events
https://github.com/huggingface/datasets/issues/3888
1,165,435,529
I_kwDODunzps5FdyKJ
3,888
IterableDataset columns and feature types
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
open
false
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false } ]
null
[ "#self-assign", "@alvarobartt I've assigned you the issue since I'm not actively working on it.", "Cool thanks @mariosasko I'll try to fix it in the upcoming days, thanks!", "@lhoestq so in order to address what’s not completed in this issue, do you think it makes sense to add a param `features` to `IterableDataset.map` so that the output features right after the `map` are defined there? ", "Yes that would be ideal IMO, thanks again for the help :)", "@lhoestq cool then if you agree I can work on that! I’ll also update the docs accordingly once done, thanks!", "I've already started with a PR as a draft @lhoestq, should we also try to look for a way to explicitly request pre-fetching right after a map operation is applied, so that the features are inferred if the user says explicitly so? Thanks!", "> should we also try to look for a way to explicitly request pre-fetching right after a map operation is applied, so that the features are inferred if the user says explicitly so?\r\n\r\nRight now one can use `ds = ds._resolve_features()` do to so. It can be used after `map` or `load_dataset` if the features are not known. Maybe we can make this method public ?" ]
"2022-03-10T16:19:12"
"2022-11-29T11:39:24"
null
MEMBER
null
Right now, an IterableDataset (e.g. when streaming a dataset) doesn't need to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`. However, it's often interesting to know the column names and types. This helps you know what's inside your dataset without having to manually check a few examples, and it is useful to prepare a processing pipeline or to train models. Here are a few cases that lead to `features` being `None`: 1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset 2. when calling `map`, because we don't know in advance the output of the user's function passed to `map` 3. when calling `rename_columns`, `remove_columns`, etc., because they rely on `map` Things we can consider, for each point above: 1.a infer the type automatically from the first samples of the dataset using prefetching, when the dataset builder doesn't provide the `features` 2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API) 2.b prefetch the first output value to infer the type 3.a don't rely on `map` directly and instead reuse the previous `features`, renaming/removing the corresponding ones The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data is downloaded. Therefore I'm not sure whether this solution is worth it. Maybe prefetching could also be done when explicitly asked by the user. cc @mariosasko @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3888/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3888/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3887
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3887/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3887/comments
https://api.github.com/repos/huggingface/datasets/issues/3887/events
https://github.com/huggingface/datasets/pull/3887
1,165,380,852
PR_kwDODunzps40PwqT
3,887
ImageFolder improvements
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3887). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-10T15:34:46"
"2022-03-11T15:06:11"
"2022-03-11T15:06:11"
CONTRIBUTOR
null
This PR adds the following improvements to the `imagefolder` dataset: * skip the extract step for image files (as discussed in https://github.com/huggingface/datasets/pull/2830#discussion_r816817919) * option to drop labels by setting `drop_labels=True` (useful for image pretraining cc @NielsRogge). This is faster than loading a dataset and removing the `label` column because we don't need to iterate over the files to infer class labels.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3887/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3887/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3887", "html_url": "https://github.com/huggingface/datasets/pull/3887", "diff_url": "https://github.com/huggingface/datasets/pull/3887.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3887.patch", "merged_at": "2022-03-11T15:06:11" }
true
https://api.github.com/repos/huggingface/datasets/issues/3886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3886/comments
https://api.github.com/repos/huggingface/datasets/issues/3886/events
https://github.com/huggingface/datasets/pull/3886
1,165,223,319
PR_kwDODunzps40PO6W
3,886
Retry HfApi call inside push_to_hub when 504 error
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3886). All of your documentation changes will be reflected on that endpoint.", "I made it more robust by increasing the wait time, and I also added some logs when a request is retried. Let me know if it's ok for you", "At the end you did not set the agreed max value of 60s. \r\n\r\nMoreover, with the new numbers, there is a slight contradiction: although you set max_retries=5, we will only make 4 retries at most because of the combined values of `base_wait_time` and `max_wait_time`.", "Yea I thought that in total we could wait 1min, but if we have a max_wait_time of 20sec between each request it's fine IMO\r\n\r\n> Moreover, with the new numbers, there is a slight contradiction: although you set max_retries=5, we will only make 4 retries at most because of the combined values of base_wait_time and max_wait_time.\r\n\r\nWhat makes you think this ? If the exponential wait time becomes bigger than `max_wait_time` then it still does the retry, but after a wait time of `max_wait_time`", "Sorry, I meant 4 retries **with exponential backoff**; the fifth one is with constant backoff.", "OK, and one question: do you think that the retries do not affect the time the server needs to be operational again and able to process the request? I guess that if does not affect, then the cause are other users' requests, or others; not our specific request.\r\n\r\nJust to be sure: \r\n- Then 20s at most between consecutive requests do not impact the server.\r\n- And we expect after a total of 5 retries (within a total 50s of wait time + request processing/uploading time), the server should be able to come back to normality.", "> do you think that the retries do not affect the time the server needs for being able to process the request (I guess in this case the cause are other users' requests, or other causes; not our specific request).\r\n\r\nYes I don't think the retries would affect the server, I think the cause of the 504 errors is elsewhere\r\n\r\n> Just to be sure:\r\n>\r\n> Then 20s at most between consecutive requests do not impact the server.\r\n> And we expect after a total of 5 retries (within a total 50s of wait time + request processing/uploading time), the server should be able to come back to normality.\r\n\r\nYes I think it's fine for now, we can still adapt this later if needed", "Will be curious to see the impact of this in terms of upload reliability! Don't forget to let us know when you have more data. cc @huggingface/moon-landing-back " ]
"2022-03-10T13:24:40"
"2022-03-16T09:00:56"
"2022-03-15T16:19:50"
MEMBER
null
As suggested by @lhoestq in #3872, this PR: - Implements a retry function - Retries the HfApi call inside `push_to_hub` on a 504 error. To be agreed: - max_retries = 2 (at 0.5 and 1 seconds) Fix #3872.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3886/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3886/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3886", "html_url": "https://github.com/huggingface/datasets/pull/3886", "diff_url": "https://github.com/huggingface/datasets/pull/3886.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3886.patch", "merged_at": "2022-03-15T16:19:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/3885
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3885/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3885/comments
https://api.github.com/repos/huggingface/datasets/issues/3885/events
https://github.com/huggingface/datasets/pull/3885
1,165,102,209
PR_kwDODunzps40O00Z
3,885
Fix some shuffle docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3885). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-10T11:29:15"
"2022-03-10T14:16:29"
"2022-03-10T14:16:28"
MEMBER
null
Following #3842, some docs were still outdated (with `buffer_size` as the first argument)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3885/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3885/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3885", "html_url": "https://github.com/huggingface/datasets/pull/3885", "diff_url": "https://github.com/huggingface/datasets/pull/3885.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3885.patch", "merged_at": "2022-03-10T14:16:28" }
true
https://api.github.com/repos/huggingface/datasets/issues/3884
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3884/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3884/comments
https://api.github.com/repos/huggingface/datasets/issues/3884/events
https://github.com/huggingface/datasets/pull/3884
1,164,924,314
PR_kwDODunzps40OPM9
3,884
Fix bug in METEOR metric due to nltk version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3884). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-10T08:44:20"
"2022-03-10T09:03:40"
"2022-03-10T09:03:39"
MEMBER
null
Fix #3883.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3884/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3884/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3884", "html_url": "https://github.com/huggingface/datasets/pull/3884", "diff_url": "https://github.com/huggingface/datasets/pull/3884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3884.patch", "merged_at": "2022-03-10T09:03:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/3883
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3883/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3883/comments
https://api.github.com/repos/huggingface/datasets/issues/3883/events
https://github.com/huggingface/datasets/issues/3883
1,164,663,229
I_kwDODunzps5Fa1m9
3,883
The metric Meteor doesn't work for nltk == 3.6.4
{ "login": "zhaowei-wang-nlp", "id": 22047467, "node_id": "MDQ6VXNlcjIyMDQ3NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhaowei-wang-nlp", "html_url": "https://github.com/zhaowei-wang-nlp", "followers_url": "https://api.github.com/users/zhaowei-wang-nlp/followers", "following_url": "https://api.github.com/users/zhaowei-wang-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/zhaowei-wang-nlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhaowei-wang-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhaowei-wang-nlp/subscriptions", "organizations_url": "https://api.github.com/users/zhaowei-wang-nlp/orgs", "repos_url": "https://api.github.com/users/zhaowei-wang-nlp/repos", "events_url": "https://api.github.com/users/zhaowei-wang-nlp/events{/privacy}", "received_events_url": "https://api.github.com/users/zhaowei-wang-nlp/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @zhaowei-wang98, thanks for reporting.\r\n\r\nWe are fixing it... " ]
"2022-03-10T02:28:27"
"2022-03-10T09:03:39"
"2022-03-10T09:03:39"
NONE
null
## Describe the bug Using the metric Meteor with nltk == 3.6.4 gives a TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object ## Steps to reproduce the bug ```python import datasets metric = datasets.load_metric("meteor") predictions = ["hello world"] references = ["hello world"] metric.compute(predictions=predictions, references=references) ``` ## Expected results No error, but a meteor score ## Actual results TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object I think this TypeError exists because input sentences are tokenized into lists of tokens and str.lower() is applied to this list of tokens. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: linux - Python version: 3.8.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3883/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3883/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3882
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3882/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3882/comments
https://api.github.com/repos/huggingface/datasets/issues/3882/events
https://github.com/huggingface/datasets/pull/3882
1,164,595,388
PR_kwDODunzps40NKz7
3,882
Image process doc
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3882). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-10T00:32:10"
"2022-03-15T15:24:16"
"2022-03-15T15:24:09"
MEMBER
null
This PR is a first draft of how to process image data. It adds: - Load an image dataset with `image` and `path` (adds tip about `decode=False` param to access the path and bytes, thanks to @mariosasko). - Load an image using the `ImageFolder` builder. I know there is an [example](https://huggingface.co/docs/datasets/master/en/loading#image-folders) of this already, but I also wanted to add it here so users don't miss it. This doc seems important for centralizing all of the image-related things so far. Datasets has grown so quickly πŸš€ now that I think maybe splitting up the How-to guides by modality may be better since working with vision/audio data is slightly different from what users have seen up until now. This way we can continue to scale the docs to better accommodate vision/audio things. - Add a data augmentation with `set_transform`. There is only 1 example here so far, but we can certainly add more. Todo: - [x] Couldn't figure out why my augmentation function works with `set_transform` but not `map` πŸ₯². Working with @mariosasko on this!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3882/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3882/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3882", "html_url": "https://github.com/huggingface/datasets/pull/3882", "diff_url": "https://github.com/huggingface/datasets/pull/3882.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3882.patch", "merged_at": "2022-03-15T15:24:09" }
true
https://api.github.com/repos/huggingface/datasets/issues/3881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3881/comments
https://api.github.com/repos/huggingface/datasets/issues/3881/events
https://github.com/huggingface/datasets/issues/3881
1,164,452,005
I_kwDODunzps5FaCCl
3,881
How to use Image folder
{ "login": "INF800", "id": 45640029, "node_id": "MDQ6VXNlcjQ1NjQwMDI5", "avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4", "gravatar_id": "", "url": "https://api.github.com/users/INF800", "html_url": "https://github.com/INF800", "followers_url": "https://api.github.com/users/INF800/followers", "following_url": "https://api.github.com/users/INF800/following{/other_user}", "gists_url": "https://api.github.com/users/INF800/gists{/gist_id}", "starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/INF800/subscriptions", "organizations_url": "https://api.github.com/users/INF800/orgs", "repos_url": "https://api.github.com/users/INF800/repos", "events_url": "https://api.github.com/users/INF800/events{/privacy}", "received_events_url": "https://api.github.com/users/INF800/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "Even this from docs throw same error\r\n```\r\ndataset = load_dataset(\"imagefolder\", data_files=\"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\", split=\"train\")\r\n\r\n```", "Hi @INF800,\r\n\r\nPlease note that the `imagefolder` feature enhancement was just recently merged to our master branch (https://github.com/huggingface/datasets/commit/207be676bffe9d164740a41a883af6125edef135), but has not yet been released.\r\n\r\nWe are planning to make the 2.0 release of our library in the coming days and then that feature will be available by updating your `datasets` library from PyPI.\r\n\r\nIn the meantime, you can incorporate that feature if you install our library from our GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n\r\nThen:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ds = load_dataset(\"imagefolder\", data_files=\"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\", split=\"train\")\r\nUsing custom data configuration default-7eb4e80d960deb18\r\nDownloading and preparing dataset image_folder/default to .../.cache/huggingface/datasets/image_folder/default-7eb4e80d960deb18/0.0.0/8de8dc6d68ce3c81cc102b93cc82ede27162b5d30cd003094f935942c8294f60...\r\nDownloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 690.19it/s]\r\nExtracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 852.85it/s]\r\nDataset image_folder downloaded and prepared to .../.cache/huggingface/datasets/image_folder/default-7eb4e80d960deb18/0.0.0/8de8dc6d68ce3c81cc102b93cc82ede27162b5d30cd003094f935942c8294f60. Subsequent calls will reuse this data.\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDataset({\r\n features: ['image', 'label'],\r\n num_rows: 25000\r\n})\r\n```", "Hey @albertvillanova. Does this load entire dataset in memory? Because I am facing huge trouble with loading very big datasets (OOM errors)", "Can you provide the error stack trace? The loader only stores the `data_files` dict, which can get big after globbing. Then, the OOM error would mean you don't have enough memory to keep all the paths to the image files. You can circumvent this by generating an archive and loading the dataset from there. Maybe we can optimize the globbing part in our data files resolution at some point, cc @lhoestq for visibility.", "Hey, memory error is resolved. It was fluke.\r\n\r\nBut there is another issue. 
Currently `load_dataset(\"imagefolder\", data_dir=\"./path/to/train\",)` takes only `train` as arg to `split` parameter.\r\n\r\nI am creating vaildation dataset using\r\n\r\n```\r\nds_valid = datasets.DatasetDict(valid=load_dataset(\"imagefolder\", data_dir=\"./path/to/valid\",)['train'])\r\n```", "`data_dir=\"path/to/folder\"` is a shorthand syntax fox `data_files={\"train\": \"path/to/folder/**\"}`, so use `data_files` in that case instead:\r\n```python\r\nds = load_dataset(\"imagefolder\", data_files={\"train\": \"path/to/train/**\", \"test\": \"path/to/test/**\", \"valid\": \"path/to/valid/**\"})\r\n```", "And there was another issue. I loaded black and white images (jpeg file). Using load dataset. It reads it as PIL jpeg data format. But instead of converting it into 3 channel tensor, input to collator function is coming as a single channel tensor.", "We don't apply any additional preprocessing on top of `PIL.Image.open(image_file)`, so you need to do the conversion yourself:\r\n\r\n```python\r\ndef to_rgb(batch):\r\n batch[\"image\"] = [img.convert(\"RGB\") for img in batch[\"image\"]]\r\n return batch\r\n\r\nds_rgb = ds.map(to_rgb, batched=True)\r\n```\r\n\r\nPlease use our Forum for questions of this kind in the future." ]
"2022-03-09T21:18:52"
"2022-03-11T08:45:52"
"2022-03-11T08:45:52"
NONE
null
Ran this code ``` load_dataset("imagefolder", data_dir="./my-dataset") ``` `https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` missing ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /tmp/ipykernel_33/1648737256.py in <module> ----> 1 load_dataset("imagefolder", data_dir="./my-dataset") /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1684 revision=revision, 1685 use_auth_token=use_auth_token, -> 1686 **config_kwargs, 1687 ) 1688 /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs) 1511 download_config.use_auth_token = use_auth_token 1512 dataset_module = dataset_module_factory( -> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files 1514 ) 1515 /opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs) 1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" -> 1202 ) from None 1203 raise e1 from None 1204 else: FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3881/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3881/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3880
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3880/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3880/comments
https://api.github.com/repos/huggingface/datasets/issues/3880/events
https://github.com/huggingface/datasets/pull/3880
1,164,406,008
PR_kwDODunzps40MjM3
3,880
Change the framework switches to the new syntax
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3880). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3880). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-09T20:29:10"
"2022-03-15T14:13:28"
"2022-03-15T14:13:27"
CONTRIBUTOR
null
This PR updates the syntax of the framework-specific code samples. With this new syntax, you'll be able to: - have paragraphs of text be framework-specific instead of just code samples - have support for Flax code samples if you want. This should be merged after https://github.com/huggingface/doc-builder/pull/63 and https://github.com/huggingface/doc-builder/pull/130
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3880/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3880/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3880", "html_url": "https://github.com/huggingface/datasets/pull/3880", "diff_url": "https://github.com/huggingface/datasets/pull/3880.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3880.patch", "merged_at": "2022-03-15T14:13:27" }
true
https://api.github.com/repos/huggingface/datasets/issues/3879
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3879/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3879/comments
https://api.github.com/repos/huggingface/datasets/issues/3879/events
https://github.com/huggingface/datasets/pull/3879
1,164,311,612
PR_kwDODunzps40MP7f
3,879
SQuAD v2 metric: create README.md
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3879). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-09T18:47:56"
"2022-03-10T16:48:59"
"2022-03-10T16:48:59"
NONE
null
Proposing SQuAD v2 metric card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3879/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3879/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3879", "html_url": "https://github.com/huggingface/datasets/pull/3879", "diff_url": "https://github.com/huggingface/datasets/pull/3879.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3879.patch", "merged_at": "2022-03-10T16:48:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/3878
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3878/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3878/comments
https://api.github.com/repos/huggingface/datasets/issues/3878/events
https://github.com/huggingface/datasets/pull/3878
1,164,305,335
PR_kwDODunzps40MOpn
3,878
Update cats_vs_dogs size
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3878). All of your documentation changes will be reflected on that endpoint.", "Maybe `NonMatchingSplitsSizesError` errors should also tell the user to try using a more recent version of the dataset to get the fixes ?", "@lhoestq Good idea. Will open a new PR to improve the error messages of NonMatchingSplitsSizesError, NonMatchingChecksumsError, ...", "It seems there is still a problem. I am using datasets version 2.5.1. \r\nI just typed `ds = load_dataset(\"cats_vs_dogs\")` and get the error below.\r\n\r\n```\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=3893603, num_examples=23422, dataset_name='cats_vs_dogs'), 'recorded': SplitInfo(name='train', num_bytes=3891612, num_examples=23410, dataset_name='cats_vs_dogs')}]\r\n```\r\nIt looks like the dataset still only has 23,410 examples....\r\n", "Thanks for reporting, I opened https://github.com/huggingface/datasets/pull/5047" ]
"2022-03-09T18:40:56"
"2022-09-30T08:47:43"
"2022-03-10T14:21:23"
CONTRIBUTOR
null
It seems like 12 new examples have been added to the `cats_vs_dogs` dataset. This PR updates the size in the card and the info file to avoid a verification error (reported by @stevhliu).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3878/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3878/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3878", "html_url": "https://github.com/huggingface/datasets/pull/3878", "diff_url": "https://github.com/huggingface/datasets/pull/3878.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3878.patch", "merged_at": "2022-03-10T14:21:23" }
true
https://api.github.com/repos/huggingface/datasets/issues/3877
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3877/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3877/comments
https://api.github.com/repos/huggingface/datasets/issues/3877/events
https://github.com/huggingface/datasets/issues/3877
1,164,146,311
I_kwDODunzps5FY3aH
3,877
Align metadata to DCAT/DCAT-AP
{ "login": "EmidioStani", "id": 278367, "node_id": "MDQ6VXNlcjI3ODM2Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/278367?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EmidioStani", "html_url": "https://github.com/EmidioStani", "followers_url": "https://api.github.com/users/EmidioStani/followers", "following_url": "https://api.github.com/users/EmidioStani/following{/other_user}", "gists_url": "https://api.github.com/users/EmidioStani/gists{/gist_id}", "starred_url": "https://api.github.com/users/EmidioStani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EmidioStani/subscriptions", "organizations_url": "https://api.github.com/users/EmidioStani/orgs", "repos_url": "https://api.github.com/users/EmidioStani/repos", "events_url": "https://api.github.com/users/EmidioStani/events{/privacy}", "received_events_url": "https://api.github.com/users/EmidioStani/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2022-03-09T16:12:25"
"2022-03-09T16:33:42"
null
NONE
null
**Is your feature request related to a problem? Please describe.** Align metadata to DCAT to describe datasets. **Describe the solution you'd like** Reuse terms and structure from DCAT in the metadata file; ideally, generate a DCAT-compliant JSON-LD file. **Describe alternatives you've considered** **Additional context** DCAT is a W3C standard, extended in Europe as DCAT-AP; for example, data.europa.eu publishes dataset metadata in DCAT-AP.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3877/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3877/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3876
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3876/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3876/comments
https://api.github.com/repos/huggingface/datasets/issues/3876/events
https://github.com/huggingface/datasets/pull/3876
1,164,045,075
PR_kwDODunzps40LYC8
3,876
Fix download_mode in dataset_module_factory
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3876). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-09T14:54:33"
"2022-03-10T08:47:00"
"2022-03-10T08:46:59"
MEMBER
null
Fix the `download_mode` value set in `dataset_module_factory`. Before the fix, it was set to a `bool` (defaulting to `False`). Also properly set its default value in all public functions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3876/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3876/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3876", "html_url": "https://github.com/huggingface/datasets/pull/3876", "diff_url": "https://github.com/huggingface/datasets/pull/3876.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3876.patch", "merged_at": "2022-03-10T08:46:59" }
true
https://api.github.com/repos/huggingface/datasets/issues/3875
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3875/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3875/comments
https://api.github.com/repos/huggingface/datasets/issues/3875/events
https://github.com/huggingface/datasets/pull/3875
1,164,029,673
PR_kwDODunzps40LUuw
3,875
Module namespace cleanup for v2.0
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "will it solve https://github.com/huggingface/datasets-preview-backend/blob/4c542a74244045929615640ccbba5a902c344c5a/pyproject.toml#L85-L89?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3875). All of your documentation changes will be reflected on that endpoint.", "@severo No, this PR doesn't fix that issue in the current state. We can fix it by adding `__all__` to `datasets/__init__.py` and `datasets/formatting/__init__.py`. However, this would require updating `__all__` for each new function/class definition, which could become cumbersome, and we can't do this dynamically because `mypy` is a static type checker.\r\n\r\n@lhoestq @albertvillanova WDYT?", "Feel free to merge this one if it's good for you :)" ]
"2022-03-09T14:43:07"
"2022-03-11T15:42:06"
"2022-03-11T15:42:05"
CONTRIBUTOR
null
This is an attempt to make the user-facing `datasets`' submodule namespace cleaner. In particular, this PR does the following: * removes the unused `zip_nested` and `flatten_nest_dict` and their accompanying tests * removes `pyarrow` from the top-level namespace * properly uses `__all__` and the `from <module> import *` syntax to avoid importing the `<module>`'s submodules * cleans up the `utils` namespace * moves the `temp_seed` context manager from `datasets/utils/file_utils.py` to `datasets/utils/py_utils.py`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3875/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3875/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3875", "html_url": "https://github.com/huggingface/datasets/pull/3875", "diff_url": "https://github.com/huggingface/datasets/pull/3875.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3875.patch", "merged_at": "2022-03-11T15:42:05" }
true
https://api.github.com/repos/huggingface/datasets/issues/3874
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3874/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3874/comments
https://api.github.com/repos/huggingface/datasets/issues/3874/events
https://github.com/huggingface/datasets/pull/3874
1,164,013,511
PR_kwDODunzps40LRYD
3,874
add MSE and MAE metrics - V2
{ "login": "dnaveenr", "id": 17746528, "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dnaveenr", "html_url": "https://github.com/dnaveenr", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "repos_url": "https://api.github.com/users/dnaveenr/repos", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@mariosasko New PR here. I'm not sure how to add you as a co-author here. Also I see flake8 tests are failing, any inputs on how to resolve this ?\r\nAlso, let me know if any other changes are required. Thank you.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3874). All of your documentation changes will be reflected on that endpoint.", "Great. Thank you.", "Thanks so much for this πŸ™ πŸ’― " ]
"2022-03-09T14:30:16"
"2022-03-09T17:20:42"
"2022-03-09T17:18:20"
CONTRIBUTOR
null
Created a new pull request to resolve unrelated changes in the previous PR caused by rebasing. Ref older PR: [#3845](https://github.com/huggingface/datasets/pull/3845) Feature request here: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3874/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3874", "html_url": "https://github.com/huggingface/datasets/pull/3874", "diff_url": "https://github.com/huggingface/datasets/pull/3874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3874.patch", "merged_at": "2022-03-09T17:18:20" }
true
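A hedged usage sketch for the newly added metrics follows; it assumes they are registered under the ids `"mse"` and `"mae"`, follow the usual `load_metric`/`compute` interface, and wrap the scikit-learn implementations.

```python
from datasets import load_metric

# Assumed metric ids; outputs below are what the scikit-learn
# mean_squared_error / mean_absolute_error functions would return.
mse = load_metric("mse")
mae = load_metric("mae")

predictions = [2.5, 0.0, 2.0, 8.0]
references = [3.0, -0.5, 2.0, 7.0]

print(mse.compute(predictions=predictions, references=references))  # e.g. {'mse': 0.375}
print(mae.compute(predictions=predictions, references=references))  # e.g. {'mae': 0.5}
```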
https://api.github.com/repos/huggingface/datasets/issues/3873
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3873/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3873/comments
https://api.github.com/repos/huggingface/datasets/issues/3873/events
https://github.com/huggingface/datasets/pull/3873
1,163,961,578
PR_kwDODunzps40LGoV
3,873
Create SQuAD metric README.md
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3873). All of your documentation changes will be reflected on that endpoint.", "Oh one last thing I almost forgot, I think I would add a section \"Examples\" with examples of inputs and outputs and in particular: an example giving maximal values, an examples giving minimal values and maybe a standard examples from SQuAD. What do you think?" ]
"2022-03-09T13:47:08"
"2022-03-10T16:45:57"
"2022-03-10T16:45:57"
NONE
null
Proposal for a metrics card structure (with an example based on the SQuAD metric). @thomwolf @lhoestq @douwekiela @lewtun -- feel free to comment on structure or content (it's an initial draft, so I realize there's stuff missing!).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3873/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3873", "html_url": "https://github.com/huggingface/datasets/pull/3873", "diff_url": "https://github.com/huggingface/datasets/pull/3873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3873.patch", "merged_at": "2022-03-10T16:45:57" }
true
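Along the lines of the "Examples" section suggested in the comments above, here is a sketch of an input/output pair that yields maximal values with the SQuAD metric (the question id is arbitrary):

```python
from datasets import load_metric

squad_metric = load_metric("squad")

# A prediction that matches the reference answer exactly:
predictions = [{"id": "56e10a3be3433e1400422b22", "prediction_text": "1976"}]
references = [{
    "id": "56e10a3be3433e1400422b22",
    "answers": {"text": ["1976"], "answer_start": [97]},
}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```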
https://api.github.com/repos/huggingface/datasets/issues/3872
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3872/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3872/comments
https://api.github.com/repos/huggingface/datasets/issues/3872/events
https://github.com/huggingface/datasets/issues/3872
1,163,853,026
I_kwDODunzps5FXvzi
3,872
HTTP error 504 Server Error: Gateway Time-out
{ "login": "illiyas-sha", "id": 83509215, "node_id": "MDQ6VXNlcjgzNTA5MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/83509215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/illiyas-sha", "html_url": "https://github.com/illiyas-sha", "followers_url": "https://api.github.com/users/illiyas-sha/followers", "following_url": "https://api.github.com/users/illiyas-sha/following{/other_user}", "gists_url": "https://api.github.com/users/illiyas-sha/gists{/gist_id}", "starred_url": "https://api.github.com/users/illiyas-sha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/illiyas-sha/subscriptions", "organizations_url": "https://api.github.com/users/illiyas-sha/orgs", "repos_url": "https://api.github.com/users/illiyas-sha/repos", "events_url": "https://api.github.com/users/illiyas-sha/events{/privacy}", "received_events_url": "https://api.github.com/users/illiyas-sha/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "is pushing directly with git (and git-lfs) an option for you?", "I have installed git-lfs and doing this push with that\r\n", "yes but is there any way you could try pushing with `git` command line directly instead of `push_to_hub`?", "Okay. I didnt saved the dataset to my local machine. So, I processed the dataset and pushed it directly to the hub. I think I should try saving those dataset to my local machine by `save_to_disk` and then push it with git command line", "cc @lhoestq @albertvillanova @LysandreJik because maybe I'm giving dumb advice here πŸ˜… ", "`push_to_hub` is the preferred way of uploading a dataset to the Hub, which can then be reloaded with `load_dataset`. Feel free to try again and see if the server is working as expected now. Maybe we can add a retry mechanism in the meantime to workaround 504 errors.\r\n\r\nRegarding `save_to_disk`, this must only be used for local serialization (because it's uncompressed and compatible with memory-mapping). If you upload a dataset saved with `save_to_disk` to the Hub, then to reload it you will have to download/clone the repository locally by yourself and use `load_from_disk`." ]
"2022-03-09T12:03:37"
"2022-03-15T16:19:50"
"2022-03-15T16:19:50"
NONE
null
I am trying to push a large dataset (450,000+ records) with the help of `push_to_hub()`. While pushing, it fails with an error like this: ``` Traceback (most recent call last): File "data_split_speech.py", line 159, in <module> data_new_2.push_to_hub("user-name/dataset-name",private=True) File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub( File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub api.upload_file( File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file raise err File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file r.raise_for_status() File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet ``` Can anyone help me resolve this issue?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3872/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3872/timeline
null
completed
null
null
false
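A minimal sketch of the retry workaround mentioned in the last comment; the retried status codes, attempt count, and backoff values are assumptions, not part of `datasets` itself.

```python
import time

from requests.exceptions import HTTPError


def push_with_retries(ds, repo_id, max_retries=5, wait_s=30, **kwargs):
    """Call push_to_hub, retrying on transient gateway errors such as 504."""
    for attempt in range(1, max_retries + 1):
        try:
            return ds.push_to_hub(repo_id, **kwargs)
        except HTTPError as err:
            status = err.response.status_code if err.response is not None else None
            if status not in (502, 503, 504) or attempt == max_retries:
                raise
            time.sleep(wait_s * attempt)  # simple linear backoff between attempts


# e.g. push_with_retries(data_new_2, "user-name/dataset-name", private=True)
```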
https://api.github.com/repos/huggingface/datasets/issues/3871
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3871/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3871/comments
https://api.github.com/repos/huggingface/datasets/issues/3871/events
https://github.com/huggingface/datasets/pull/3871
1,163,714,113
PR_kwDODunzps40KRcM
3,871
add pandas to env command
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3871). All of your documentation changes will be reflected on that endpoint.", "Think failures are unrelated - feel free to merge whenever you want :-)" ]
"2022-03-09T09:48:51"
"2022-03-09T11:21:38"
"2022-03-09T11:21:37"
CONTRIBUTOR
null
Pandas is a required package and is used quite a bit. I don't see any downside to adding its version to the `datasets-cli env` command.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3871/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3871", "html_url": "https://github.com/huggingface/datasets/pull/3871", "diff_url": "https://github.com/huggingface/datasets/pull/3871.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3871.patch", "merged_at": "2022-03-09T11:21:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/3870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3870/comments
https://api.github.com/repos/huggingface/datasets/issues/3870/events
https://github.com/huggingface/datasets/pull/3870
1,163,633,239
PR_kwDODunzps40KAYy
3,870
Add wikitablequestions dataset
{ "login": "SivilTaram", "id": 10275209, "node_id": "MDQ6VXNlcjEwMjc1MjA5", "avatar_url": "https://avatars.githubusercontent.com/u/10275209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SivilTaram", "html_url": "https://github.com/SivilTaram", "followers_url": "https://api.github.com/users/SivilTaram/followers", "following_url": "https://api.github.com/users/SivilTaram/following{/other_user}", "gists_url": "https://api.github.com/users/SivilTaram/gists{/gist_id}", "starred_url": "https://api.github.com/users/SivilTaram/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SivilTaram/subscriptions", "organizations_url": "https://api.github.com/users/SivilTaram/orgs", "repos_url": "https://api.github.com/users/SivilTaram/repos", "events_url": "https://api.github.com/users/SivilTaram/events{/privacy}", "received_events_url": "https://api.github.com/users/SivilTaram/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq Would you mind reviewing it when you're available? Thanks!\r\n", "> Awesome thanks for adding this dataset ! :) The dataset script and dataset cards look pretty good\r\n> \r\n> It looks like your `dummy_data.zip` files are quite big though (>1MB each), do you think we can reduce their sizes ? This way this git repository doesn't become too big\r\n\r\nI have manually reduced the `dummy_data.zip` and its current size is about 54KB. Hope it is fine for you!", "@lhoestq I think the dataset is ready to merge now. Any follow-up question is welcome :-D", "> Thanks ! It looks all good now :)\r\n\r\nAwesome! Thanks for your quick response!" ]
"2022-03-09T08:27:43"
"2022-03-14T11:19:24"
"2022-03-14T11:16:19"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3870/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3870", "html_url": "https://github.com/huggingface/datasets/pull/3870", "diff_url": "https://github.com/huggingface/datasets/pull/3870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3870.patch", "merged_at": "2022-03-14T11:16:19" }
true
https://api.github.com/repos/huggingface/datasets/issues/3869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3869/comments
https://api.github.com/repos/huggingface/datasets/issues/3869/events
https://github.com/huggingface/datasets/issues/3869
1,163,434,800
I_kwDODunzps5FWJsw
3,869
Making the Hub the place for datasets in Portuguese
{ "login": "omarespejel", "id": 4755430, "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omarespejel", "html_url": "https://github.com/omarespejel", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "repos_url": "https://api.github.com/users/omarespejel/repos", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Hi @omarespejel! I think the philosophy for `datasets` issues is to create concrete issues with proposals to add a specific, individual dataset rather than umbrella issues for things such as datasets for a language, since we could end up with hundreds of issues (one per language). I see NILC - USP has many datasets, I would suggest to either create an issue for their datasets, or even better, we are trying to push to upload datasets as community datasets instead of adding them to the core library as guided in https://huggingface.co/docs/datasets/share. That would have the additional benefit that the dataset would live under the NILC organization.\r\n\r\n@lhoestq correct me if I'm wrong please πŸ˜„ " ]
"2022-03-09T03:06:18"
"2022-03-09T09:04:09"
null
NONE
null
Let's make Hugging Face Datasets the central hub for datasets in Portuguese :) **Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese-speaking community. What are some datasets in Portuguese worth integrating into the Hugging Face Hub? Special thanks to @augusnunes for his collaboration on identifying the first ones: - [NILC - USP](http://www.nilc.icmc.usp.br/nilc/index.php/tools-and-resources). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). cc @osanseviero
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3869/timeline
null
null
null
null
false
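As a sketch of the community-dataset route suggested in the comment above (run after `huggingface-cli login`); the file name and repo id are placeholders, not an existing corpus:

```python
from datasets import load_dataset

# Placeholder file and repo id for one of the NILC - USP corpora.
ds = load_dataset("csv", data_files="nilc_corpus.csv")
ds.push_to_hub("nilc-usp/nilc-corpus")
```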
https://api.github.com/repos/huggingface/datasets/issues/3868
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3868/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3868/comments
https://api.github.com/repos/huggingface/datasets/issues/3868/events
https://github.com/huggingface/datasets/pull/3868
1,162,914,114
PR_kwDODunzps40HnWA
3,868
Ignore duplicate keys if `ignore_verifications=True`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3868). All of your documentation changes will be reflected on that endpoint.", "Cool thanks ! Could you add a test please ?" ]
"2022-03-08T17:14:56"
"2022-03-09T13:50:45"
"2022-03-09T13:50:44"
CONTRIBUTOR
null
Currently, it's impossible to generate a dataset if some keys from `_generate_examples` are duplicated. This PR allows skipping the check for duplicate keys if `ignore_verifications` is set to `True`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3868/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3868", "html_url": "https://github.com/huggingface/datasets/pull/3868", "diff_url": "https://github.com/huggingface/datasets/pull/3868.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3868.patch", "merged_at": "2022-03-09T13:50:44" }
true
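With this change, a hedged usage sketch would look as follows; `"my_script_dataset"` is a placeholder for any loading script whose `_generate_examples` yields duplicate keys:

```python
from datasets import load_dataset

# ignore_verifications=True skips the verifications, which with this PR
# includes the duplicate-key check on keys yielded by _generate_examples.
ds = load_dataset("my_script_dataset", ignore_verifications=True)
```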
https://api.github.com/repos/huggingface/datasets/issues/3867
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3867/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3867/comments
https://api.github.com/repos/huggingface/datasets/issues/3867/events
https://github.com/huggingface/datasets/pull/3867
1,162,896,605
PR_kwDODunzps40Hjrk
3,867
Update for the rename doc-builder -> hf-doc-utils
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "why utils? it's a builder no?", "~~@julien-c there was a vote πŸ™‚ https://huggingface.slack.com/archives/C021H1P1HKR/p1646405136644739~~\r\n\r\noh I see you already commeented in the thread as well", "Thanks ! It looks all good to me (provided `hf-doc-utils` is the name we keep in the end). I'm fine with this name, and `hf-doc-builder` is also fine IMHO", "ok, this is definitely not a hill I'll die on =) @mishig25 @sgugger " ]
"2022-03-08T16:58:25"
"2023-09-24T09:54:44"
"2022-03-08T17:30:45"
CONTRIBUTOR
null
This PR adapts the job to the upcoming rename of `doc-builder`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3867/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3867", "html_url": "https://github.com/huggingface/datasets/pull/3867", "diff_url": "https://github.com/huggingface/datasets/pull/3867.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3867.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3866
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3866/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3866/comments
https://api.github.com/repos/huggingface/datasets/issues/3866/events
https://github.com/huggingface/datasets/pull/3866
1,162,833,848
PR_kwDODunzps40HWcu
3,866
Bring back imgs so that forks don't get broken
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3866). All of your documentation changes will be reflected on that endpoint.", "I think we just need to keep `datasets_logo_name.jpg` and `course_banner.png` because they appear in the README.md of the forks of `datasets`. The other images can be removed", "Force pushed those two imgs only" ]
"2022-03-08T16:01:31"
"2022-03-08T17:37:02"
"2022-03-08T17:37:01"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3866/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3866", "html_url": "https://github.com/huggingface/datasets/pull/3866", "diff_url": "https://github.com/huggingface/datasets/pull/3866.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3866.patch", "merged_at": "2022-03-08T17:37:01" }
true
https://api.github.com/repos/huggingface/datasets/issues/3865
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3865/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3865/comments
https://api.github.com/repos/huggingface/datasets/issues/3865/events
https://github.com/huggingface/datasets/pull/3865
1,162,821,908
PR_kwDODunzps40HT9K
3,865
Add logo img
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3865). All of your documentation changes will be reflected on that endpoint.", "Superceded by https://github.com/huggingface/datasets/pull/3866" ]
"2022-03-08T15:50:59"
"2023-09-24T09:54:31"
"2022-03-08T16:01:59"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3865/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3865", "html_url": "https://github.com/huggingface/datasets/pull/3865", "diff_url": "https://github.com/huggingface/datasets/pull/3865.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3865.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3864/comments
https://api.github.com/repos/huggingface/datasets/issues/3864/events
https://github.com/huggingface/datasets/pull/3864
1,162,804,942
PR_kwDODunzps40HQZ_
3,864
Update image dataset tags
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3864). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-08T15:36:32"
"2022-03-08T17:04:47"
"2022-03-08T17:04:46"
CONTRIBUTOR
null
Align the existing image datasets' tags with new tags introduced in #3800.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3864/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3864", "html_url": "https://github.com/huggingface/datasets/pull/3864", "diff_url": "https://github.com/huggingface/datasets/pull/3864.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3864.patch", "merged_at": "2022-03-08T17:04:46" }
true
https://api.github.com/repos/huggingface/datasets/issues/3863
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3863/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3863/comments
https://api.github.com/repos/huggingface/datasets/issues/3863/events
https://github.com/huggingface/datasets/pull/3863
1,162,802,857
PR_kwDODunzps40HP-A
3,863
Update code blocks
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3863). All of your documentation changes will be reflected on that endpoint." ]
"2022-03-08T15:34:43"
"2022-03-09T16:45:30"
"2022-03-09T16:45:29"
MEMBER
null
Following https://github.com/huggingface/datasets/pull/3860#issuecomment-1061756712 and https://github.com/huggingface/datasets/pull/3690 we need to update the code blocks to use markdown instead of sphinx
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3863/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3863", "html_url": "https://github.com/huggingface/datasets/pull/3863", "diff_url": "https://github.com/huggingface/datasets/pull/3863.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3863.patch", "merged_at": "2022-03-09T16:45:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/3862
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3862/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3862/comments
https://api.github.com/repos/huggingface/datasets/issues/3862/events
https://github.com/huggingface/datasets/pull/3862
1,162,753,733
PR_kwDODunzps40HFht
3,862
Manipulate columns on IterableDataset (rename columns, cast, etc.)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3862). All of your documentation changes will be reflected on that endpoint.", "> IIUC we check if columns are present/not present directly in the yielded examples and not in info.features because info.features can be None (after map, for instance)?\r\n\r\nYes exactly\r\n\r\n> We should develop a solution that ensures info.features is never None. For example, one approach would be to infer them from examples in map and make them promotable from Value(\"null\") to a specific type, in case of None values.\r\n\r\nI agree this would be useful. Though inferring the type requires to start streaming some data, which takes a few seconds (compared to being instantaneous right now).\r\n\r\nLet's discuss this in a new issue maybe ?" ]
"2022-03-08T14:53:57"
"2022-03-10T16:40:22"
"2022-03-10T16:40:21"
MEMBER
null
I added: - add_column - cast - rename_column - rename_columns related to https://github.com/huggingface/datasets/issues/3444 TODO: - [x] docs - [x] tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3862/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3862", "html_url": "https://github.com/huggingface/datasets/pull/3862", "diff_url": "https://github.com/huggingface/datasets/pull/3862.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3862.patch", "merged_at": "2022-03-10T16:40:21" }
true
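A short sketch of the new column operations on a streaming dataset; the CSV file is a placeholder and is assumed to have a single column `old_name`:

```python
from datasets import Features, Value, load_dataset

# Any streaming dataset works the same way; "data.csv" is hypothetical.
ds = load_dataset("csv", data_files="data.csv", split="train", streaming=True)

ds = ds.rename_column("old_name", "new_name")
ds = ds.cast(Features({"new_name": Value("string")}))

print(next(iter(ds)))  # the transforms are applied lazily, example by example
```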
https://api.github.com/repos/huggingface/datasets/issues/3861
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3861/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3861/comments
https://api.github.com/repos/huggingface/datasets/issues/3861/events
https://github.com/huggingface/datasets/issues/3861
1,162,702,044
I_kwDODunzps5FTWzc
3,861
big_patent cased version
{ "login": "slvcsl", "id": 25265140, "node_id": "MDQ6VXNlcjI1MjY1MTQw", "avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slvcsl", "html_url": "https://github.com/slvcsl", "followers_url": "https://api.github.com/users/slvcsl/followers", "following_url": "https://api.github.com/users/slvcsl/following{/other_user}", "gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}", "starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions", "organizations_url": "https://api.github.com/users/slvcsl/orgs", "repos_url": "https://api.github.com/users/slvcsl/repos", "events_url": "https://api.github.com/users/slvcsl/events{/privacy}", "received_events_url": "https://api.github.com/users/slvcsl/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "To follow up on this: the cased and uncased versions actually contain different content, and the cased one is easier since it contains a Summary of the Invention in the input.\r\n\r\nSee the paper describing the issue here:\r\nhttps://aclanthology.org/2022.gem-1.34/", "Thanks for proposing the addition of the cased version of this dataset and for pinging again recently.\r\n\r\nI have just merged a PR that adds the cased version: https://huggingface.co/datasets/big_patent/discussions/3\r\n\r\nThe cased version (2.1.2) is the default one:\r\n```python\r\nds = load_dataset(\"big_patent\", \"all\")\r\n```\r\n\r\nTo use the 1.0.0 version (lower cased tokenized words), pass both parameters `codes` and `version`:\r\n```python\r\nds = load_dataset(\"big_patent\", codes=\"all\", version=\"1.0.0\")\r\n```\r\n\r\nClosed by: https://huggingface.co/datasets/big_patent/discussions/3" ]
"2022-03-08T14:08:55"
"2023-04-21T14:32:03"
"2023-04-21T14:32:03"
NONE
null
Hi! I am interested in working with the big_patent dataset. In TensorFlow, there are a number of versions of the dataset: - 1.0.0: lower-cased tokenized words - 2.0.0: update to use cased raw strings - 2.1.2 (default): fix update to cased raw strings. The version in the huggingface `datasets` library is 1.0.0. I would be very interested in using the 2.1.2 cased version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`). Is there a way to load it already, or would it be possible to add that version?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3861/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3861/timeline
null
completed
null
null
false