Dataset schema (column name, dtype, value statistics):

url: string (length 58 to 61)
repository_url: string (1 distinct value)
labels_url: string (length 72 to 75)
comments_url: string (length 67 to 70)
events_url: string (length 65 to 68)
html_url: string (length 46 to 51)
id: int64 (599M to 1.62B)
node_id: string (length 18 to 32)
number: int64 (1 to 5.62k)
title: string (length 1 to 290)
user: dict
labels: list
state: string (1 distinct value)
locked: bool (1 class)
assignee: dict
assignees: list
milestone: dict
comments: sequence
created_at: unknown
updated_at: unknown
closed_at: unknown
author_association: string (3 distinct values)
active_lock_reason: null
body: string (length 0 to 228k)
reactions: dict
timeline_url: string (length 67 to 70)
performed_via_github_app: null
state_reason: string (2 distinct values)
draft: bool (2 classes)
pull_request: dict
is_pull_request: bool (2 classes)
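The records below are sample rows, with one field value per line in the schema order above; each row is a GitHub issue or pull request from the huggingface/datasets repository. A dataset with this schema could be loaded and inspected with the `datasets` library, as in the minimal sketch below; the repository id "user/github-issues" is a placeholder for illustration, not the actual source of this dump.

```python
# Minimal sketch (illustrative only): loading a GitHub-issues dataset with the
# schema listed above. "user/github-issues" is a hypothetical repository id.
from datasets import load_dataset

ds = load_dataset("user/github-issues", split="train")

print(ds.features)                 # column names and dtypes, as in the schema above
example = ds[0]
print(example["title"])            # e.g. "Make yelp_polarity streamable"
print(example["is_pull_request"])  # True for pull requests, False for plain issues
```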
https://api.github.com/repos/huggingface/datasets/issues/4019
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4019/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4019/comments
https://api.github.com/repos/huggingface/datasets/issues/4019/events
https://github.com/huggingface/datasets/pull/4019
1,180,628,293
PR_kwDODunzps41AlFk
4,019
Make yelp_polarity streamable
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-25T10:42:51"
"2022-03-25T15:02:19"
"2022-03-25T14:57:16"
MEMBER
null
It was using `dl_manager.download_and_extract` on a TAR archive, which is not supported in streaming mode. I replaced this by `dl_manager.iter_archive`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4019/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4019", "html_url": "https://github.com/huggingface/datasets/pull/4019", "diff_url": "https://github.com/huggingface/datasets/pull/4019.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4019.patch", "merged_at": "2022-03-25T14:57:15" }
true
https://api.github.com/repos/huggingface/datasets/issues/4018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4018/comments
https://api.github.com/repos/huggingface/datasets/issues/4018/events
https://github.com/huggingface/datasets/pull/4018
1,180,622,816
PR_kwDODunzps41Aj7g
4,018
Replace yelp_review_full data url
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-25T10:37:18"
"2022-03-25T15:01:02"
"2022-03-25T14:56:10"
MEMBER
null
I replaced the Google Drive URL of the Yelp review dataset by the FastAI one, since we've had some issues with Google Drive. Close https://github.com/huggingface/datasets/issues/4005
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4018/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4018", "html_url": "https://github.com/huggingface/datasets/pull/4018", "diff_url": "https://github.com/huggingface/datasets/pull/4018.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4018.patch", "merged_at": "2022-03-25T14:56:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/4017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4017/comments
https://api.github.com/repos/huggingface/datasets/issues/4017/events
https://github.com/huggingface/datasets/pull/4017
1,180,595,160
PR_kwDODunzps41Ad_L
4,017
Support streaming scan dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-25T10:11:28"
"2022-03-25T12:08:55"
"2022-03-25T12:03:52"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4017/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4017", "html_url": "https://github.com/huggingface/datasets/pull/4017", "diff_url": "https://github.com/huggingface/datasets/pull/4017.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4017.patch", "merged_at": "2022-03-25T12:03:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/4016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4016/comments
https://api.github.com/repos/huggingface/datasets/issues/4016/events
https://github.com/huggingface/datasets/pull/4016
1,180,557,828
PR_kwDODunzps41AWBk
4,016
Support streaming blimp dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-25T09:39:10"
"2022-03-25T11:19:18"
"2022-03-25T11:14:13"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4016/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4016", "html_url": "https://github.com/huggingface/datasets/pull/4016", "diff_url": "https://github.com/huggingface/datasets/pull/4016.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4016.patch", "merged_at": "2022-03-25T11:14:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/4015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4015/comments
https://api.github.com/repos/huggingface/datasets/issues/4015/events
https://github.com/huggingface/datasets/issues/4015
1,180,510,856
I_kwDODunzps5GXSqI
4,015
Can not correctly parse the classes with imagefolder
{ "login": "YiSyuanChen", "id": 21264909, "node_id": "MDQ6VXNlcjIxMjY0OTA5", "avatar_url": "https://avatars.githubusercontent.com/u/21264909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YiSyuanChen", "html_url": "https://github.com/YiSyuanChen", "followers_url": "https://api.github.com/users/YiSyuanChen/followers", "following_url": "https://api.github.com/users/YiSyuanChen/following{/other_user}", "gists_url": "https://api.github.com/users/YiSyuanChen/gists{/gist_id}", "starred_url": "https://api.github.com/users/YiSyuanChen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YiSyuanChen/subscriptions", "organizations_url": "https://api.github.com/users/YiSyuanChen/orgs", "repos_url": "https://api.github.com/users/YiSyuanChen/repos", "events_url": "https://api.github.com/users/YiSyuanChen/events{/privacy}", "received_events_url": "https://api.github.com/users/YiSyuanChen/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I found that the problem arises because the image files in my folder are actually symbolic links (for my own reasons). After modifications, the classes can now be correctly parsed. Therefore, I close this issue.", "HI, I have a question. How much time did you load the ImageNet data files? " ]
"2022-03-25T08:51:17"
"2022-03-28T01:02:03"
"2022-03-25T09:27:56"
NONE
null
## Describe the bug I try to load my own image dataset with imagefolder, but the parsing of classes is incorrect. ## Steps to reproduce the bug I organized my dataset (ImageNet) in the following structure: ``` - imagenet/ - train/ - n01440764/ - ILSVRC2012_val_00000293.jpg - ...... - n01695060/ - ...... - val/ - n01440764/ - n01695060/ - ...... ``` At first, I followed the instructions from the Huggingface [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification#using-your-own-data) to load my data as: ``` from datasets import load_dataset data_files = {'train': 'imagenet/train', 'val': 'imagenet/val'} ds = load_dataset("nateraw/image-folder", data_files=data_files, task="image-classification") ``` but it resulted following error (I mask my personal path as <PERSONAL_PATH>): ``` FileNotFoundError: Unable to find 'https://huggingface.co/datasets/nateraw/image-folder/resolve/main/imagenet/train' at <PERSONAL_PATH>/ImageNet/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main ``` Next, I followed a recent issue #3960 to load data as: ``` from datasets import load_dataset data_files = {'train': ['imagenet/train/**'], 'val': ['imagenet/val/**']} ds = load_dataset("imagefolder", data_files=data_files, task="image-classification") ``` and the data can be loaded without error as: (I copy val folder to train folder for illustration) ``` >>> ds DatasetDict({ train: Dataset({ features: ['image', 'labels'], num_rows: 50000 }) val: Dataset({ features: ['image', 'labels'], num_rows: 50000 }) }) ``` However, the parsed classes is wrong (should be 1000 classes): ``` >>> ds["train"].features {'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=1, names=['val'], id=None)} ``` ## Expected results I expect that the "labels" in ds["train"].features should contain 1000 classes. ## Actual results The "labels" in ds["train"].features contains only 1 wrong class. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Ubuntu 18.04 - Python version: Python 3.7.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4015/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4014/comments
https://api.github.com/repos/huggingface/datasets/issues/4014/events
https://github.com/huggingface/datasets/pull/4014
1,180,481,229
PR_kwDODunzps41AGBu
4,014
Support streaming id_clickbait dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-25T08:18:28"
"2022-03-25T08:58:31"
"2022-03-25T08:53:32"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4014/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4014", "html_url": "https://github.com/huggingface/datasets/pull/4014", "diff_url": "https://github.com/huggingface/datasets/pull/4014.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4014.patch", "merged_at": "2022-03-25T08:53:32" }
true
https://api.github.com/repos/huggingface/datasets/issues/4013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4013/comments
https://api.github.com/repos/huggingface/datasets/issues/4013/events
https://github.com/huggingface/datasets/issues/4013
1,180,427,174
I_kwDODunzps5GW-Om
4,013
Cannot preview "hazal/Turkish-Biomedical-corpus-trM"
{ "login": "hazalturkmen", "id": 42860397, "node_id": "MDQ6VXNlcjQyODYwMzk3", "avatar_url": "https://avatars.githubusercontent.com/u/42860397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hazalturkmen", "html_url": "https://github.com/hazalturkmen", "followers_url": "https://api.github.com/users/hazalturkmen/followers", "following_url": "https://api.github.com/users/hazalturkmen/following{/other_user}", "gists_url": "https://api.github.com/users/hazalturkmen/gists{/gist_id}", "starred_url": "https://api.github.com/users/hazalturkmen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hazalturkmen/subscriptions", "organizations_url": "https://api.github.com/users/hazalturkmen/orgs", "repos_url": "https://api.github.com/users/hazalturkmen/repos", "events_url": "https://api.github.com/users/hazalturkmen/events{/privacy}", "received_events_url": "https://api.github.com/users/hazalturkmen/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @hazalturkmen, thanks for reporting.\r\n\r\nNote that your dataset repository does not contain any loading script; it only contains a data file named `tr_article_2`.\r\n\r\nWhen there is no loading script but only data files, the `datasets` library tries to infer how to load the data by looking at the data file extensions. However, your data file does not have any extension.\r\n\r\nNote that current supported data file extensions are: 'csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'.\r\n\r\nYou have more info on our docs: [How to share a dataset](https://huggingface.co/docs/datasets/share).", "thanks for reply :)" ]
"2022-03-25T07:12:02"
"2022-04-04T08:05:01"
"2022-03-25T14:16:11"
NONE
null
## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM' **Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM* *I cannot see the dataset preview.* ``` Server Error Status code: 400 Exception: HTTPError Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/hazal/Turkish-Biomedical-corpus-trM?full=true ``` Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4013/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4012/comments
https://api.github.com/repos/huggingface/datasets/issues/4012/events
https://github.com/huggingface/datasets/pull/4012
1,180,350,083
PR_kwDODunzps40_qgo
4,012
Rename wer to cer
{ "login": "pmgautam", "id": 28428143, "node_id": "MDQ6VXNlcjI4NDI4MTQz", "avatar_url": "https://avatars.githubusercontent.com/u/28428143?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pmgautam", "html_url": "https://github.com/pmgautam", "followers_url": "https://api.github.com/users/pmgautam/followers", "following_url": "https://api.github.com/users/pmgautam/following{/other_user}", "gists_url": "https://api.github.com/users/pmgautam/gists{/gist_id}", "starred_url": "https://api.github.com/users/pmgautam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pmgautam/subscriptions", "organizations_url": "https://api.github.com/users/pmgautam/orgs", "repos_url": "https://api.github.com/users/pmgautam/repos", "events_url": "https://api.github.com/users/pmgautam/events{/privacy}", "received_events_url": "https://api.github.com/users/pmgautam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-25T05:06:05"
"2022-03-28T13:57:25"
"2022-03-28T13:57:25"
CONTRIBUTOR
null
wer variable changed to cer in README file
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4012/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4012", "html_url": "https://github.com/huggingface/datasets/pull/4012", "diff_url": "https://github.com/huggingface/datasets/pull/4012.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4012.patch", "merged_at": "2022-03-28T13:57:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/4010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4010/comments
https://api.github.com/repos/huggingface/datasets/issues/4010/events
https://github.com/huggingface/datasets/pull/4010
1,179,848,036
PR_kwDODunzps409_QV
4,010
Fix None issue with Sequence of dict
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-24T17:58:59"
"2022-03-28T10:13:53"
"2022-03-28T10:08:40"
MEMBER
null
`Features.encode_example` currently fails if it contains a sequence if dict like `Sequence({"subcolumn": Value("int32")})` and if `None` is passed instead of the dict. ```python File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 1310, in encode_example return encode_nested_example(self, example) File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 973, in encode_nested_example return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)} File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 973, in <dictcomp> return {k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in zip_dict(schema, obj)} File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/features/features.py", line 998, in encode_nested_example for k, (sub_schema, sub_objs) in zip_dict(schema.feature, obj): File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/utils/py_utils.py", line 207, in zip_dict yield key, tuple(d[key] for d in dicts) File "/Users/quentinlhoest/Desktop/hf/datasets/src/datasets/utils/py_utils.py", line 207, in <genexpr> yield key, tuple(d[key] for d in dicts) TypeError: 'NoneType' object is not subscriptable ``` I fixed this issue and updated the tests (this case was missing in the tests)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4010/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4010", "html_url": "https://github.com/huggingface/datasets/pull/4010", "diff_url": "https://github.com/huggingface/datasets/pull/4010.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4010.patch", "merged_at": "2022-03-28T10:08:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/4009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4009/comments
https://api.github.com/repos/huggingface/datasets/issues/4009/events
https://github.com/huggingface/datasets/issues/4009
1,179,658,611
I_kwDODunzps5GUClz
4,009
AMI load_dataset error: sndfile library not found
{ "login": "i-am-neo", "id": 102043285, "node_id": "U_kgDOBhUOlQ", "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/i-am-neo", "html_url": "https://github.com/i-am-neo", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "repos_url": "https://api.github.com/users/i-am-neo/repos", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Issue unresolved, see [4000](https://github.com/huggingface/datasets/issues/4009#issue-1179658611)" ]
"2022-03-24T15:13:38"
"2022-03-24T15:46:38"
"2022-03-24T15:17:29"
NONE
null
## Describe the bug Getting error message when loading AMI dataset. ## Steps to reproduce the bug `python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ` ## Expected results A clear and concise description of the expected results. ## Actual results Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4009/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4008/comments
https://api.github.com/repos/huggingface/datasets/issues/4008/events
https://github.com/huggingface/datasets/pull/4008
1,179,591,068
PR_kwDODunzps409Ixp
4,008
Support streaming daily_dialog dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-24T14:23:23"
"2022-03-24T15:29:01"
"2022-03-24T14:46:58"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4008/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4008", "html_url": "https://github.com/huggingface/datasets/pull/4008", "diff_url": "https://github.com/huggingface/datasets/pull/4008.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4008.patch", "merged_at": "2022-03-24T14:46:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/4007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4007/comments
https://api.github.com/repos/huggingface/datasets/issues/4007/events
https://github.com/huggingface/datasets/issues/4007
1,179,381,021
I_kwDODunzps5GS-0d
4,007
set_format does not work with multi dimension tensor
{ "login": "phihung", "id": 5902432, "node_id": "MDQ6VXNlcjU5MDI0MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phihung", "html_url": "https://github.com/phihung", "followers_url": "https://api.github.com/users/phihung/followers", "following_url": "https://api.github.com/users/phihung/following{/other_user}", "gists_url": "https://api.github.com/users/phihung/gists{/gist_id}", "starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phihung/subscriptions", "organizations_url": "https://api.github.com/users/phihung/orgs", "repos_url": "https://api.github.com/users/phihung/repos", "events_url": "https://api.github.com/users/phihung/events{/privacy}", "received_events_url": "https://api.github.com/users/phihung/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! Use the `ArrayXD` feature type (where X is the number of dimensions) to get correctly formated tensors. So in your case, define the dataset as follows :\r\n```python\r\nds = Dataset.from_dict({\"A\": [torch.rand((2, 2))]}, features=Features({\"A\": Array2D(shape=(2, 2), dtype=\"float32\")}))\r\n```\r\n", "Hi @mariosasko I'm facing the same issue and the only work around I've found so far is to convert my `DatasetDict` to a dictionary and then create new objects with `Dataset.from_dict`.\r\n```\r\ndataset = load_dataset(\"my_dataset.py\")\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndict_dataset_test = dataset[\"test\"].to_dict()\r\n...\r\ndataset_test = Dataset.from_dict(dict_dataset_test, features=Features(features))\r\n```\r\nHowever, converting a `Dataset` object to a dict takes quite a lot of time and memory... Is there a way to directly create an `Array2D` without having to transform the original `Dataset` to a dict?", "Hi! Yes, you can directly pass the `Features` dictionary as `features` in `map` to cast the column to `Array2D`:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example), features=Features(features))\r\n```\r\nOr you can use `cast` after `map` to do that:\r\n```python\r\ndataset = dataset.map(lambda example: blabla(example))\r\ndataset = dataset.cast(Features(features))\r\n```", "Fantastic thank you @mariosasko\r\nThe first option you suggested is indeed way faster 😃 " ]
"2022-03-24T11:27:43"
"2022-03-30T07:28:57"
"2022-03-24T14:39:29"
NONE
null
## Describe the bug set_format only transforms the last dimension of a multi-dimension list to tensor ## Steps to reproduce the bug ```python import torch from datasets import Dataset ds = Dataset.from_dict({"A": [torch.rand((2, 2))]}) # ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result ds = ds.with_format("torch") print(ds[0]) ``` ## Expected results ``` {'A': [tensor([[0.6689, 0.1516], [0.1403, 0.5567]])]} ``` ## Actual results ``` {'A': [tensor([0.6689, 0.1516]), tensor([0.1403, 0.5567])]} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - datasets version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4007/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4006/comments
https://api.github.com/repos/huggingface/datasets/issues/4006/events
https://github.com/huggingface/datasets/pull/4006
1,179,367,195
PR_kwDODunzps408YnW
4,006
Use audio feature in ASR task template
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-24T11:15:22"
"2022-03-24T17:19:29"
"2022-03-24T16:48:02"
MEMBER
null
The AutomaticSpeechRecognition task template is outdated: it still uses the file path column as input instead of the audio column. I changed that and updated all the datasets as well as the tests. The only community dataset that will need to be updated is `facebook/multilingual_librispeech`. It has almost zero usage unfortunately (probably because users load the duplicate `multilingual_librispeech` directly instead), but it means we can update it. (this makes me think that we should deprecate `multilingual_librispeech` it and redirect users to `facebook/multilingual_librispeech`). This PR is also useful for the AudioFolder in https://github.com/huggingface/datasets/pull/3963
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4006/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4006/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4006", "html_url": "https://github.com/huggingface/datasets/pull/4006", "diff_url": "https://github.com/huggingface/datasets/pull/4006.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4006.patch", "merged_at": "2022-03-24T16:48:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/4005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4005/comments
https://api.github.com/repos/huggingface/datasets/issues/4005/events
https://github.com/huggingface/datasets/issues/4005
1,179,365,663
I_kwDODunzps5GS7Ef
4,005
Yelp not working
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I don't think it's an issue with the dataset-viewer. Maybe @lhoestq or @albertvillanova could confirm.\r\n\r\n```python\r\n>>> from datasets import load_dataset, DownloadMode\r\n>>> import itertools\r\n>>> # without streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"train\", download_mode=DownloadMode.FORCE_REDOWNLOAD)\r\n\r\nDownloading builder script: 4.39kB [00:00, 5.97MB/s]\r\nDownloading metadata: 2.13kB [00:00, 3.14MB/s]\r\nDownloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /home/slesage/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43...\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.10k/1.10k [00:00<00:00, 1.39MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1687, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1104, in _download_and_prepare\r\n super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 676, in _download_and_prepare\r\n verify_checksums(\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0']\r\n\r\n>>> # with streaming\r\n>>> dataset = load_dataset(\"yelp_review_full\", name=\"yelp_review_full\", split=\"train\", download_mode=DownloadMode.FORCE_REDOWNLOAD, streaming=True)\r\n\r\nDownloading builder script: 4.39kB [00:00, 5.53MB/s]\r\nDownloading metadata: 2.13kB [00:00, 3.14MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 375, in _info\r\n await _file_info(\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 736, in _file_info\r\n r.raise_for_status()\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/aiohttp/client_reqrep.py\", line 1000, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://doc-0g-bs-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/gklhpdq1arj8v15qrg7ces34a8c3413d/1648144575000/07511006523564980941/*/0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0?e=download')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1677, in load_dataset\r\n return builder_instance.as_streaming_dataset(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 906, in 
as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/yelp_review_full/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43/yelp_review_full.py\", line 102, in _split_generators\r\n data_dir = dl_manager.download_and_extract(my_urls)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 800, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 778, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/py_utils.py\", line 306, in map_nested\r\n return function(data_struct)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 783, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/home/slesage/hf/datasets/src/datasets/utils/streaming_download_manager.py\", line 372, in _get_extraction_protocol\r\n with fsspec.open(urlpath, **kwargs) as f:\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/core.py\", line 102, in __enter__\r\n f = self.fs.open(self.path, mode=mode)\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/spec.py\", line 978, in open\r\n f = self._open(\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 335, in _open\r\n size = size or self.info(path, **kwargs)[\"size\"]\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 88, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 69, in sync\r\n raise result[0]\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/fsspec/implementations/http.py\", line 388, in _info\r\n raise FileNotFoundError(url) from exc\r\nFileNotFoundError: https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0&confirm=t\r\n```\r\n\r\nAnd this is before even trying to access the rows with\r\n\r\n```python\r\n>>> rows = list(itertools.islice(dataset, 100))\r\n>>> rows = list(dataset.take(100))\r\n```", "Yet another issue related to google drive not being nice. Most likely your IP has been banned from using their API programmatically. Do you know if we are allowed to host and redistribute the data ourselves ?", "Hi,\r\n\r\nFacing the same issue while loading the dataset: \r\n\r\n`Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files`\r\n\r\nThanks", "> Facing the same issue while loading the dataset:\r\n> \r\n> Error: {NonMatchingChecksumError}Checksums didn't match for dataset source files\r\n\r\nThanks for reporting. I think this is the same issue. Feel free to try again later, once Google Drive stopped blocking you. 
You can retry by passing `download_mode=\"force_redownload\"` to `load_dataset`", "I noticed that FastAI hosts the Yelp dataset at https://s3.amazonaws.com/fast-ai-nlp/yelp_review_full_csv.tgz (from their catalog [here](https://course.fast.ai/datasets))\r\n\r\nLet's update the yelp dataset script to download from there instead of Google Drive", "I updated the link to not use Google Drive anymore, we will do a release early next week with the updated download url of the dataset :)" ]
"2022-03-24T11:14:00"
"2022-03-25T14:59:57"
"2022-03-25T14:56:10"
MEMBER
null
## Dataset viewer issue for '*name of the dataset*' **Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train Doesn't work: ``` Server error Status code: 400 Exception: Error Message: line contains NULL ``` Am I the one who added this dataset ? No A seamingly copy of the dataset: https://huggingface.co/datasets/SetFit/yelp_review_full works . The original one: https://huggingface.co/datasets/yelp_review_full has > 20K downloads.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4005/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4004/comments
https://api.github.com/repos/huggingface/datasets/issues/4004/events
https://github.com/huggingface/datasets/pull/4004
1,179,320,795
PR_kwDODunzps408Onj
4,004
ASSIN 2 dataset: replace broken Google Drive _URLS by links on github
{ "login": "ruanchaves", "id": 14352388, "node_id": "MDQ6VXNlcjE0MzUyMzg4", "avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ruanchaves", "html_url": "https://github.com/ruanchaves", "followers_url": "https://api.github.com/users/ruanchaves/followers", "following_url": "https://api.github.com/users/ruanchaves/following{/other_user}", "gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}", "starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions", "organizations_url": "https://api.github.com/users/ruanchaves/orgs", "repos_url": "https://api.github.com/users/ruanchaves/repos", "events_url": "https://api.github.com/users/ruanchaves/events{/privacy}", "received_events_url": "https://api.github.com/users/ruanchaves/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-24T10:37:39"
"2022-03-28T14:01:46"
"2022-03-28T13:56:39"
CONTRIBUTOR
null
Closes #4003 . Fixes checksum error. Replaces Google Drive urls by the files hosted here: [Multilingual Transformer Ensembles for Portuguese Natural Language Tasks](https://github.com/ruanchaves/assin)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4004/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4004", "html_url": "https://github.com/huggingface/datasets/pull/4004", "diff_url": "https://github.com/huggingface/datasets/pull/4004.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4004.patch", "merged_at": "2022-03-28T13:56:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/4003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4003/comments
https://api.github.com/repos/huggingface/datasets/issues/4003/events
https://github.com/huggingface/datasets/issues/4003
1,179,286,877
I_kwDODunzps5GSn1d
4,003
ASSIN2 dataset checksum bug
{ "login": "ruanchaves", "id": 14352388, "node_id": "MDQ6VXNlcjE0MzUyMzg4", "avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ruanchaves", "html_url": "https://github.com/ruanchaves", "followers_url": "https://api.github.com/users/ruanchaves/followers", "following_url": "https://api.github.com/users/ruanchaves/following{/other_user}", "gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}", "starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions", "organizations_url": "https://api.github.com/users/ruanchaves/orgs", "repos_url": "https://api.github.com/users/ruanchaves/repos", "events_url": "https://api.github.com/users/ruanchaves/events{/privacy}", "received_events_url": "https://api.github.com/users/ruanchaves/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Using latest code, I am still facing the issue.\r\n\r\n```python\r\n(base) vimos@vimosmu ➜ ~ ipython\r\nPython 3.6.7 | packaged by conda-forge | (default, Nov 6 2019, 16:19:42) \r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 7.11.1 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: load_dataset(\"assin2\")\r\nDownloading builder script: 4.24kB [00:00, 244kB/s]\r\nDownloading metadata: 2.58kB [00:00, 2.19MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset assin2/default (download: 2.02 MiB, generated: 1.21 MiB, post-processed: Unknown size, total: 3.23 MiB) to /home/vimos/.cache/huggingface/datasets/assin2/default/1.0.0/8467f7acbda82f62ab960ca869dc1e96350e0e103a1ef7eaa43bbee530b80061...\r\nDownloading data: 1.51MB [00:00, 102MB/s]\r\nDownloading data: 116kB [00:00, 63.6MB/s]\r\nDownloading data: 493kB [00:00, 95.8MB/s] \r\nDownloading data files: 100%|██████████████████████████████████████████| 3/3 [00:00<00:00, 8.27it/s]\r\n---------------------------------------------------------------------------\r\nExpectedMoreDownloadedFiles Traceback (most recent call last)\r\n<ipython-input-2-b367d1ffd68e> in <module>\r\n----> 1 load_dataset(\"assin2\")\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1694 ignore_verifications=ignore_verifications,\r\n 1695 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1696 use_auth_token=use_auth_token,\r\n 1697 )\r\n 1698\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 604 if not downloaded_from_gcs:\r\n 605 self._download_and_prepare(\r\n--> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 607 )\r\n 608 # Sync info\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 1102\r\n 1103 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n 1105\r\n 1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 675 if verify_infos:\r\n 676 verify_checksums(\r\n--> 677 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 678 )\r\n 679\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 31 return\r\n 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0:\r\n---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\n 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0:\r\n 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))\r\n\r\nExpectedMoreDownloadedFiles: {'https://drive.google.com/u/0/uc?id=1kb7xq6Mb3eaqe9cOAo70BaG9ypwkIqEU&export=download', 
'https://drive.google.com/u/0/uc?id=1J3FpQaHxpM-FDfBUyooh-sZF-B-bM_lU&export=download', 'https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download'}\r\n```", "That's true. Steps to reproduce the bug on Google Colab:\r\n\r\n```\r\ngit clone https://github.com/huggingface/datasets.git\r\ncd datasets\r\npip install -e .\r\npython -c \"from datasets import load_dataset; print(load_dataset('assin2')['train'][0])\"\r\n```\r\n\r\nHowever the dataset will load without any problems if you just install version 2.0.0:\r\n\r\n ```\r\npip install datasets\r\npython -c \"from datasets import load_dataset; print(load_dataset('assin2')['train'][0])\"\r\n```\r\n\r\nAny thoughts @lhoestq ?", "Right indeed ! Let me open a PR to fix this.\r\nThe dataset_infos.json file that stores some metadata about the dataset to download (and is used to verify it was correctly downloaded) hasn't been updated correctly", "Not sure what the status of this is, but personally I am still getting this error, with glue.", "Can you open a new issue if you got an error with glue please ?", "Have posted at #4241" ]
"2022-03-24T10:08:50"
"2022-04-27T14:14:45"
"2022-03-28T13:56:39"
CONTRIBUTOR
null
## Describe the bug Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2). `NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`. Similar to #3952 , #3942 , #3941 , etc. ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) [<ipython-input-13-c664a92ad5e7>](https://localhost:8080/#) in <module>() ----> 1 load_dataset('assin2') 4 frames [/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download'] ``` ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("assin2") ``` ## Expected results Load the dataset. ## Actual results The dataset won't load. ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Google Colab - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4003/timeline
null
completed
null
null
false
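The record above ends with the dataset failing verification even though the download itself succeeds. Until the dataset's metadata (dataset_infos.json) is regenerated upstream, a common stopgap is to skip the verification step. A hedged sketch of that workaround for the `datasets` 2.0-era API, not the actual upstream fix:

```python
# Workaround sketch only: skip the checksum / expected-files verification so the
# freshly downloaded files are used as-is while the metadata is out of date.
from datasets import load_dataset

dataset = load_dataset("assin2", ignore_verifications=True)
```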
https://api.github.com/repos/huggingface/datasets/issues/4002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4002/comments
https://api.github.com/repos/huggingface/datasets/issues/4002/events
https://github.com/huggingface/datasets/pull/4002
1,179,263,787
PR_kwDODunzps408Cfp
4,002
Support streaming conll2012_ontonotesv5 dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-24T09:49:56"
"2022-03-24T10:53:41"
"2022-03-24T10:48:47"
MEMBER
null
Use another URL with a single ZIP file (instead of the previous one with a ZIP file inside another ZIP file).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4002/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4002", "html_url": "https://github.com/huggingface/datasets/pull/4002", "diff_url": "https://github.com/huggingface/datasets/pull/4002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4002.patch", "merged_at": "2022-03-24T10:48:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/4001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4001/comments
https://api.github.com/repos/huggingface/datasets/issues/4001/events
https://github.com/huggingface/datasets/issues/4001
1,179,231,418
I_kwDODunzps5GSaS6
4,001
How to use generate this multitask dataset for SQUAD? I am getting a value error.
{ "login": "gsk1692", "id": 1963097, "node_id": "MDQ6VXNlcjE5NjMwOTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1963097?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gsk1692", "html_url": "https://github.com/gsk1692", "followers_url": "https://api.github.com/users/gsk1692/followers", "following_url": "https://api.github.com/users/gsk1692/following{/other_user}", "gists_url": "https://api.github.com/users/gsk1692/gists{/gist_id}", "starred_url": "https://api.github.com/users/gsk1692/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gsk1692/subscriptions", "organizations_url": "https://api.github.com/users/gsk1692/orgs", "repos_url": "https://api.github.com/users/gsk1692/repos", "events_url": "https://api.github.com/users/gsk1692/events{/privacy}", "received_events_url": "https://api.github.com/users/gsk1692/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Replacing `nlp.<obj>` with `datasets.<obj>` in the script should fix the problem. `nlp` has been renamed to `datasets` more than a year ago, so please use `datasets` instead to avoid weird issues.", "Thank You! Was able to solve with the help of this.", "But I request you to please fix the same in the dataset hub explorer as well...", "May I ask how to get this dataset?" ]
"2022-03-24T09:21:51"
"2022-03-26T09:48:21"
"2022-03-26T03:35:43"
NONE
null
## Dataset viewer issue for 'squad_multitask*' **Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask *short description of the issue* I am trying to generate the multitask dataset for the SQuAD dataset. However, it gives the error in the dataset viewer as well as on my local machine. I tried the command: dataset = load_dataset("vershasaxena91/squad_multitask", 'highlight_qg_format') Error: Status code: 400 Exception: TypeError Message: argument of type 'Value' is not iterable Kindly advise.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4001/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4000/comments
https://api.github.com/repos/huggingface/datasets/issues/4000/events
https://github.com/huggingface/datasets/issues/4000
1,178,844,616
I_kwDODunzps5GQ73I
4,000
load_dataset error: sndfile library not found
{ "login": "i-am-neo", "id": 102043285, "node_id": "U_kgDOBhUOlQ", "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/i-am-neo", "html_url": "https://github.com/i-am-neo", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "repos_url": "https://api.github.com/users/i-am-neo/repos", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @i-am-neo,\r\n\r\nThe audio support is an extra feature of `datasets` and therefore it must be installed as an additional optional dependency:\r\n```shell\r\npip install datasets[audio]\r\n```\r\nAdditionally, for specific MP3 support (which is not the case for AMI dataset, that contains WAV audio files), there is another third-party dependency on `torchaudio`.\r\n\r\nYou have all the information in our docs: https://huggingface.co/docs/datasets/audio_process#installation", "Thanks @albertvillanova . Unfortunately the error persists after installing ```datasets[audio]```. Can you direct towards a solution?\r\n\r\n```\r\npip3 install datasets[audio]\r\n```\r\n### log\r\nRequirement already satisfied: datasets[audio] in ./.virtualenvs/hubert/lib/python3.7/site-packages (1.18.3)\r\nRequirement already satisfied: numpy>=1.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.21.5)\r\nRequirement already satisfied: xxhash in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.0.0)\r\nRequirement already satisfied: fsspec[http]>=2021.05.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2022.2.0)\r\nRequirement already satisfied: dill in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.3.4)\r\nRequirement already satisfied: pandas in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (1.3.5)\r\nRequirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.4.0)\r\nRequirement already satisfied: packaging in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (21.3)\r\nRequirement already satisfied: multiprocess in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.70.12.2)\r\nRequirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (7.0.0)\r\nRequirement already satisfied: tqdm>=4.62.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.63.1)\r\nRequirement already satisfied: aiohttp in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (3.8.1)\r\nRequirement already satisfied: importlib-metadata in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (4.11.3)\r\nRequirement already satisfied: requests>=2.19.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (2.27.1)\r\nRequirement already satisfied: librosa in ./.virtualenvs/hubert/lib/python3.7/site-packages (from datasets[audio]) (0.9.1)\r\nRequirement already satisfied: pyyaml in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (6.0)\r\nRequirement already satisfied: typing-extensions>=3.7.4.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (4.1.1)\r\nRequirement already satisfied: filelock in ./.virtualenvs/hubert/lib/python3.7/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets[audio]) (3.6.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from packaging->datasets[audio]) (3.0.7)\r\nRequirement already satisfied: idna<4,>=2.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (3.3)\r\nRequirement already satisfied: certifi>=2017.4.17 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
requests>=2.19.0->datasets[audio]) (2021.10.8)\r\nRequirement already satisfied: charset-normalizer~=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (2.0.12)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->datasets[audio]) (1.26.9)\r\nRequirement already satisfied: attrs>=17.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (21.4.0)\r\nRequirement already satisfied: frozenlist>=1.1.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.3.0)\r\nRequirement already satisfied: aiosignal>=1.1.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.2.0)\r\nRequirement already satisfied: yarl<2.0,>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (1.7.2)\r\nRequirement already satisfied: asynctest==0.13.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (0.13.0)\r\nRequirement already satisfied: multidict<7.0,>=4.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (6.0.2)\r\nRequirement already satisfied: async-timeout<5.0,>=4.0.0a3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from aiohttp->datasets[audio]) (4.0.2)\r\nRequirement already satisfied: zipp>=0.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from importlib-metadata->datasets[audio]) (3.7.0)\r\nRequirement already satisfied: decorator>=4.0.10 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (5.1.1)\r\nRequirement already satisfied: soundfile>=0.10.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.55.1)\r\nRequirement already satisfied: pooch>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.6.0)\r\nRequirement already satisfied: resampy>=0.2.2 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (0.2.2)\r\nRequirement already satisfied: audioread>=2.1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.1.0)\r\nRequirement already satisfied: scipy>=1.2.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.7.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from librosa->datasets[audio]) (1.0.2)\r\nRequirement already satisfied: python-dateutil>=2.7.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2.8.2)\r\nRequirement already satisfied: pytz>=2017.3 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from pandas->datasets[audio]) (2022.1)\r\nRequirement already satisfied: setuptools in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa->datasets[audio]) (0.38.0)\r\nRequirement already satisfied: appdirs>=1.3.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from 
pooch>=1.0->librosa->datasets[audio]) (1.4.4)\r\nRequirement already satisfied: six>=1.5 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas->datasets[audio]) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa->datasets[audio]) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in ./.virtualenvs/hubert/lib/python3.7/site-packages (from soundfile>=0.10.2->librosa->datasets[audio]) (1.15.0)\r\nRequirement already satisfied: pycparser in ./.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa->datasets[audio]) (2.21)\r\n\r\n### reload\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### log\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|█████████████████████████████████████████████████████████| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. 
\r\nOriginal error:\r\nsndfile library not found\r\n\r\n### just to double-check as per your docs\r\n```\r\npip3 install librosa torchaudio\r\n```\r\n\r\n### logs\r\nRequirement already satisfied: librosa in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.9.1)\r\nRequirement already satisfied: torchaudio in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (0.11.0+cu113)\r\nRequirement already satisfied: audioread>=2.1.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (2.1.9)\r\nRequirement already satisfied: joblib>=0.14 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.1.0)\r\nRequirement already satisfied: packaging>=20.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (21.3)\r\nRequirement already satisfied: scikit-learn>=0.19.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.0.2)\r\nRequirement already satisfied: scipy>=1.2.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.7.3)\r\nRequirement already satisfied: decorator>=4.0.10 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (5.1.1)\r\nRequirement already satisfied: resampy>=0.2.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.2.2)\r\nRequirement already satisfied: pooch>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.6.0)\r\nRequirement already satisfied: numpy>=1.17.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (1.21.5)\r\nRequirement already satisfied: soundfile>=0.10.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.10.3.post1)\r\nRequirement already satisfied: numba>=0.45.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from librosa) (0.55.1)\r\nRequirement already satisfied: torch==1.11.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torchaudio) (1.11.0+cu113)\r\nRequirement already satisfied: typing-extensions in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from torch==1.11.0->torchaudio) (4.1.1)\r\nRequirement already satisfied: setuptools in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (60.10.0)\r\nRequirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from numba>=0.45.1->librosa) (0.38.0)\r\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from packaging>=20.0->librosa) (3.0.7)\r\nRequirement already satisfied: requests>=2.19.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (2.27.1)\r\nRequirement already satisfied: appdirs>=1.3.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from pooch>=1.0->librosa) (1.4.4)\r\nRequirement already satisfied: six>=1.3 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from resampy>=0.2.2->librosa) (1.16.0)\r\nRequirement already satisfied: threadpoolctl>=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from scikit-learn>=0.19.1->librosa) (3.1.0)\r\nRequirement already satisfied: cffi>=1.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from soundfile>=0.10.2->librosa) (1.15.0)\r\nRequirement already satisfied: pycparser in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from cffi>=1.0->soundfile>=0.10.2->librosa) (2.21)\r\nRequirement 
already satisfied: charset-normalizer~=2.0.0 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2.0.12)\r\nRequirement already satisfied: certifi>=2017.4.17 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (2021.10.8)\r\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (1.26.9)\r\nRequirement already satisfied: idna<4,>=2.5 in /home/neo/.virtualenvs/hubert/lib/python3.7/site-packages (from requests>=2.19.0->pooch>=1.0->librosa) (3.3)\r\n\r\n### try loading again\r\n```\r\npython3 -c \"from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])\"\r\n```\r\n\r\n### same error\r\nDownloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e...\r\nAMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. \r\n100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 33542.59it/s]\r\n100%|█████████████████████████████████████████████████████████| 136/136 [00:06<00:00, 22.28it/s]\r\n100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 21558.39it/s]\r\n100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2996.41it/s]\r\n100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 23431.87it/s]\r\n100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2697.52it/s]\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py\", line 1707, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 595, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py\", line 690, in _download_and_prepare\r\n ) from None\r\nOSError: Cannot find data file. \r\nOriginal error:\r\nsndfile library not found\r\n", "Hi @i-am-neo, thanks again for your detailed report.\r\n\r\nOur `datasets` library support for audio relies on a third-party Python library called `librosa`, which is installed when you do:\r\n```shell\r\npip install datasets[audio]\r\n```\r\n\r\nHowever, the `librosa` library has a dependency on `soundfile`; and `soundfile` depends on a non-Python package called `sndfile`. \r\n\r\nOn Linux (which is your case), this must be installed manually using your operating system package manager, for example:\r\n```shell\r\nsudo apt-get install libsndfile1\r\n```\r\n\r\nPlease, let me know if this works and if so, I will update our docs with all this information.", "@albertvillanova thanks, all good. The key is ```libsndfile1``` - it may help others to note that in your docs. I had installed libsndfile previously." ]
"2022-03-24T01:52:32"
"2022-03-25T17:53:33"
"2022-03-25T17:53:33"
NONE
null
## Describe the bug Can't load ami dataset ## Steps to reproduce the bug ``` python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ``` ## Expected results ## Actual results Downloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e... AMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. 100%|██████████████████████████████████████████████████████| 136/136 [00:00<00:00, 36004.88it/s] 100%|█████████████████████████████████████████████████████████| 136/136 [00:01<00:00, 79.10it/s] 100%|████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 25343.23it/s] 100%|█████████████████████████████████████████████████████████| 18/18 [00:00<00:00, 2874.78it/s] 100%|████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 27950.38it/s] 100%|█████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 2892.25it/s] Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4000/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3999/comments
https://api.github.com/repos/huggingface/datasets/issues/3999/events
https://github.com/huggingface/datasets/pull/3999
1,178,685,280
PR_kwDODunzps406WN_
3,999
Docs maintenance
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
"2022-03-23T21:27:33"
"2022-03-30T17:01:45"
"2022-03-30T16:56:38"
MEMBER
null
This PR links some functions to the API reference. These functions previously only showed up in code format because the path to the actual API was incorrect.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3999/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3999", "html_url": "https://github.com/huggingface/datasets/pull/3999", "diff_url": "https://github.com/huggingface/datasets/pull/3999.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3999.patch", "merged_at": "2022-03-30T16:56:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/3998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3998/comments
https://api.github.com/repos/huggingface/datasets/issues/3998/events
https://github.com/huggingface/datasets/pull/3998
1,178,631,986
PR_kwDODunzps406KyA
3,998
Fix Audio.encode_example() when writing an array
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-23T20:32:13"
"2022-03-29T14:21:44"
"2022-03-29T14:16:13"
CONTRIBUTOR
null
Closes #3996
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3998/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3998", "html_url": "https://github.com/huggingface/datasets/pull/3998", "diff_url": "https://github.com/huggingface/datasets/pull/3998.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3998.patch", "merged_at": "2022-03-29T14:16:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/3997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3997/comments
https://api.github.com/repos/huggingface/datasets/issues/3997/events
https://github.com/huggingface/datasets/pull/3997
1,178,566,568
PR_kwDODunzps4058xr
3,997
Sync Features dictionaries
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-23T19:23:51"
"2022-04-13T15:52:27"
"2022-04-13T15:46:19"
CONTRIBUTOR
null
This PR adds a wrapper to the `Features` class to keep the secondary dict, `_column_requires_decoding`, aligned with the main dict (as discussed in https://github.com/huggingface/datasets/pull/3723#discussion_r806912731). A more elegant approach would be to subclass `UserDict` and override `__setitem__` and `__delitem__`, but this PR doesn't implement it for the following reasons: * it requires replacing all occurrences of `isinstance(obj, dict)` with `isinstance(obj, Mapping)`, which is five times slower than `isinstance(obj, dict)` on my machine, in `features.py` * it is a breaking change, i.e., `isinstance(Features(...), dict)` would return `False` after it * IMO, it makes sense to be consistent in the user-facing API and subclass either `dict` or `UserDict`. The problem with the latter is that it can't be used for `DatasetDict` because `DatasetDict` exposes the `data` property, which is also used by `UserDict`, so this would result in a collision.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3997/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3997", "html_url": "https://github.com/huggingface/datasets/pull/3997", "diff_url": "https://github.com/huggingface/datasets/pull/3997.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3997.patch", "merged_at": "2022-04-13T15:46:19" }
true
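The pull request above syncs `_column_requires_decoding` by wrapping the mutation points of `Features`, and explains why it avoids subclassing `UserDict`. As a point of comparison, here is a minimal sketch of the rejected override-based idea, with hypothetical names (`SyncedFeatures`, `_requires_decoding`) and no claim to match the real implementation:

```python
# Illustration only: keep a secondary mapping aligned with the main dict by
# routing mutations through __setitem__ / __delitem__.
class SyncedFeatures(dict):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # rebuild the secondary dict, since dict construction bypasses __setitem__
        self._requires_decoding = {
            key: hasattr(value, "decode_example") for key, value in self.items()
        }

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self._requires_decoding[key] = hasattr(value, "decode_example")

    def __delitem__(self, key):
        super().__delitem__(key)
        self._requires_decoding.pop(key, None)
```

Note that a plain `dict` subclass still does not route `update()` or the constructor through `__setitem__`, which is one reason `UserDict` (or explicit wrapping, as in the PR) is the more robust choice.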
https://api.github.com/repos/huggingface/datasets/issues/3996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3996/comments
https://api.github.com/repos/huggingface/datasets/issues/3996/events
https://github.com/huggingface/datasets/issues/3996
1,178,415,905
I_kwDODunzps5GPTMh
3,996
Audio.encode_example() throws an error when writing example from array
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "Good catch ! Yes I think passing `format=\"wav\"` is the right thing to do", "Thanks @polinaeterna for reporting this issue.\r\n\r\nIn relation to the decoding of MP3 audio files without torchaudio, I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio. But yes, nice to give an alternative to non-torchaudio users (with a big warning on performance).", "> I remember Patrick made some tests and these had quite bad performance. That is why he proposed to support MP3 files only with torchaudio.\r\n\r\nYeah, I know, but as far as I understand, some users just categorically don't want to have torchaudio in their environment. Anyway, it's just a more or less random example, they can use any library they like following the same logic (I'm just not a big expert in decoding utils so if you can give me some presentation / resources about that I would really appreciate it 🤗)" ]
"2022-03-23T17:11:47"
"2022-03-29T14:16:13"
"2022-03-29T14:16:13"
CONTRIBUTOR
null
## Describe the bug When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error: `TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7f4218c0db30>` ## Steps to reproduce the bug ### Sample code to reproduce the bug ```python # download sample file !wget https://huggingface.co/datasets/polinaeterna/test_encode_example/resolve/main/common_voice_vi_21824030.mp3 arr, sr = librosa.load("common_voice_vi_21824030.mp3") Audio().encode_example({ "path": "common_voice_vi_21824030.mp3", "array": arr, "sampling_rate":sr }) ``` ## Expected results An encoded example (`{"bytes": b'....', "path": 'path'}`) ## Actual results ```python TypeError Traceback (most recent call last) Input In [3], in <module> 1 arr, sr = librosa.load("common_voice_vi_21824030.mp3") ----> 3 Audio().encode_example({ 4 "path": "common_voice_vi_21824030.mp3", 5 "array": arr, 6 "sampling_rate":sr 7 }) File ~/workspace/datasets/src/datasets/features/audio.py:75, in Audio.encode_example(self, value) 73 elif isinstance(value, dict) and "array" in value: 74 buffer = BytesIO() ---> 75 sf.write(buffer, value["array"], value["sampling_rate"]) 76 return {"bytes": buffer.getvalue(), "path": value.get("path")} 77 elif value.get("bytes") is not None or value.get("path") is not None: File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:314, in write(file, data, samplerate, subtype, endian, format, closefd) 312 else: 313 channels = data.shape[1] --> 314 with SoundFile(file, 'w', samplerate, channels, 315 subtype, endian, format, closefd) as f: 316 f.write(data) File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:627, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 625 mode_int = _check_mode(mode) 626 self._mode = mode --> 627 self._info = _create_info_struct(file, mode, samplerate, channels, 628 format, subtype, endian) 629 self._file = self._open(file, mode_int, closefd) 630 if set(mode).issuperset('r+') and self.seekable(): 631 # Move write position to 0 (like in Python file objects) File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1416, in _create_info_struct(file, mode, samplerate, channels, format, subtype, endian) 1414 original_format = format 1415 if format is None: -> 1416 format = _get_format_from_filename(file, mode) 1417 assert isinstance(format, (_unicode, str)) 1418 else: File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1457, in _get_format_from_filename(file, mode) 1455 pass 1456 if format.upper() not in _formats and 'r' not in mode: -> 1457 raise TypeError("No format specified and unable to get format from " 1458 "file extension: {0!r}".format(file)) 1459 return format TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7fd8daf88180> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: datasets master - Platform: Ubuntu 20.04 - Python version: python 3.8.12 - PyArrow version: 6.0.1 ## Solution I guess we just need to add `format` arg in [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L75) like this: ```python sf.write(buffer, value["array"], value["sampling_rate"], format="wav") ``` BTW discovered this when trying to decode audio in mp3 format without torchaudio (would be useful for TensorFlow users), like this: ```python from datasets import load_dataset, Features, Audio ds = load_dataset("common_voice", "vi", split="test") ds = ds.remove_columns("audio") ds.select(range(3)) # 3 samples just for testing def load_mp3_with_librosa(example): arr, sr = librosa.load(example["path"]) example["audio"] = { "path": example["path"], "array": arr, "sampling_rate": sr } return example updated_dataset = ds.map(lambda example: load_mp3_with_librosa(example), features=Features( {"audio": Audio(decode=False)} )) ``` @lhoestq @mariosasko @albertvillanova am I right in my logic? do we agree that we can set wav as the format? 🤗
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3996/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3996/timeline
null
completed
null
null
false
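The crux of the issue above is that `soundfile` cannot infer an output format from an in-memory buffer, so it must be told the format explicitly. A small self-contained sketch with placeholder audio data (one second of silence), independent of the actual `datasets` code path:

```python
from io import BytesIO

import numpy as np
import soundfile as sf

arr = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
buffer = BytesIO()
sf.write(buffer, arr, 16000, format="wav")  # explicit format avoids the TypeError
wav_bytes = buffer.getvalue()  # bytes that can be stored next to the original path
```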
https://api.github.com/repos/huggingface/datasets/issues/3995
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3995/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3995/comments
https://api.github.com/repos/huggingface/datasets/issues/3995/events
https://github.com/huggingface/datasets/pull/3995
1,178,232,623
PR_kwDODunzps404054
3,995
Close `PIL.Image` file handler in `Image.decode_example`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-23T14:51:48"
"2022-03-23T18:24:52"
"2022-03-23T18:19:27"
CONTRIBUTOR
null
Closes the file handler of the PIL image object in `Image.decode_example` to avoid the `Too many open files` error. To pass [the image equality checks](https://app.circleci.com/pipelines/github/huggingface/datasets/10774/workflows/d56670e6-16bb-4c64-b601-a152c5acf5ed/jobs/65825) in CI, `Image.decode_example` calls `image.load()` regardless of how the image object is created (not only for the `PIL.Image.open(local_path)` case). This is needed because `load()` sets the `readonly` attribute of a `PIL.Image` object to 0 (it's 1 after `PIL.Image.open(file_like)`), and in the older PIL versions (only fixed on main), that attribute is considered in `PIL.Image.__eq__`. More info can be found here: https://github.com/python-pillow/Pillow/issues/5926. Fix #3985
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3995/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3995", "html_url": "https://github.com/huggingface/datasets/pull/3995", "diff_url": "https://github.com/huggingface/datasets/pull/3995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3995.patch", "merged_at": "2022-03-23T18:19:26" }
true
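The pattern the pull request above relies on, forcing `PIL` to read the pixel data so the underlying file handle can be released early, looks roughly like this; the path is a placeholder for any local image file:

```python
from PIL import Image

path = "image.png"  # placeholder path
with open(path, "rb") as f:
    image = Image.open(f)
    image.load()  # read the pixel data now, so the open handle is no longer needed
# f is closed when the with-block exits, yet `image` remains fully usable
```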
https://api.github.com/repos/huggingface/datasets/issues/3994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3994/comments
https://api.github.com/repos/huggingface/datasets/issues/3994/events
https://github.com/huggingface/datasets/pull/3994
1,178,211,138
PR_kwDODunzps404wWu
3,994
Change audio column from string path to Audio feature in ASR task
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-23T14:34:52"
"2022-03-23T15:43:43"
"2022-03-23T15:43:43"
CONTRIBUTOR
null
Will fix #3990
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3994/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3994", "html_url": "https://github.com/huggingface/datasets/pull/3994", "diff_url": "https://github.com/huggingface/datasets/pull/3994.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3994.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3992/comments
https://api.github.com/repos/huggingface/datasets/issues/3992/events
https://github.com/huggingface/datasets/issues/3992
1,177,946,153
I_kwDODunzps5GNggp
3,992
Image column is not decoded in map when using with with_transform
{ "login": "phihung", "id": 5902432, "node_id": "MDQ6VXNlcjU5MDI0MzI=", "avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phihung", "html_url": "https://github.com/phihung", "followers_url": "https://api.github.com/users/phihung/followers", "following_url": "https://api.github.com/users/phihung/following{/other_user}", "gists_url": "https://api.github.com/users/phihung/gists{/gist_id}", "starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phihung/subscriptions", "organizations_url": "https://api.github.com/users/phihung/orgs", "repos_url": "https://api.github.com/users/phihung/repos", "events_url": "https://api.github.com/users/phihung/events{/privacy}", "received_events_url": "https://api.github.com/users/phihung/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi! This behavior stems from this line: https://github.com/huggingface/datasets/blob/799b817d97590ddc97cbd38d07469403e030de8c/src/datasets/arrow_dataset.py#L1919\r\nBasically, the `Image`/`Audio` columns are decoded only if the `format_type` attribute is `None` (`set_format`/`with_format` and `set_transform`/`with_transform` assign a non-`None` value to it) and the `input_columns` param is not specified (see https://github.com/huggingface/datasets/issues/3756). We will remove these limitations soon.\r\n\r\n\r\n\r\n" ]
"2022-03-23T10:51:13"
"2022-12-13T16:59:06"
"2022-12-13T16:59:06"
NONE
null
## Describe the bug Image column is not _decoded_ in **map** when using with `with_transform` ## Steps to reproduce the bug ```python from datasets import Image, Dataset def add_C(batch): batch["C"] = batch["A"] return batch ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image()) ds = ds.with_transform(lambda x: x) # <= This line causes the problem ds = ds.map(add_C, batched=True) print(ds[0]) ``` ## Expected results ``` {'C': <PIL.PngImagePlugin.PngImageFile>, ...} ``` ## Actual results ``` {'C': {'bytes': None, 'path': 'image.png'}, ...} ``` If we remove the `with_transform` line, we get the expected result. ## Environment info - `datasets` version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3992/timeline
null
completed
null
null
false
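Given the explanation in the comments above (decoding is skipped once a format or transform is set, or when `input_columns` is passed), a hedged workaround is simply to run `map` before attaching the transform. The file name below is a placeholder that must exist locally:

```python
from datasets import Dataset, Image

ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image())
ds = ds.map(lambda batch: {**batch, "C": batch["A"]}, batched=True)  # "A" is decoded here
ds = ds.with_transform(lambda x: x)  # attach the transform only after mapping
```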
https://api.github.com/repos/huggingface/datasets/issues/3990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3990/comments
https://api.github.com/repos/huggingface/datasets/issues/3990/events
https://github.com/huggingface/datasets/issues/3990
1,176,976,247
I_kwDODunzps5GJzt3
3,990
Improve AutomaticSpeechRecognition task template
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "There is an open PR to do that: #3364. I just haven't had time to finish it... ", "> There is an open PR to do that: #3364. I just haven't had time to finish it...\r\n\r\n😬 thanks..." ]
"2022-03-22T15:41:08"
"2022-03-23T17:12:40"
"2022-03-23T17:12:40"
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** [AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated as it uses a path to the audio file as the audio column instead of the Audio feature itself (I guess that's because the Audio feature didn't exist at the time this template was created). **Describe the solution you'd like** Change the audio column from a string path to an Audio feature.
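A rough sketch of what the requested template could look like, assuming illustrative column names (`audio`, `transcription`); this is not the actual template shipped in `datasets`.

```python
from dataclasses import dataclass
from typing import ClassVar

from datasets import Audio, Features, Value

@dataclass(frozen=True)
class AutomaticSpeechRecognition:
    # Proposed: the input column is an Audio feature instead of a string path.
    task: str = "automatic-speech-recognition"
    audio_column: str = "audio"
    transcription_column: str = "transcription"
    input_schema: ClassVar[Features] = Features({"audio": Audio()})
    label_schema: ClassVar[Features] = Features({"transcription": Value("string")})
```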
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3990/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3989/comments
https://api.github.com/repos/huggingface/datasets/issues/3989/events
https://github.com/huggingface/datasets/pull/3989
1,176,955,078
PR_kwDODunzps400l1S
3,989
Remove old wikipedia leftovers
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-22T15:25:46"
"2022-03-31T15:35:26"
"2022-03-31T15:30:16"
MEMBER
null
After updating the Wikipedia dataset, remove old Wikipedia leftovers from the docs.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3989/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3989", "html_url": "https://github.com/huggingface/datasets/pull/3989", "diff_url": "https://github.com/huggingface/datasets/pull/3989.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3989.patch", "merged_at": "2022-03-31T15:30:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/3988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3988/comments
https://api.github.com/repos/huggingface/datasets/issues/3988/events
https://github.com/huggingface/datasets/pull/3988
1,176,858,540
PR_kwDODunzps400RGb
3,988
More consistent references in docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-22T14:18:41"
"2022-03-22T17:06:32"
"2022-03-22T16:50:44"
CONTRIBUTOR
null
Aligns the internal references with the style discussed in https://github.com/huggingface/datasets/pull/3980. cc @stevhliu
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3988/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3988/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3988", "html_url": "https://github.com/huggingface/datasets/pull/3988", "diff_url": "https://github.com/huggingface/datasets/pull/3988.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3988.patch", "merged_at": "2022-03-22T16:50:43" }
true
https://api.github.com/repos/huggingface/datasets/issues/3987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3987/comments
https://api.github.com/repos/huggingface/datasets/issues/3987/events
https://github.com/huggingface/datasets/pull/3987
1,176,481,659
PR_kwDODunzps40zAxF
3,987
Fix Faiss custom_index device
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-22T09:11:24"
"2022-03-24T12:18:59"
"2022-03-24T12:14:12"
MEMBER
null
Currently, if both `custom_index` and `device` are passed to `FaissIndex`, `device` is silently ignored. This PR fixes this by raising a ValueError if both arguments are passed. Alternatively, the `custom_index` could be transferred to the target `device`.
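A small sketch of the argument check described above; the function name is made up and this is not the exact code merged in the PR.

```python
def check_faiss_index_args(device=None, custom_index=None):
    # Fail loudly instead of silently ignoring `device` when a prebuilt index is passed.
    if custom_index is not None and device is not None:
        raise ValueError(
            "Cannot pass both `custom_index` and `device`: "
            "either drop `device`, or move the custom index to the target device yourself."
        )

check_faiss_index_args(device=0)               # fine
check_faiss_index_args(custom_index=object())  # fine
# check_faiss_index_args(device=0, custom_index=object())  # would raise ValueError
```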
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3987/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3987", "html_url": "https://github.com/huggingface/datasets/pull/3987", "diff_url": "https://github.com/huggingface/datasets/pull/3987.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3987.patch", "merged_at": "2022-03-24T12:14:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/3985
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3985/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3985/comments
https://api.github.com/repos/huggingface/datasets/issues/3985/events
https://github.com/huggingface/datasets/issues/3985
1,175,982,937
I_kwDODunzps5GGBNZ
3,985
[image feature] Too many files open error when image feature is returned as a path
{ "login": "apsdehal", "id": 3616806, "node_id": "MDQ6VXNlcjM2MTY4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apsdehal", "html_url": "https://github.com/apsdehal", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "repos_url": "https://api.github.com/users/apsdehal/repos", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
"2022-03-21T21:54:05"
"2022-03-23T18:19:27"
"2022-03-23T18:19:27"
MEMBER
null
## Describe the bug PR in context: #3967. If I load the dataset in this PR (TextVQA) and do a simple list comprehension on the dataset, I get a `Too many open files` error. This happens because of the way we load the image feature when a str path is returned from `_generate_examples`. Specifically, at https://github.com/huggingface/datasets/blob/508eb4ab5d52f590baa677b4f64b1cc069139f7b/src/datasets/features/image.py#L110, we open a file handle to the image but never close it. In my understanding, this is what causes the issue. ## Steps to reproduce the bug Pull the PR locally and run the following code ```python from datasets import load_dataset dataset = load_dataset("./datasets/textvqa")["train"] data = [item for item in dataset] # Error happens ``` ## Expected results The list comprehension should run smoothly ## Actual results `Too many open files` error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.1.dev0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.10.0 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
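A sketch of the kind of fix suggested by this report, assuming a hypothetical helper name: read the image inside a context manager so the file handle is closed immediately instead of being left open. This is not the code that was eventually merged.

```python
from PIL import Image as PILImage

def decode_image_from_path(path: str):
    # Open, force-load the pixel data, then let the handle close right away.
    with open(path, "rb") as f:
        image = PILImage.open(f)
        image.load()
    return image
```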
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3985/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3983/comments
https://api.github.com/repos/huggingface/datasets/issues/3983/events
https://github.com/huggingface/datasets/issues/3983
1,175,759,412
I_kwDODunzps5GFKo0
3,983
Infinitely attempting lock
{ "login": "jyrr", "id": 11869652, "node_id": "MDQ6VXNlcjExODY5NjUy", "avatar_url": "https://avatars.githubusercontent.com/u/11869652?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jyrr", "html_url": "https://github.com/jyrr", "followers_url": "https://api.github.com/users/jyrr/followers", "following_url": "https://api.github.com/users/jyrr/following{/other_user}", "gists_url": "https://api.github.com/users/jyrr/gists{/gist_id}", "starred_url": "https://api.github.com/users/jyrr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jyrr/subscriptions", "organizations_url": "https://api.github.com/users/jyrr/orgs", "repos_url": "https://api.github.com/users/jyrr/repos", "events_url": "https://api.github.com/users/jyrr/events{/privacy}", "received_events_url": "https://api.github.com/users/jyrr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting. We're using `py-filelock` as our locking mechanism.\r\n\r\nCan you try deleting the .lock file mentioned in the logs and try again ? Make sure that no other process is generating the `cnn_dailymail` dataset.\r\n\r\nIf it doesn't work, could you try to set up a lock using the latest version of `py-filelock` and see if it works ?\r\n\r\n```\r\npip install filelock\r\n```\r\nhere is a code example from the `py-filelock` documentation that you can try:\r\n```python\r\nfrom filelock import Timeout, FileLock\r\n\r\nlock = FileLock(\"high_ground.txt.lock\")\r\nwith lock:\r\n with open(\"high_ground.txt\", \"a\") as f:\r\n f.write(\"You were the chosen one.\")\r\n```" ]
"2022-03-21T18:11:57"
"2022-05-06T16:12:18"
"2022-05-06T16:12:18"
NONE
null
I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`. Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS). ``` %sh python /dbfs/transformers/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /dbfs/transformers/tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --log_level debug \ --cache_dir /dbfs/transformers/cache ``` All goes well until acquiring a lock -- ``` 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... ``` and so on. I imagine this has to do with DBFS -- is there a way to tackle this?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3983/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3982
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3982/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3982/comments
https://api.github.com/repos/huggingface/datasets/issues/3982/events
https://github.com/huggingface/datasets/pull/3982
1,175,478,099
PR_kwDODunzps40vrR_
3,982
Exclude Google Drive tests of the CI
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-21T14:34:16"
"2022-03-31T16:38:02"
"2022-03-21T14:51:35"
MEMBER
null
These tests make the CI spam the Google Drive API, so the CI now gets banned by Google Drive very often. I think we can just skip these tests in the CI for now. In the future we could have a CI job that runs only once a day or once a week for such cases. cc @albertvillanova @mariosasko @severo Close #3415 ![image](https://user-images.githubusercontent.com/42851186/159283608-fdeca1ac-b57f-4fa3-bf09-6fa5361c494f.png)
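One possible way to express such an exclusion with pytest, assuming a `CI` environment variable and an illustrative test name; this is a sketch, not the actual change in this PR.

```python
import os
import pytest

skip_on_ci = pytest.mark.skipif(
    os.environ.get("CI") == "true",
    reason="skipped on CI to avoid hammering the Google Drive API",
)

@skip_on_ci
def test_google_drive_hosted_dataset():
    ...
```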
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3982/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3982/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3982", "html_url": "https://github.com/huggingface/datasets/pull/3982", "diff_url": "https://github.com/huggingface/datasets/pull/3982.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3982.patch", "merged_at": "2022-03-21T14:51:35" }
true
https://api.github.com/repos/huggingface/datasets/issues/3981
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3981/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3981/comments
https://api.github.com/repos/huggingface/datasets/issues/3981/events
https://github.com/huggingface/datasets/pull/3981
1,175,423,517
PR_kwDODunzps40vfra
3,981
Add TER metric card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-21T13:54:36"
"2022-03-29T13:57:11"
"2022-03-29T13:51:40"
CONTRIBUTOR
null
Add TER metric card This card is still missing content for the following sections: - **Limitations & Biases** - **Values from Papers** If anyone has any ideas for either of the above, feel free to either add them or point me to them and I'll add them!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3981/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3981", "html_url": "https://github.com/huggingface/datasets/pull/3981", "diff_url": "https://github.com/huggingface/datasets/pull/3981.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3981.patch", "merged_at": "2022-03-29T13:51:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/3980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3980/comments
https://api.github.com/repos/huggingface/datasets/issues/3980/events
https://github.com/huggingface/datasets/pull/3980
1,175,412,905
PR_kwDODunzps40vdcH
3,980
Add tip on how to speed up loading with ImageFolder
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-21T13:45:58"
"2022-03-22T13:39:45"
"2022-03-22T13:34:56"
CONTRIBUTOR
null
This PR does two things: * adds a tip on how to speed up loading of a large number of files with ImageFolder (motivated by [this issue](https://github.com/huggingface/datasets/issues/3960)) * replaces the current references to the `Dataset` methods in the Image Processing doc with their fully qualified counterparts (to align it with the Audio Processing doc) cc @stevhliu
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3980/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3980", "html_url": "https://github.com/huggingface/datasets/pull/3980", "diff_url": "https://github.com/huggingface/datasets/pull/3980.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3980.patch", "merged_at": "2022-03-22T13:34:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/3979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3979/comments
https://api.github.com/repos/huggingface/datasets/issues/3979/events
https://github.com/huggingface/datasets/pull/3979
1,175,258,969
PR_kwDODunzps40u8NY
3,979
Fix google drive streaming for small files
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-21T11:38:46"
"2022-03-24T16:59:11"
"2022-03-21T14:25:58"
MEMBER
null
Google Drive made another change recently, following #3787 and #3843. In particular, Google Drive now returns 403 for GET requests with `confirm=t` when a file doesn't have a virus warning message. I fixed this by passing `confirm=t` if and only if there is one (i.e. when the status code of the HEAD request is 200).
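A rough sketch mirroring the described behavior (add `confirm=t` only when the HEAD request suggests a virus-warning page); the helper name and exact status-code check are assumptions, not the merged implementation.

```python
import requests

def maybe_add_confirm_token(url: str) -> str:
    head = requests.head(url, allow_redirects=True, timeout=10)
    if head.status_code == 200:  # per the description: warning page present
        sep = "&" if "?" in url else "?"
        return f"{url}{sep}confirm=t"
    return url
```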
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3979/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3979", "html_url": "https://github.com/huggingface/datasets/pull/3979", "diff_url": "https://github.com/huggingface/datasets/pull/3979.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3979.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3977/comments
https://api.github.com/repos/huggingface/datasets/issues/3977/events
https://github.com/huggingface/datasets/issues/3977
1,175,049,927
I_kwDODunzps5GCdbH
3,977
Adapt `docs/README.md` for datasets
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "Thanks for reporting @qqaatw.\r\n\r\nYes, we should definitely adapt that file for `datasets`. " ]
"2022-03-21T08:26:49"
"2023-02-27T10:32:37"
"2023-02-27T10:32:37"
CONTRIBUTOR
null
## Describe the bug Currently, `docs/README.md` is a direct copy from `transformers`; we should probably adapt this file for `datasets`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3977/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3977/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3976/comments
https://api.github.com/repos/huggingface/datasets/issues/3976/events
https://github.com/huggingface/datasets/pull/3976
1,175,043,780
PR_kwDODunzps40uOY6
3,976
Fix main classes reference in docs
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-21T08:19:46"
"2022-04-12T14:19:39"
"2022-04-12T14:19:38"
CONTRIBUTOR
null
Currently, the section index (on the page's right side) of the [main classes reference](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes) incorrectly displays `Tensor returned:`. This PR fixes the issue by wrapping the code examples on this page in markdown code blocks. Other examples in the datasets library have the same issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3976/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3976", "html_url": "https://github.com/huggingface/datasets/pull/3976", "diff_url": "https://github.com/huggingface/datasets/pull/3976.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3976.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3975
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3975/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3975/comments
https://api.github.com/repos/huggingface/datasets/issues/3975/events
https://github.com/huggingface/datasets/pull/3975
1,174,678,942
PR_kwDODunzps40tKdS
3,975
Update many missing tags to dataset README's
{ "login": "MarkusSagen", "id": 20767068, "node_id": "MDQ6VXNlcjIwNzY3MDY4", "avatar_url": "https://avatars.githubusercontent.com/u/20767068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MarkusSagen", "html_url": "https://github.com/MarkusSagen", "followers_url": "https://api.github.com/users/MarkusSagen/followers", "following_url": "https://api.github.com/users/MarkusSagen/following{/other_user}", "gists_url": "https://api.github.com/users/MarkusSagen/gists{/gist_id}", "starred_url": "https://api.github.com/users/MarkusSagen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MarkusSagen/subscriptions", "organizations_url": "https://api.github.com/users/MarkusSagen/orgs", "repos_url": "https://api.github.com/users/MarkusSagen/repos", "events_url": "https://api.github.com/users/MarkusSagen/events{/privacy}", "received_events_url": "https://api.github.com/users/MarkusSagen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-20T20:42:27"
"2022-03-21T18:39:52"
"2022-03-21T18:39:52"
NONE
null
I've started to go through the available datasets and noticed that there are 127 datasets that do not have all the tags, so I started filling them in, beginning with some of the most common and QA datasets. I'm not 100% certain that the task_id is correct for SuperGLUE. If anyone is browsing the issues and would like to help make Hugging Face datasets even more feature-complete and awesome, feel free to use this tool I wrote to find the missing tags in the [datacards](https://github.com/Hugging-Face-Supporter/datacards)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3975/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3975", "html_url": "https://github.com/huggingface/datasets/pull/3975", "diff_url": "https://github.com/huggingface/datasets/pull/3975.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3975.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3974
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3974/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3974/comments
https://api.github.com/repos/huggingface/datasets/issues/3974/events
https://github.com/huggingface/datasets/pull/3974
1,174,485,044
PR_kwDODunzps40ssrA
3,974
Add XFUN dataset
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[ { "id": 4564477500, "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution", "name": "dataset contribution", "color": "0e8a16", "default": false, "description": "Contribution to a dataset script" } ]
closed
false
null
[]
null
[]
"2022-03-20T09:24:54"
"2022-10-03T09:38:16"
"2022-10-03T09:36:22"
CONTRIBUTOR
null
This PR adds the XFUN dataset. Home page and repository: https://github.com/doc-analysis/XFUND Source code: https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/data/datasets/xfun.py
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3974/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3974", "html_url": "https://github.com/huggingface/datasets/pull/3974", "diff_url": "https://github.com/huggingface/datasets/pull/3974.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3974.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3973
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3973/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3973/comments
https://api.github.com/repos/huggingface/datasets/issues/3973/events
https://github.com/huggingface/datasets/issues/3973
1,174,455,431
I_kwDODunzps5GAMSH
3,973
ConnectionError and SSLError
{ "login": "yanyu2015", "id": 11142054, "node_id": "MDQ6VXNlcjExMTQyMDU0", "avatar_url": "https://avatars.githubusercontent.com/u/11142054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanyu2015", "html_url": "https://github.com/yanyu2015", "followers_url": "https://api.github.com/users/yanyu2015/followers", "following_url": "https://api.github.com/users/yanyu2015/following{/other_user}", "gists_url": "https://api.github.com/users/yanyu2015/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanyu2015/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanyu2015/subscriptions", "organizations_url": "https://api.github.com/users/yanyu2015/orgs", "repos_url": "https://api.github.com/users/yanyu2015/repos", "events_url": "https://api.github.com/users/yanyu2015/events{/privacy}", "received_events_url": "https://api.github.com/users/yanyu2015/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! You can download the `oscar.py` file from this repository at `/datasets/oscar/oscar.py`.\r\n\r\nThen you can load the dataset by passing the local path to `oscar.py` to `load_dataset`:\r\n```python\r\nload_dataset(\"path/to/oscar.py\", \"unshuffled_deduplicated_it\")\r\n```", "it works,but another error occurs.\r\n```\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (SSLError(MaxRetryError(\"HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))\")))\r\n```\r\nI can access `https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/it/it_sha256.txt` and `https://aws.amazon.com/cn/s3/` directly, so why it reports a SSLError, should I need tomodify the host file?", "Could it be an issue with your python environment or your version of OpenSSL ?", "you are so wise!\r\nit report [ConnectionError] in python 3.9.7\r\nand works well in python 3.8.12\r\n\r\nI need you help again: how can I specify the path for download files?\r\nthe data is too large and my C hardware is not enough", "Cool ! And you can specify the path for download files with to the `cache_dir` parameter:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('oscar', 'unshuffled_deduplicated_it', cache_dir='path/to/directory')", "It takes me some days to download data completely, Despise sometimes it occurs again, change py version is feasible way to avoid this ConnectionEror.\r\nparameter `cache_dir` works well, thanks for your kindness again!" ]
"2022-03-20T06:45:37"
"2022-03-30T08:13:32"
"2022-03-30T08:13:32"
NONE
null
code ``` from datasets import load_dataset dataset = load_dataset('oscar', 'unshuffled_deduplicated_it') ``` bug report ``` --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_29788/2615425180.py in <module> ----> 1 dataset = load_dataset('oscar', 'unshuffled_deduplicated_it') D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1658 1659 # Create a dataset builder -> 1660 builder_instance = load_dataset_builder( 1661 path=path, 1662 name=name, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1484 download_config = download_config.copy() if download_config else DownloadConfig() 1485 download_config.use_auth_token = use_auth_token -> 1486 dataset_module = dataset_module_factory( 1487 path, 1488 revision=revision, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1236 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" 1237 ) from None -> 1238 raise e1 from None 1239 else: 1240 raise FileNotFoundError( D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1173 if path.count("/") == 0: # even though the dataset is on the Hub, we get it from GitHub for now 1174 # TODO(QL): use a Hub dataset module factory instead of GitHub -> 1175 return GithubDatasetModuleFactory( 1176 path, 1177 revision=revision, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in get_module(self) 531 revision = self.revision 532 try: --> 533 local_path = self.download_loading_script(revision) 534 except FileNotFoundError: 535 if revision is not None or os.getenv("HF_SCRIPTS_VERSION", None) is not None: D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in download_loading_script(self, revision) 511 if download_config.download_desc is None: 512 download_config.download_desc = "Downloading builder script" --> 513 return cached_path(file_path, download_config=download_config) 514 515 def download_dataset_infos_file(self, revision: Optional[str]) -> str: D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 232 if is_remote_url(url_or_filename): 233 # URL, so get it from the cache (downloading if necessary) --> 234 output_path = get_from_cache( 235 url_or_filename, 236 cache_dir=cache_dir, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc) 580 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 581 if head_error is not None: --> 582 raise 
ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") 583 elif response is not None: 584 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/oscar/oscar.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.0.0/datasets/oscar/oscar.py (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))"))) ``` It may be caused by Caused by SSLError(in China?) because it works well on google colab. So how can I download this dataset manually?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3973/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3972
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3972/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3972/comments
https://api.github.com/repos/huggingface/datasets/issues/3972/events
https://github.com/huggingface/datasets/pull/3972
1,174,402,033
PR_kwDODunzps40sdVu
3,972
Adding Roman Urdu Hate Speech dataset
{ "login": "bp-high", "id": 53102161, "node_id": "MDQ6VXNlcjUzMTAyMTYx", "avatar_url": "https://avatars.githubusercontent.com/u/53102161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bp-high", "html_url": "https://github.com/bp-high", "followers_url": "https://api.github.com/users/bp-high/followers", "following_url": "https://api.github.com/users/bp-high/following{/other_user}", "gists_url": "https://api.github.com/users/bp-high/gists{/gist_id}", "starred_url": "https://api.github.com/users/bp-high/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bp-high/subscriptions", "organizations_url": "https://api.github.com/users/bp-high/orgs", "repos_url": "https://api.github.com/users/bp-high/repos", "events_url": "https://api.github.com/users/bp-high/events{/privacy}", "received_events_url": "https://api.github.com/users/bp-high/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-20T00:19:26"
"2022-03-25T15:56:19"
"2022-03-25T15:51:20"
CONTRIBUTOR
null
This pull request adds the Roman Urdu Hate Speech dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3972/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3972", "html_url": "https://github.com/huggingface/datasets/pull/3972", "diff_url": "https://github.com/huggingface/datasets/pull/3972.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3972.patch", "merged_at": "2022-03-25T15:51:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/3971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3971/comments
https://api.github.com/repos/huggingface/datasets/issues/3971/events
https://github.com/huggingface/datasets/pull/3971
1,174,329,442
PR_kwDODunzps40sS4W
3,971
Applied index-filters on scores in search.py.
{ "login": "vishalsrao", "id": 36671559, "node_id": "MDQ6VXNlcjM2NjcxNTU5", "avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishalsrao", "html_url": "https://github.com/vishalsrao", "followers_url": "https://api.github.com/users/vishalsrao/followers", "following_url": "https://api.github.com/users/vishalsrao/following{/other_user}", "gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions", "organizations_url": "https://api.github.com/users/vishalsrao/orgs", "repos_url": "https://api.github.com/users/vishalsrao/repos", "events_url": "https://api.github.com/users/vishalsrao/events{/privacy}", "received_events_url": "https://api.github.com/users/vishalsrao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-19T18:43:42"
"2022-04-12T14:48:23"
"2022-04-12T14:41:58"
CONTRIBUTOR
null
Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961. Applied index filters on scores in the get_nearest_examples and get_nearest_examples_batch methods of search.py.
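A small illustration of the described change, assuming FAISS pads missing neighbours with `-1`: filter the scores with the same mask as the indices. This is not the exact code from the PR.

```python
import numpy as np

def filter_scores_and_indices(scores: np.ndarray, indices: np.ndarray):
    # Keep only the positions whose index is valid (>= 0).
    keep = indices >= 0
    return scores[keep], indices[keep]

scores, indices = filter_scores_and_indices(
    np.array([0.12, 0.45, -1.0]), np.array([3, 7, -1])
)
print(indices.tolist())  # [3, 7]
```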
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3971/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3971", "html_url": "https://github.com/huggingface/datasets/pull/3971", "diff_url": "https://github.com/huggingface/datasets/pull/3971.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3971.patch", "merged_at": "2022-04-12T14:41:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/3970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3970/comments
https://api.github.com/repos/huggingface/datasets/issues/3970/events
https://github.com/huggingface/datasets/pull/3970
1,174,327,367
PR_kwDODunzps40sSfx
3,970
Apply index-filters on scores in get_nearest_examples and get_nearest…
{ "login": "vishalsrao", "id": 36671559, "node_id": "MDQ6VXNlcjM2NjcxNTU5", "avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishalsrao", "html_url": "https://github.com/vishalsrao", "followers_url": "https://api.github.com/users/vishalsrao/followers", "following_url": "https://api.github.com/users/vishalsrao/following{/other_user}", "gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions", "organizations_url": "https://api.github.com/users/vishalsrao/orgs", "repos_url": "https://api.github.com/users/vishalsrao/repos", "events_url": "https://api.github.com/users/vishalsrao/events{/privacy}", "received_events_url": "https://api.github.com/users/vishalsrao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-19T18:32:31"
"2022-03-19T18:38:12"
"2022-03-19T18:38:12"
CONTRIBUTOR
null
Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961. Applied index filters on scores in the get_nearest_examples and get_nearest_examples_batch methods of search.py.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3970/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3970", "html_url": "https://github.com/huggingface/datasets/pull/3970", "diff_url": "https://github.com/huggingface/datasets/pull/3970.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3970.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3969/comments
https://api.github.com/repos/huggingface/datasets/issues/3969/events
https://github.com/huggingface/datasets/issues/3969
1,174,273,824
I_kwDODunzps5F_f8g
3,969
Cannot preview cnn_dailymail dataset
{ "login": "hasan-besh", "id": 75482871, "node_id": "MDQ6VXNlcjc1NDgyODcx", "avatar_url": "https://avatars.githubusercontent.com/u/75482871?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hasan-besh", "html_url": "https://github.com/hasan-besh", "followers_url": "https://api.github.com/users/hasan-besh/followers", "following_url": "https://api.github.com/users/hasan-besh/following{/other_user}", "gists_url": "https://api.github.com/users/hasan-besh/gists{/gist_id}", "starred_url": "https://api.github.com/users/hasan-besh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hasan-besh/subscriptions", "organizations_url": "https://api.github.com/users/hasan-besh/orgs", "repos_url": "https://api.github.com/users/hasan-besh/repos", "events_url": "https://api.github.com/users/hasan-besh/events{/privacy}", "received_events_url": "https://api.github.com/users/hasan-besh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I guess the cache got corrupted due to a previous issue with Google Drive service.\r\n\r\nThe cache should be regenerated, e.g. by passing `download_mode=\"force_redownload\"`.\r\n\r\nCC: @severo ", "Note that the dataset preview uses its own cache, not `datasets`' cache. So `download_mode=\"force_redownload\"` doesn't help. But yes indeed the cache must be refreshed.\r\n\r\nThe CNN Dailymail dataste is currently hosted on Google Drive, which is an unreliable host and we've had many issues with it. Unless we found another most reliable host for the data, we will keep running into issues from time to time.\r\n\r\nAt Hugging Face we're not allowed to host the CNN Dailymail data by ourselves AFAIK", "Yes @lhoestq, I didn't explain myself well: my previous message was addressed to @severo. ", "I remove the tag dataset-viewer, since it's more an issue with the hosting on Google Drive", "Sounds good. I was looking for another host of this dataset but couldn't find any (yet)", "It seems like the issue is with the streaming mode, not with the hosting:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('cnn_dailymail', name=\"3.0.0\", split=\"train\", streaming=True, download_mode=\"force_redownload\")\r\nDownloading builder script: 9.35kB [00:00, 10.2MB/s]\r\nDownloading metadata: 9.50kB [00:00, 12.2MB/s]\r\n>>> len(list(dataset))\r\n0\r\n>>> dataset = datasets.load_dataset('cnn_dailymail', name=\"3.0.0\", split=\"train\", streaming=False)\r\nReusing dataset cnn_dailymail (/home/slesage/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234)\r\n>>> len(dataset)\r\n287113\r\n```\r\n\r\nNote, in particular, that the streaming mode is failing silently, returning 0 row while I would have expected an exception instead. The result is that the dataset viewer shows `No data` instead of a detailed error.\r\n\r\n<img width=\"1511\" alt=\"Capture d’écran 2022-04-12 à 11 50 46\" src=\"https://user-images.githubusercontent.com/1676121/162935341-d50f1e73-d053-41d4-917f-e79708a0ca23.png\">\r\n", "Well this is because the host (Google Drive) returns a document that is not the actual data, but an error page", "Do you think that `datasets` should detect this anyway and throw an exception?", "Yes it definitely should ! I don't have the bandwidth to work on this right now though", "Indeed, streaming was not supported: tgz archives were not properly iterated.\r\n\r\nI've opened a PR to support streaming.\r\n\r\nHowever, keep in mind that Google Drive will keep generating issues from time to time, like 403,..." ]
"2022-03-19T14:08:57"
"2022-04-20T15:52:49"
"2022-04-20T15:52:49"
NONE
null
## Dataset viewer issue for '*cnn_dailymail*' **Link:** https://huggingface.co/datasets/cnn_dailymail *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3969/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3968
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3968/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3968/comments
https://api.github.com/repos/huggingface/datasets/issues/3968/events
https://github.com/huggingface/datasets/issues/3968
1,174,193,962
I_kwDODunzps5F_Mcq
3,968
Cannot preview 'indonesian-nlp/eli5_id' dataset
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @cahya-wirawan, thanks for reporting.\r\n\r\nYour dataset is working OK in streaming mode:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"indonesian-nlp/eli5_id\", split=\"train\", streaming=True)\r\n ...: item = next(iter(ds))\r\n ...: item\r\nUsing custom data configuration indonesian-nlp--eli5_id-9fe728a7e760fb7b\r\n\r\nOut[1]: \r\n{'q_id': '1oy5tc',\r\n 'title': 'dalam sepak bola apa gunanya menyia-nyiakan dua permainan pertama dengan terburu-buru - di tengah - bukan permainan terburu-buru biasa saya mendapatkannya',\r\n 'selftext': '',\r\n 'document': '',\r\n 'subreddit': 'explainlikeimfive',\r\n 'answers': {'a_id': ['ccwtgnz', 'ccwtmho', 'ccwt946', 'ccwvj0u'],\r\n 'text': ['Jaga pertahanan tetap jujur, rasakan operan terburu-buru, buka permainan yang lewat. Pelanggaran yang terlalu satu dimensi akan gagal. Dan mereka yang bergegas ke tengah kadang-kadang dapat dibuka lebar-lebar untuk ukuran yard yang besar.',\r\n 'Jika Anda melempar bola sepanjang waktu, maka pertahanan akan beradaptasi untuk selalu menutupi umpan. Dengan melakukan permainan lari sederhana sesekali, Anda memaksa pertahanan untuk tetap dekat dan menjaga dari lari. Terkadang, pelanggaran dapat membuat pertahanan lengah dengan berpura-pura berlari dan membebaskan penerima mereka. Selain itu, Anda tidak perlu mendapatkan yard besar di setiap permainan. Terkadang, paling baik mendapatkan beberapa yard sekaligus. Selama Anda mendapatkan yang pertama, Anda dalam kondisi yang baik.',\r\n 'Dalam kebanyakan kasus, O-Line seharusnya membuat lubang untuk dilalui kembali. Jika Anda menjalankan terlalu banyak permainan ke luar / melempar, pertahanan akan mengejar. Juga, 2 permainan 5 yard memberi Anda satu set down baru.',\r\n 'Saya Anda tidak suka jenis drama itu, tonton CFL. Kami hanya mendapatkan 3 down sehingga Anda tidak bisa menyia-nyiakannya. Lebih banyak lagi yang lewat.'],\r\n 'score': [3, 2, 2, 2]},\r\n 'title_urls': {'url': []},\r\n 'selftext_urls': {'url': []},\r\n 'answers_urls': {'url': []}}\r\n```\r\nTherefore, it should be properly rendered in the previewer. Let me ping @severo to have a look at it.", "Thanks @albertvillanova for checking it. Btw, I have another dataset indonesian-nlp/lfqa_id which has the same issue. However, this dataset is still private, is it the reason why the preview doesn't work?", "Yes, preview is not supported on private datasets yet. We are working on that though...", "Thanks for the confirmation ", "Fixed. Thanks for your feedback." ]
"2022-03-19T06:54:09"
"2022-03-24T16:34:24"
"2022-03-24T16:34:24"
CONTRIBUTOR
null
## Dataset viewer issue for '*indonesian-nlp/eli5_id*' **Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id I can not see the dataset preview. ``` Server Error Status code: 400 Exception: Status400Error Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist. ``` Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3968/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3967/comments
https://api.github.com/repos/huggingface/datasets/issues/3967/events
https://github.com/huggingface/datasets/pull/3967
1,174,107,128
PR_kwDODunzps40rpny
3,967
[feat] Add TextVQA dataset
{ "login": "apsdehal", "id": 3616806, "node_id": "MDQ6VXNlcjM2MTY4MDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apsdehal", "html_url": "https://github.com/apsdehal", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "repos_url": "https://api.github.com/users/apsdehal/repos", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-18T23:29:39"
"2022-05-05T06:51:31"
"2022-05-05T06:44:29"
MEMBER
null
This would be the first classification-based vision-and-language dataset in the datasets library. Currently, the dataset downloads everything you need beforehand. See the [paper](https://arxiv.org/abs/1904.08920) for more details. Test Plan: - Ran the full and the dummy data test locally
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3967/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3967", "html_url": "https://github.com/huggingface/datasets/pull/3967", "diff_url": "https://github.com/huggingface/datasets/pull/3967.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3967.patch", "merged_at": "2022-05-05T06:44:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/3966
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3966/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3966/comments
https://api.github.com/repos/huggingface/datasets/issues/3966/events
https://github.com/huggingface/datasets/pull/3966
1,173,883,084
PR_kwDODunzps40rBNE
3,966
Create metric card for BERTScore
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-18T18:21:56"
"2022-03-22T13:35:28"
"2022-03-22T13:30:56"
CONTRIBUTOR
null
Proposing a metric card for BERTScore
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3966/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3966", "html_url": "https://github.com/huggingface/datasets/pull/3966", "diff_url": "https://github.com/huggingface/datasets/pull/3966.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3966.patch", "merged_at": "2022-03-22T13:30:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/3965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3965/comments
https://api.github.com/repos/huggingface/datasets/issues/3965/events
https://github.com/huggingface/datasets/issues/3965
1,173,708,739
I_kwDODunzps5F9V_D
3,965
TypeError: Couldn't cast array of type for JSONLines dataset
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)." ]
"2022-03-18T15:17:53"
"2022-05-06T16:13:51"
"2022-05-06T16:13:51"
MEMBER
null
## Describe the bug One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below). This reminds me a bit of #2799 where one can load the dataset in `pandas` but not in `datasets` and perhaps increasing the `block_size` is needed again. ## Steps to reproduce the bug ```python from datasets import load_dataset from huggingface_hub import hf_hub_url import pandas as pd # returns 'https://huggingface.co/datasets/Evan/spaCy-github-issues/resolve/main/spacy-issues.jsonl' data_files = hf_hub_url(repo_id="Evan/spaCy-github-issues", filename="spacy-issues.jsonl", repo_type="dataset") # throws TypeError: Couldn't cast array of type dset = load_dataset("json", data_files=data_files, split="test") # no problem with pandas - note this take a while as the file is >2GB df = pd.read_json(data_files, orient="records", lines=True) df.head() ``` ## Expected results I can load any line-separated JSON file, similar to pandas. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 683, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 1136, in _prepare_split writer.write_table(table) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 511, in write_table pa_table = table_cast(pa_table, self._schema) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1121, in table_cast return cast_table_to_features(table, Features.from_arrow_schema(schema)) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in cast_table_to_features arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper return func(array, *args, **kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1086, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper return func(array, *args, **kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 920, in 
wrapper return func(array, *args, **kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1019, in array_cast raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") TypeError: Couldn't cast array of type struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: timestamp[s], updated_at: timestamp[s], due_on: null, closed_at: timestamp[s]> to null ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.7 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3965/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3965/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3964/comments
https://api.github.com/repos/huggingface/datasets/issues/3964/events
https://github.com/huggingface/datasets/issues/3964
1,173,564,993
I_kwDODunzps5F8y5B
3,964
Add default Audio Loader
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-03-18T12:58:55"
"2022-08-22T14:20:46"
"2022-08-22T14:20:46"
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** Writing a custom dataset loading script might be a bit challenging for users. **Describe the solution you'd like** Add a default Audio loader (analogous to ImageFolder) for small datasets with a standard directory structure. **Describe alternatives you've considered** Create a custom loading script? That's what users are doing now.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3964/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3963
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3963/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3963/comments
https://api.github.com/repos/huggingface/datasets/issues/3963/events
https://github.com/huggingface/datasets/pull/3963
1,173,492,562
PR_kwDODunzps40puyZ
3,963
Add Audio Folder
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[]
"2022-03-18T11:40:09"
"2022-06-15T16:33:19"
"2022-06-15T16:33:19"
CONTRIBUTOR
null
Would resolve #3964 AudioFolder loads a .txt file with transcriptions and creates a dataset with all audio files in the provided directory that have a transcription (independently of the directory structure) as a single split (train). Can be loaded via: ```python # for local dirs dataset = load_dataset("audiofolder", data_dir="/path/to/folder", transcripts_filename="transcripts.txt") ``` ```python # for local and remote zip archives dataset = load_dataset("audiofolder", data_files="path/to/archive/archive.zip", transcripts_filename="transcripts.txt") ``` The default transcriptions filename is `transcripts.txt`. It should have the following structure: ``` audio_id_1 transcription text 1 audio_id_2 transcription text 2 ``` The separator is `\t`! --- sorry for the first old commits from another branch, don't know how that happened...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3963/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3963/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3963", "html_url": "https://github.com/huggingface/datasets/pull/3963", "diff_url": "https://github.com/huggingface/datasets/pull/3963.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3963.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3962
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3962/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3962/comments
https://api.github.com/repos/huggingface/datasets/issues/3962/events
https://github.com/huggingface/datasets/pull/3962
1,173,482,291
PR_kwDODunzps40psq2
3,962
Fix flatten of Sequence feature type
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-18T11:27:42"
"2022-03-21T14:40:47"
"2022-03-21T14:36:12"
MEMBER
null
The `Sequence` feature type is not correctly flattened if it contains a dictionary. This PR fixes this, and I added a test case for it. Closes https://github.com/huggingface/datasets/issues/3795
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3962/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3962", "html_url": "https://github.com/huggingface/datasets/pull/3962", "diff_url": "https://github.com/huggingface/datasets/pull/3962.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3962.patch", "merged_at": "2022-03-21T14:36:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/3961
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3961/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3961/comments
https://api.github.com/repos/huggingface/datasets/issues/3961/events
https://github.com/huggingface/datasets/issues/3961
1,173,223,086
I_kwDODunzps5F7fau
3,961
Scores from Index at extra positions are not filtered out
{ "login": "vishalsrao", "id": 36671559, "node_id": "MDQ6VXNlcjM2NjcxNTU5", "avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vishalsrao", "html_url": "https://github.com/vishalsrao", "followers_url": "https://api.github.com/users/vishalsrao/followers", "following_url": "https://api.github.com/users/vishalsrao/following{/other_user}", "gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}", "starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions", "organizations_url": "https://api.github.com/users/vishalsrao/orgs", "repos_url": "https://api.github.com/users/vishalsrao/repos", "events_url": "https://api.github.com/users/vishalsrao/events{/privacy}", "received_events_url": "https://api.github.com/users/vishalsrao/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! Yes, that makes sense! Would you like to submit a PR to fix this?", "Created PR https://github.com/huggingface/datasets/pull/3971" ]
"2022-03-18T06:13:23"
"2022-04-12T14:41:58"
"2022-04-12T14:41:58"
CONTRIBUTOR
null
If a FAISS index has fewer records than the requested number of top results (k), then it returns -1 in indices for the additional positions. The get_nearest_examples method only filters out the extra results from the dataset samples. It would be better to filter out extra scores too. Reference: https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/search.py#L693
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3961/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3959
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3959/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3959/comments
https://api.github.com/repos/huggingface/datasets/issues/3959/events
https://github.com/huggingface/datasets/issues/3959
1,172,872,695
I_kwDODunzps5F6J33
3,959
Medium-sized dataset conversion from pandas causes a crash
{ "login": "Antymon", "id": 641005, "node_id": "MDQ6VXNlcjY0MTAwNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/641005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Antymon", "html_url": "https://github.com/Antymon", "followers_url": "https://api.github.com/users/Antymon/followers", "following_url": "https://api.github.com/users/Antymon/following{/other_user}", "gists_url": "https://api.github.com/users/Antymon/gists{/gist_id}", "starred_url": "https://api.github.com/users/Antymon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Antymon/subscriptions", "organizations_url": "https://api.github.com/users/Antymon/orgs", "repos_url": "https://api.github.com/users/Antymon/repos", "events_url": "https://api.github.com/users/Antymon/events{/privacy}", "received_events_url": "https://api.github.com/users/Antymon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! It looks like an issue with pyarrow, could you try updating pyarrow and try again ?", "@albertvillanova did you find a solution to this?", "I´m getting the same problem with some files, @albertvillanova did you find a solution to this?" ]
"2022-03-17T20:20:35"
"2022-12-12T17:14:06"
"2022-04-20T12:35:37"
NONE
null
Hi, I am suffering from the following issue: ## Describe the bug Conversion to arrow dataset from pandas dataframe of a certain size deterministically causes the following crash: ``` File "/home/datasets_crash.py", line 7, in <module> arrow=datasets.Dataset.from_pandas(d) File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 783, in from_pandas table = InMemoryTable.from_pandas( File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/table.py", line 379, in from_pandas return cls(pa.Table.from_pandas(*args, **kwargs)) File "pyarrow/table.pxi", line 1487, in pyarrow.lib.Table.from_pandas File "pyarrow/table.pxi", line 1532, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 1181, in pyarrow.lib.Table.validate File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458) ``` ## Steps to reproduce the bug I have a dataset made from replicated single example mocking a dict representation of a publication. I copy over this example 140k times and create a pandas frame. I use 'Dataset.from_pandas' and boom ```python # Sample code to reproduce the bug import copy import datasets import pandas # serialized dict is quite long to be realistic representation of a publication content paper_as_dict=eval("{'article_id': '2020-11-05T14:25:05.321Z02bc3286-91b7-486a-9c74-4f457fbc586a', 'sections': [{'section_id': 'body.0', 'paragraphs': [{'sentences': ['11010111001000000011010011110011101110111011000100001010011100101001111010110111101011101111101010101110001111011110111010111', '1101100110110010010101010100110011000111001100100000011100010111010000011100001101111000000011010111001111001010101111110011010010111011000110100110010', '101011011000010100000010011001011011000000110011011110000101001110110000010001100110111100011100110101010010110000101', '1101101110101010101000000010101011111001111000101000110001110100111000100000011001110100110000110100111011001010110011101001001110']}]}, {'section_id': 'body.1', 'paragraphs': [{'sentences': ['11111100100100111000101001011110100110011001011011001001100110100111011010000110011000010001010100101110001001101011110111110101111100001001001000011110110010110011100110110111110011100011111000101010111010101011001110000100000001001010010010011101111100011010', '10101000110000110111110011101111000101010010001001010000001111001100000010001000001110111110010011101000000111011', '111010011111101111110011111110110001000111100101001000100110101111110000111000111111110000101001101000110011010111011101001010110110001000100000001110001111100110110001110001001100011010100110100010100111000110110100010010100101011110000110000101010010001110101100000']}, {'sentences': ['111110011110110110001111001101011110010110100011101010110101011001101110110111100000111101010110011110111101001111000101110001001010010101100111111001001000011101000100110000101', '011101101101111101001100101010000010111101100101110100101000001100010100110011010010100001101001110111100011010011011111000111111101110001010111010011010110001000010101100110000100010110101110110011001010011001100111101100001001', '1110001011011010101001100001110001110001000111111111101110100001011101101001110100000110000011010001101010101110101110101101001010100100010000000010110010010010', '11101111000111111100111110010000111101110010010101001111011001111110011000011100110001010010000100101010', 
'111000110110110010101100010010100001100100110010101000001000011101000100101011011010000011001011011111001101100001110010100001111110111001001010101100100110001011011100000101010010000000001100010000101100110110111101110010100010011101110110111010011011000011001010111011100000000010101001011000100000011010100011101001011001010010011110100100']}, {'sentences': ['001101111100001101001001001110000110010101011101001001111111011000111001111011101011110111000000100001110110101110001010001111110100010', '0000110010110101001100011011000011001101001110001000000110010101000011101011110110000000100111000001010000101011111011110001001100001110101010101110101011111000000011001111011110001010010111010000100100000001111001011100101111010101111001001101100101001101111000111011010110010001010010010111010000001101101111100101000111101011001000101', '00000101100101100111101010000101011100101100001100011001100100001100001010001010010011001001111001000010100010000110100111110000001000101000111100010111110011000100000111100010000100010111100010101', '111100110010100110000010010101010101110011110100000101110000000111010101111001011110010101001110000001001000010110010010011110111110010110100101110011001101110111001111100011100100011110010010100101011111111']}, {'sentences': ['1100001110101111000001011001100110001011100011110110010011001000101000011110010101010011011000111010000101010011010000000111011001000010100101000011111101000000000101111000', '1110101000100110001111000011000101110111001100101010011001100011010011111111111010101011010101010011000101001100100000110010100110110110110001101100', '00010001100100101100100111111110111111101000100110101111101111110101110001010001011100000000000011010101101001111010001110101101110011001011111101110100010000111101', '011100011101011001000110010110100100000010100010010110011000000010101110011111111101010010010001100110101010010001100010110011110001011011101010111111100100110110010111101001100101010111001', '10111000011010101111110110011010101011111001000001010010111111010010111111100100010100110100101101110100110011001000110100000111000100110000001000111010', '0010011111111011100111010001111001011101001010000010110000010111000101001101000011101110100100000000100100010010101010100011100101001000100110110000010111111110000011011101111000111010']}]}, {'section_id': 'body.2.0', 'paragraphs': [{'sentences': ['110010010011001110100100011001111100010011110111101011011011001010010010010011101011', '000110101110011011101011000000100011111000001100011011110101101011000110011010001010001101101100000111100101001011111001001101111', '1000011100100000100100100010010000111011000100110010000011110111100110110001101001010100011111010100101000111', '11110111111000110010000000000100010010110001100010001010000111011000101100011010010101110110011010110101001101110011101011101100000001000100101011010110110100101011101010010101101000011110000010101011001011000001000000001010110000100010000100011110101001111100001000100000111000001010011111111110101010100011011000010000111000110', '1001000111011000111110001111111001100001000000101000111011101101100101010110001101000000001111010111100011111000000100001001110', '100110010111010101111010100000010001110101111001010010001100001110100100100101110011010101001000100101000100100011001110001100111000010010011011000010011010010000110001000000100011110010110110011010001100111010111110011']}, {'sentences': 
['10010101011100010111011111001001001010100011001001111101101001000000001111101110000111101011000001001011101110101001100010010001101111001110000100010010001001101111011111110010011011110011', '110001110010110000101111000000110010010010100000010100001111101101000101100000000110000000011111011001111000010110110001011010011011101100100110011000100110101010111010111111000111001111010110010001001110100001011011000110000000111101110000001111011011101110100000100010000110001000000110100000', '101010000000010000110110111000110000100111000001110100101101101010001010010010101010100111010110001001000101011110010011001001001110111001101101100100011110011011110101100010110111001010000001000110100000001010011111111110111010011110001001110100011011000101011000110110011011010110100100011111111011100111110110000110011011110110110011101010101111001101010110101000000001100101111010000101110', '1010100110111111111000110110111110010100000100001110101110111001011000010001110110001111111110000101001001110010001110000111010101111010111111011100100011100111111101101111000010001100101000010001100110110100110111111100100011001011000001111110010100110111000010011110111011001101100000101011111110101000011000010', '00000001110000101001110101110011101001110011000111111101111101111000010011100000101000001011001110', '101000111010010000011010011010011010010010100010110100011100100111011101010100101110100111010001000000', '01101000110001101011001101100010100011011010000000001010101000010101000110100010000000110001110001010010000000101101000011000100000110011101100001010100011111101010010110001101110101010111101100001110000011001101', '0010010111000011110010011110001010100000111100001011010100100010101010010011101101100110001001111001000110000111011110010000110101010110111111010110100000011010001001010001000110001101101000101110001011110000101101110000110010110010111001100010011011100011', '00110111110000000100110111101011000100100110001000001001101011001000010100100001100111100110000110110101111010000010101000000101000011001011101001', '0100100001000111001110110110000001000100111001101101110100100111010111110001110010110111100110011111001001000011101110100101111011000110100000111010011101']}, {'sentences': ['100001001011101111111100110111011110001101111101100001000110110000100101011000000100000', '10101001001111110101001010100110011110101101001']}]}, {'section_id': 'body.2.0.0', 'paragraphs': [{'sentences': ['1110101100001100011000101000010000100010101101010110101011100101110110110111010101001100100000000111011001000100011110101011111010100101001010000010001001101010100011110010101110011001100010000100110011000011101010001000111001000001100', '101000000011001001110101000100101010000111000111100010010001111111100110001100000100011010011010010101101111010101010000110011101001111001111011111001110001010000110101101011101111010000001100', '01100001011110010100000101001101111101010011100010011001011110110010010011100101000', '0011100111000101111000010001111100000111000101110001111010001100001000111010000101100001110101100111111', '00001100000011110001011010010110000000111110110001111000110000011011001110000000100011001010110000010000010001101010101100000010011011000101011111100010010', '1011101011101111000001100100111000011000010010011110011000110111010010111100111101100110011010000110000111000110111110101111000001000010011101111000110000100011110101101101001101000110010000001000010011011010101100', '1000010011100011100000010011011111111110101101111011101010010111000000101011000000110101111000010011', 
'01100000110011001110101111101101011001011101000010001100101010100011010101010100111011011110100010100111', '011011010100011011110010101000110001111110110']}]}, {'section_id': 'body.2.0.1', 'paragraphs': [{'sentences': ['00111011011101000100100111000001101001011000111100100010101001010011001011000010011111001100000100010001100101110011001000110001101011010111011111011000010011010010111010011111101000110111011100010011100111111110110111011', '011011010101101101010000001011010110011111011110100111010101010110001101000010011111000011100', '110001000110010000000111101110111110101110111000101000010001110101000101001000111000010001011101010000110001010001101001001110111110111010111010011101000101101010000', '001000111110100110000001111100000111001110111001110111001000111010001001100111001101000001001001010111000111011100001111011001111110001011000111110011111101011101000100101001111011100001000110101010101111111110011111111011000101110001000000000100111011111011001100111', '11010101100010010100010010010101001011001011000001100010101111111101001101110011001010010100000111010101', '01110000110011111000110010011010000011100000010010001111100010010100100001011011111110001100', '011101111100011101100111110101111001101010010001001110101100001101000000111000']}]}, {'section_id': 'body.2.0.2', 'paragraphs': [{'sentences': ['0111011000110100110000001011001110111000011110100111011000000001000010001111111001101111011100101110101101000111000101000010000111011010110000011101111110111110100111000111000011', '00100110111000110101100111000110100010011010010101001010011000000101000110100110011010011111000100000011000000010001010000100111101011111111101010001111010000001011100001110100000101001101101010011011101000', '000001110001010010100101010100010101001100011001001101101101110111011111101010010111010110110111011110101100001000011110111011001', '0001110010111110100110110011000001111100100100110101011010010101010100101000010101000100101000011011', '1000010010010101001100101110010111010100000110101110000000111001111111001011111010000011110001011001001001000101', '0001111100111010010100010111010110011011000000001111010010110001000011010001100111101110001110000011010101111100001000011010110100000100100001111011110110000000101000010001111001010010110101110111101101110111000100', '1000101100001000100001101110111110000100000001000010101111010011010010010111011010100011001000100100001010001100110']}]}, {'section_id': 'body.2.0.3', 'paragraphs': [{'sentences': ['1010100111100011110110101011100001011010011010100100010011000110111000001010010110111001001101111000010100100110101001010001010001000110010000001', '100010101010100111000011111101010100101110011000100011100100100111000010000011001010010111011010000101010011011110111001010110', '0110000110110110110011011000011010010000001010011000010001011110110010000100011111010100110111111010010111000101111', '10100100000011100010110110011111011011101101111000001001010100001001011010000011001010101100000', '1011111111100001001100000010000100110010101000010100111111110010110011101110000101101011101', '10001111110000011100100000101100000000010000100000011100110000011110111010011101010111101001111000100000000110000011010010001100110111100001001011101011001111110010100111001001010001010011010010010111001101110101110000101011', '101101111111101101010010000110111110000110000111001001010011111101011001011010101100010100110101101011100111100100110010001011110001110010000011101100100100001001110010000010011111100110101']}]}, {'section_id': 'body.2.1', 'paragraphs': [{'sentences': 
['1010010011010011001111111001000110010001101111101011001011011000101001010101010001000110100011110101110001110110111010010010100100111000101100100101111110100000011111001101010111101010100101011011110111111110', '000010101101111100000110010110011001111100001101011101000100010001001001000000101101000001110000011010111100000010010000010101110101100010011000101110110111111001000101000111000110100001001100001010101010100011', '0000000011101110111100100010111100101010110001111101110110010000100100010000101001101111001111001001100110010011010000101001110010000000100101011101001010100100011101101001011000010111110100101010110110011001110000110010010111110110101100001011101001100111010001000010111010001010000100010010011110111100110011100011111101101000011100111110101010100110001100100000100011011010111000111110010110100010111101001001101000001100100010000111110000011101111100111101000000000']}, {'sentences': ['01011000010110011000000101101000110101011010100111011001001001100001101101111101111001101111100101111001101011011001011110110110110100001100111111010100101110111111101000101100101010110011111011100101101010100110111001111100100011001110011101000110100000001100001100110001110101001000011010000110101011010000001111100100000100101110011000001001010011011101100011000001100000011', '1001100000101000000011110100110001100001101001100011010000111111010110101111001000100111000011010100100000110110001', '10010011000110110111010110000010010000000111101000100101100111101101001100111110101001001111100001110011110000010101000001000000010100011011110011000100110101001100110111111001101000011010100110000000011110001000101010101000110010010']}]}, {'section_id': 'body.2.2', 'paragraphs': [{'sentences': ['000011000000010011000001101111000101000111111111111010001011110000011001010111010101010110001111110000010', '10101001101011101010001111011000110100000100011110010001100111111101101100010010111110110101101011000011000001101110010111011111100111110000000101110010111', '100001011110010111010110001101101001100000000001000010110101011001111100101101101111010010111111000000111001111010011111000100010001111011110001010000110010101010111110100101011011100001010101000001011011111111101', '1000110111111011101000110101001111111111000100011001000011010100001010011110001111010011011111000111011100101001011111001000010101110110101000111011111111010010001101001010110111000011110101011000010000110', '1011100000100000010101101111001001100110111000010001011010111111000000001010101001111011101011010101101001111101101100101001011101000011011010001001101100100111101111111100010011010101111011100001100001000100101100100110101000010000011000000011001100000110000001', '0001001101111001111111010000001101010110110110100110110100000100110101101010010101011000010010111011000010111110000001110101110111000010011000100110111001000111011000100101110111111', '0110010010011000011010001111001100101001100001001000010100101100010110000000101010110001001010001100111101010001110010010000111011100101101010111111101001100010001011100110010100110111010101000100001110000101110011111011111000010101010110101100010010111100100010010100111110111100101010100011101001110110010000011110001010101010000100010000100111001111011101', '000001010000010001100000101011000000110101000100010111111100101111111000110111001001110110101111110011100001001000011001010000011011', '0101101001010101001101010100011000111011001000100001110100110011100000001001010110001101010110011100111111100101101111101111011001111111110010111010011011011111011011110000101011010', 
'11000001110111000001100100001110000111001010000101011011101010111001011100010010010111111111000011111110010111100011100110001001100011111010100111110111001110010', '0100010110100001010101110111100011100100010111111011101001100101111110101011010010101111001000101001111000001110001100011001110010100110101100110100100000001010101101011110011001000101100111001001001110100', '100000100010011111001101010000100110011110001100000010010110110100000111111011010100101111010111001110101000100001111101001110000011010110000010100', '00100110000011100101000110110001000011101000011010101000010001111011100001111111001011100111101000001000000110110001000101111010010010001100111', '0110110100011001110011001111100010101001011111011001011001101101010010101101110101010100001000100100000111101110001001110111000110011101101010100000101', '0011111010010011011101010110100110000011000011100100101011011001110110001110001111000011010111011000110100111111011101110111000010010000011011010011011100000011101100110110100100000010110101110100110101001100111011101001010111011011110100110101110010011011010001010111110011001000010100010101010010110010010110000100110001000011010011000100101011010100100111010']}]}]}") d=pandas.DataFrame.from_records(copy.deepcopy(paper_as_dict) for _ in range(140_100)) arrow=datasets.Dataset.from_pandas(d) ``` ## Expected results The dataset should be converted without error. ## Actual results Error `pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets==1.18.4 pandas==1.3.5 - Platform: macOS 11.6 or CentOS Linux 7 (Core) - Python version: Python 3.9.7 - PyArrow version: pyarrow==3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3959/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3958/comments
https://api.github.com/repos/huggingface/datasets/issues/3958/events
https://github.com/huggingface/datasets/pull/3958
1,172,657,981
PR_kwDODunzps40nQU2
3,958
Update Wikipedia metadata
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-17T17:50:05"
"2022-03-21T12:26:48"
"2022-03-21T12:26:47"
MEMBER
null
This PR updates: - dataset card - metadata JSON
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3958/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3958", "html_url": "https://github.com/huggingface/datasets/pull/3958", "diff_url": "https://github.com/huggingface/datasets/pull/3958.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3958.patch", "merged_at": "2022-03-21T12:26:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/3957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3957/comments
https://api.github.com/repos/huggingface/datasets/issues/3957/events
https://github.com/huggingface/datasets/pull/3957
1,172,401,455
PR_kwDODunzps40magW
3,957
Fix xtreme s metrics
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-17T13:39:04"
"2022-03-18T13:46:19"
"2022-03-18T13:42:16"
MEMBER
null
We in fact do need BABEL in xtreme-s
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3957/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3957", "html_url": "https://github.com/huggingface/datasets/pull/3957", "diff_url": "https://github.com/huggingface/datasets/pull/3957.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3957.patch", "merged_at": "2022-03-18T13:42:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/3956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3956/comments
https://api.github.com/repos/huggingface/datasets/issues/3956/events
https://github.com/huggingface/datasets/issues/3956
1,172,272,327
I_kwDODunzps5F33TH
3,956
TypeError: __init__() missing 1 required positional argument: 'scheme'
{ "login": "amirj", "id": 1645137, "node_id": "MDQ6VXNlcjE2NDUxMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/1645137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amirj", "html_url": "https://github.com/amirj", "followers_url": "https://api.github.com/users/amirj/followers", "following_url": "https://api.github.com/users/amirj/following{/other_user}", "gists_url": "https://api.github.com/users/amirj/gists{/gist_id}", "starred_url": "https://api.github.com/users/amirj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amirj/subscriptions", "organizations_url": "https://api.github.com/users/amirj/orgs", "repos_url": "https://api.github.com/users/amirj/repos", "events_url": "https://api.github.com/users/amirj/events{/privacy}", "received_events_url": "https://api.github.com/users/amirj/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @amirj, thanks for reporting.\r\n\r\nAt first sight, your issue seems a version incompatibility between your Elasticsearch client and your Elasticsearch server.\r\n\r\nFeel free to have a look at Elasticsearch client docs: https://www.elastic.co/guide/en/elasticsearch/client/python-api/current/overview.html#_compatibility\r\n> Language clients are forward compatible; meaning that clients support communicating with greater or equal minor versions of Elasticsearch. Elasticsearch language clients are only backwards compatible with default distributions and without guarantees made.", "@albertvillanova It doesn't seem a version incompatibility between the client and server, since the following code is working:\r\n\r\n```\r\nfrom elasticsearch import Elasticsearch\r\nes_client = Elasticsearch(\"http://localhost:9200\")\r\ndataset.add_elasticsearch_index(column=\"e1\", es_client=es_client, es_index_name=\"e1_index\")\r\n```", "Hi @amirj, \r\n\r\nI really think it is a version incompatibility issue between your Elasticsearch client and server:\r\n- Your Elasticsearch server NodeConfig expects a positional argument named 'scheme'\r\n- Whereas your Elasticsearch client passes only keyword arguments: `NodeConfig(**options)`\r\n\r\nMoreover:\r\n- Looking at your stack trace, I deduce you are using Elasticsearch client **\"8\"** major version:\r\n - the Elasticsearch file \"elasticsearch/_sync/client/utils.py\" was created in version \"8.0.0a1\": https://github.com/elastic/elasticsearch-py/commit/21fa13b0f03b7b27ace9e19a1f763d40bd2e2ba4\r\n - you can check your Elasticsearch client version by running this Python code:\r\n ```python\r\n import elasticsearch\r\n print(elasticsearch.__version__)\r\n ```\r\n\r\n- However, in the *Environment info*, you informed that the major version of your Eleasticsearch cluster server is **\"7\"** (\"7.10.2-SNAPSHOT\")\r\n\r\nCould you please align the Elasticsearch client/server major versions (as pointed out in Elasticsearch docs) and check if the problem persists?", "I'm closing this issue, @amirj.\r\n\r\nFeel free to re-open it if the problem persists. 
\r\n\r\n", "```\r\nfrom elasticsearch import Elasticsearch\r\nes = Elasticsearch([{'host': 'localhost', 'port': 9200}])\r\n```\r\n```\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-8-675c6ffe5293> in <module>\r\n 1 #es = Elasticsearch([{'host':'localhost', 'port':9200}])\r\n 2 from elasticsearch import Elasticsearch\r\n----> 3 es = Elasticsearch([{'host': 'localhost', 'port': 9200}])\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport)\r\n 310 \r\n 311 if _transport is None:\r\n--> 312 node_configs = client_node_configs(\r\n 313 hosts,\r\n 314 cloud_id=cloud_id,\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in client_node_configs(hosts, cloud_id, **kwargs)\r\n 99 else:\r\n 100 assert hosts is not None\r\n--> 101 node_configs = hosts_to_node_configs(hosts)\r\n 102 \r\n 103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults.\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in hosts_to_node_configs(hosts)\r\n 142 \r\n 143 elif isinstance(host, Mapping):\r\n--> 144 node_configs.append(host_mapping_to_node_config(host))\r\n 145 else:\r\n 146 raise ValueError(\r\n\r\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\elasticsearch\\_sync\\client\\utils.py in host_mapping_to_node_config(host)\r\n 209 options[\"path_prefix\"] = options.pop(\"url_prefix\")\r\n 210 \r\n--> 211 return NodeConfig(**options) # type: ignore\r\n 212 \r\n 213 \r\n\r\nTypeError: __init__() missing 1 required positional argument: 'scheme'\r\n```", "I am facing the same issue, and version is same for the both i.e(8.1.3)", "@raj713335, thanks for reporting.\r\n\r\nPlease note that in your code example, you are not using our `datasets` library. \r\n\r\nThus, I think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py\r\n\r\n" ]
"2022-03-17T11:43:13"
"2022-05-04T16:37:10"
"2022-03-28T08:00:01"
NONE
null
## Describe the bug Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting Elasticsearch version. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset squad = load_dataset('squad', split='validation') squad.add_elasticsearch_index("context", host="localhost", port="9200") ``` ## Expected results [Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) ## Actual results ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-8fb51aa33961> in <module> 1 from datasets import load_dataset 2 squad = load_dataset('squad', split='validation') ----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200") ~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config) 3777 """ 3778 with self.formatted_as(type=None, columns=[column]): -> 3779 super().add_elasticsearch_index( 3780 column=column, 3781 index_name=index_name, ~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config) 587 """ 588 index_name = index_name if index_name is not None else column --> 589 es_index = ElasticSearchIndex( 590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config 591 ) ~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config) 123 from elasticsearch import Elasticsearch # noqa: F811 124 --> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}]) 126 self.es_index_name = ( 127 es_index_name ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport) 310 311 if _transport is None: --> 312 node_configs = client_node_configs( 313 hosts, 314 cloud_id=cloud_id, ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs) 99 else: 100 assert hosts is not None --> 101 node_configs = hosts_to_node_configs(hosts) 102 103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults. 
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts) 142 143 elif isinstance(host, Mapping): --> 144 node_configs.append(host_mapping_to_node_config(host)) 145 else: 146 raise ValueError( ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host) 209 options["path_prefix"] = options.pop("url_prefix") 210 --> 211 return NodeConfig(**options) # type: ignore 212 213 TypeError: __init__() missing 1 required positional argument: 'scheme' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Mac - Python version: 3.8.0 - PyArrow version: 7.0.0 - ElaticSearch Info: { "name" : "byname", "cluster_name" : "elasticsearch_brew", "cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA", "version" : { "number" : "7.10.2-SNAPSHOT", "build_flavor" : "oss", "build_type" : "tar", "build_hash" : "unknown", "build_date" : "2021-01-16T01:41:27.115673Z", "build_snapshot" : true, "lucene_version" : "8.7.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }
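For illustration, a hedged workaround sketch that combines the two suggestions in the thread above: either align the Elasticsearch client major version with the 7.x server, or build the client from a full URL (which supplies the `scheme` the traceback complains about) and pass it to `datasets` explicitly. The URL and index name are assumptions, not part of the original report.

```python
from datasets import load_dataset
from elasticsearch import Elasticsearch

squad = load_dataset("squad", split="validation")

# Building the client from a full URL supplies the scheme explicitly,
# which is the missing argument NodeConfig raises about in the traceback.
es_client = Elasticsearch("http://localhost:9200")

# Hand the pre-configured client to datasets instead of host/port.
squad.add_elasticsearch_index(
    "context",
    es_client=es_client,
    es_index_name="squad_context",  # hypothetical index name
)
```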
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3956/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3956/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3955
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3955/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3955/comments
https://api.github.com/repos/huggingface/datasets/issues/3955/events
https://github.com/huggingface/datasets/pull/3955
1,172,246,647
PR_kwDODunzps40l5kG
3,955
Remove unnecessary 'pylint disable' message in ReadMe
{ "login": "Datta0", "id": 39181234, "node_id": "MDQ6VXNlcjM5MTgxMjM0", "avatar_url": "https://avatars.githubusercontent.com/u/39181234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Datta0", "html_url": "https://github.com/Datta0", "followers_url": "https://api.github.com/users/Datta0/followers", "following_url": "https://api.github.com/users/Datta0/following{/other_user}", "gists_url": "https://api.github.com/users/Datta0/gists{/gist_id}", "starred_url": "https://api.github.com/users/Datta0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Datta0/subscriptions", "organizations_url": "https://api.github.com/users/Datta0/orgs", "repos_url": "https://api.github.com/users/Datta0/repos", "events_url": "https://api.github.com/users/Datta0/events{/privacy}", "received_events_url": "https://api.github.com/users/Datta0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-17T11:16:55"
"2022-04-12T14:28:35"
"2022-04-12T14:28:35"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3955/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3955", "html_url": "https://github.com/huggingface/datasets/pull/3955", "diff_url": "https://github.com/huggingface/datasets/pull/3955.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3955.patch", "merged_at": "2022-04-12T14:28:35" }
true
https://api.github.com/repos/huggingface/datasets/issues/3954
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3954/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3954/comments
https://api.github.com/repos/huggingface/datasets/issues/3954/events
https://github.com/huggingface/datasets/issues/3954
1,172,141,664
I_kwDODunzps5F3XZg
3,954
The dataset preview is not available for the tdklab/Hebrew_Squad_v1.1 dataset
{ "login": "MatanBenChorin", "id": 49593805, "node_id": "MDQ6VXNlcjQ5NTkzODA1", "avatar_url": "https://avatars.githubusercontent.com/u/49593805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MatanBenChorin", "html_url": "https://github.com/MatanBenChorin", "followers_url": "https://api.github.com/users/MatanBenChorin/followers", "following_url": "https://api.github.com/users/MatanBenChorin/following{/other_user}", "gists_url": "https://api.github.com/users/MatanBenChorin/gists{/gist_id}", "starred_url": "https://api.github.com/users/MatanBenChorin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MatanBenChorin/subscriptions", "organizations_url": "https://api.github.com/users/MatanBenChorin/orgs", "repos_url": "https://api.github.com/users/MatanBenChorin/repos", "events_url": "https://api.github.com/users/MatanBenChorin/events{/privacy}", "received_events_url": "https://api.github.com/users/MatanBenChorin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @MatanBenChorin, thanks for reporting.\r\n\r\nPlease, take into account that the preview may take some time until it properly renders (we are working to reduce this time).\r\n\r\nMaybe @severo can give more details on this.", "Hi, \r\nThank you", "Thanks for reporting. We are looking at it and will give updates here.", "I imagine the dataset has been moved to https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1, which still has an issue:\r\n\r\n```\r\nServer Error\r\n\r\nStatus code: 400\r\nException: NameError\r\nMessage: name 'HebrewSquad' is not defined\r\n```", "The issue is not related to the dataset viewer but to the loading script (cc @albertvillanova @lhoestq @mariosasko)\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> hf_token = \"hf_...\" # <- required because the dataset is gated\r\n>>> d = ds.load_dataset('tdklab/Hebrew_Squad_v1', use_auth_token=hf_token)\r\n...\r\nNameError: name 'HebrewSquad' is not defined\r\n```", "Yes indeed there is an error in [Hebrew_Squad_v1.py:L40](https://huggingface.co/datasets/tdklab/Hebrew_Squad_v1/blob/main/Hebrew_Squad_v1.py#L40)\r\n\r\nHere is the fix @MatanBenChorin :\r\n\r\n```diff\r\n- HebrewSquad(\r\n+ HebrewSquadConfig(\r\n```" ]
"2022-03-17T09:38:11"
"2022-04-20T12:39:07"
"2022-04-20T12:39:07"
NONE
null
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1' **Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true The dataset preview is not available for this dataset. Am I the one who added this dataset? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3954/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3953/comments
https://api.github.com/repos/huggingface/datasets/issues/3953/events
https://github.com/huggingface/datasets/issues/3953
1,172,123,736
I_kwDODunzps5F3TBY
3,953
Add ImageNet Sketch
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
closed
false
null
[]
null
[ "Can you assign this task to me? @nreimers @mariosasko ", "Hi! Sure! Let us know if you need any pointers." ]
"2022-03-17T09:20:31"
"2022-05-23T18:05:29"
"2022-05-23T18:05:29"
CONTRIBUTOR
null
## Adding a Dataset - **Name:** ImageNet Sketch - **Description:** ImageNet-Sketch is a dataset consisting of sketch-like images, that matches the ImageNet classification validation set in categories and scale. - **Paper:** [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549) - **Data:** https://github.com/HaohanWang/ImageNet-Sketch - **Motivation:** Allows for evaluating the robustness of vision models. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3953/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3953/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3952
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3952/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3952/comments
https://api.github.com/repos/huggingface/datasets/issues/3952/events
https://github.com/huggingface/datasets/issues/3952
1,171,895,531
I_kwDODunzps5F2bTr
3,952
Checksum error for glue sst2, stsb, rte etc datasets
{ "login": "ravindra-ut", "id": 22090962, "node_id": "MDQ6VXNlcjIyMDkwOTYy", "avatar_url": "https://avatars.githubusercontent.com/u/22090962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ravindra-ut", "html_url": "https://github.com/ravindra-ut", "followers_url": "https://api.github.com/users/ravindra-ut/followers", "following_url": "https://api.github.com/users/ravindra-ut/following{/other_user}", "gists_url": "https://api.github.com/users/ravindra-ut/gists{/gist_id}", "starred_url": "https://api.github.com/users/ravindra-ut/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ravindra-ut/subscriptions", "organizations_url": "https://api.github.com/users/ravindra-ut/orgs", "repos_url": "https://api.github.com/users/ravindra-ut/repos", "events_url": "https://api.github.com/users/ravindra-ut/events{/privacy}", "received_events_url": "https://api.github.com/users/ravindra-ut/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi, @ravindra-ut.\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"glue\", \"sst2\")\r\nDownloading builder script: 28.8kB [00:00, 11.6MB/s] \r\nDownloading metadata: 28.7kB [00:00, 12.9MB/s] \r\nDownloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown size, total: 11.90 MiB) to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.44M/7.44M [00:01<00:00, 5.82MB/s]\r\nDataset glue downloaded and prepared to .../.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 895.96it/s]\r\n\r\nIn [3]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 1821\r\n })\r\n})\r\n``` \r\n\r\nMoreover, I see in your traceback that your error was for an URL at https://firebasestorage.googleapis.com\r\nHowever, the URLs were updated on Sep 16, 2020 (`datasets` version 1.0.2) to https://dl.fbaipublicfiles.com: https://github.com/huggingface/datasets/commit/2f03041a21c03abaececb911760c3fe4f420c229\r\n\r\nCould you please try to update `datasets`\r\n```shell\r\npip install -U datasets\r\n```\r\nand then force redownload\r\n```python\r\nds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\n```\r\nto update the cache?\r\n\r\nPlease, feel free to reopen this issue if the problem persists." ]
"2022-03-17T03:45:47"
"2022-03-17T07:10:15"
"2022-03-17T07:10:14"
NONE
null
## Describe the bug Checksum error for glue sst2, stsb, rte etc datasets ## Steps to reproduce the bug ```python >>> nlp.load_dataset('glue', 'sst2') Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown sizetotal: 11.90 MiB) to Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 73.0/73.0 [00:00<00:00, 18.2kB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare self._download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare verify_checksums( File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8'] ``` ## Expected results dataset load should succeed without checksum error. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare self._download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare verify_checksums( File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8'] ``` ## Environment info - `datasets` version: '1.18.3' - Platform: Mac OS - Python version: Python 3.8.9 - PyArrow version: '7.0.0'
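For reference, a short sketch of the fix suggested in the maintainer's reply above: upgrade from the legacy `nlp` package to a current `datasets` release and force a fresh download so the stale cached files (with the old checksums) are replaced.

```python
# First make sure the current `datasets` package (not the legacy `nlp`
# package) is installed:  pip install -U datasets
from datasets import load_dataset

# force_redownload discards the stale cache entries that still reference
# the old download URLs and re-verifies checksums against fresh files.
ds = load_dataset("glue", "sst2", download_mode="force_redownload")
print(ds)
```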
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3952/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3952/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3951
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3951/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3951/comments
https://api.github.com/repos/huggingface/datasets/issues/3951/events
https://github.com/huggingface/datasets/issues/3951
1,171,568,814
I_kwDODunzps5F1Liu
3,951
Forked streaming datasets try to `open` data urls rather than use network
{ "login": "dlwh", "id": 9633, "node_id": "MDQ6VXNlcjk2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dlwh", "html_url": "https://github.com/dlwh", "followers_url": "https://api.github.com/users/dlwh/followers", "following_url": "https://api.github.com/users/dlwh/following{/other_user}", "gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}", "starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dlwh/subscriptions", "organizations_url": "https://api.github.com/users/dlwh/orgs", "repos_url": "https://api.github.com/users/dlwh/repos", "events_url": "https://api.github.com/users/dlwh/events{/privacy}", "received_events_url": "https://api.github.com/users/dlwh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting this second issue as well. We definitely want to make streaming datasets fully working in a distributed setup and with the best performance. Right now it only supports single process.\r\n\r\nIn this issue it seems that the streaming capabilities that we offer to dataset builders are not transferred to the forked process (so it fails to open remote files and start streaming data from them). In particular `open` is supposed to be mocked by our `xopen` function that is an extended open that supports remote files. Let me try to fix this" ]
"2022-03-16T21:21:02"
"2022-06-10T20:47:26"
"2022-06-10T20:47:26"
NONE
null
## Describe the bug Building on #3950, if you bypass the pickling problem you still can't use the dataset. Somehow something gets confused and the forked processes try to `open` urls rather than anything else. ## Steps to reproduce the bug ```python from multiprocessing import freeze_support import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets import torch.utils.data # work around #3950 class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset): pass def _ensure_format(v: datasets.IterableDataset) -> datasets.IterableDataset: return TorchIterableDataset(v._ex_iterable, v.info, v.split, "torch", v._shuffling) if __name__ == '__main__': freeze_support() ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) ds = _ensure_format(ds) model = AutoModelForCausalLM.from_pretrained("distilgpt2") Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() ``` ## Expected results I'd expect the dataset to load the url correctly and produce examples. ## Actual results ``` warnings.warn( ***** Running training ***** Num examples = 8000 Num Epochs = 9223372036854775807 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 1000 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last): File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 22, in <module> Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train for step, inputs in enumerate(epoch_iterator): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__ data = self._next_data() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data return self._process_data(data) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data data.reraise() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise raise exception FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0. 
Original Traceback (most recent call last): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch data.append(next(self.dataset_iter)) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 497, in __iter__ for key, example in self._iter(): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 494, in _iter yield from ex_iterable File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 87, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/Users/dlwh/.cache/huggingface/modules/datasets_modules/datasets/oscar/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2/oscar.py", line 358, in _generate_examples with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: 'https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz' Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll pid, sts = os.waitpid(self.pid, flag) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler _error_if_any_worker_fails() RuntimeError: DataLoader worker (pid 6932) is killed by signal: Terminated: 15. 0%| | 0/1000 [00:02<?, ?it/s] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
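Until multi-worker streaming is supported, a hedged workaround sketch implied by the maintainer's comment (streaming currently only works in a single process): it mirrors the reproduction script from the report and only changes the worker count, so preprocessing is still omitted exactly as in the original.

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments
import datasets

ds = datasets.load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
).with_format("torch")

model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# dataloader_num_workers=0 keeps the streaming dataset in the main process,
# so no forking happens and the extended `open` used for remote files
# stays in effect.
args = TrainingArguments("out", max_steps=1000, dataloader_num_workers=0)
Trainer(model, train_dataset=ds, args=args).train()
```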
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3951/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3950/comments
https://api.github.com/repos/huggingface/datasets/issues/3950/events
https://github.com/huggingface/datasets/issues/3950
1,171,560,585
I_kwDODunzps5F1JiJ
3,950
Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1
{ "login": "dlwh", "id": 9633, "node_id": "MDQ6VXNlcjk2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dlwh", "html_url": "https://github.com/dlwh", "followers_url": "https://api.github.com/users/dlwh/followers", "following_url": "https://api.github.com/users/dlwh/following{/other_user}", "gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}", "starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dlwh/subscriptions", "organizations_url": "https://api.github.com/users/dlwh/orgs", "repos_url": "https://api.github.com/users/dlwh/repos", "events_url": "https://api.github.com/users/dlwh/events{/privacy}", "received_events_url": "https://api.github.com/users/dlwh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too\r\n\r\nWe should definitely make `TorchIterableDataset` picklable by moving it in the main code instead of inside a function. If you'd like to contribute, feel free to open a Pull Request :)\r\n\r\nI'm also taking a look at your second issue, which is more technical" ]
"2022-03-16T21:14:11"
"2022-06-10T20:47:26"
"2022-06-10T20:47:26"
NONE
null
## Describe the bug Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash. ## Steps to reproduce the bug ```python import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch") model = AutoModelForCausalLM.from_pretrained("distilgpt2") Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() ``` ## Expected results For this code I'd expect a crash related to not having preprocessed the data, but instead we get a pickling error. ## Actual results ``` 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last): File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 7, in <module> Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train for step, inputs in enumerate(epoch_iterator): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__ return self._get_iterator() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 918, in __init__ w.start() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset' 0%| | 0/1000 [00:00<?, ?it/s] ``` This immediate crash can be fixed by not using a local class to make the `TorchIterableDataset` (Note that you have to do with_format("torch") or you get an exception because the dataset has no len) However, any lambdas etc used as maps will also trigger this crash. A more permanent fix would be to move away from multiprocessing and instead use something like pathos or multiprocessing_on_dill (https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together) Note that if you bypass this crash you get another crash. (I'll file a separate bug). ## Environment info - `datasets` version: 2.0.0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
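For illustration, a sketch of the module-level workaround described in the report (and echoed in the comment above): defining the combined class at import time makes it picklable by the default multiprocessing pickler. The helper name is illustrative; the private attributes mirror the reproduction script in the follow-up issue #3951.

```python
import datasets
import torch.utils.data

# Defined at module level (not inside a function), so the pickler used by
# DataLoader workers can locate the class by its qualified name.
class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset):
    pass

def as_torch_iterable(ds: datasets.IterableDataset) -> datasets.IterableDataset:
    # Rebuild the streaming dataset with the picklable subclass.
    return TorchIterableDataset(ds._ex_iterable, ds.info, ds.split, "torch", ds._shuffling)
```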
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3950/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3950/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3949
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3949/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3949/comments
https://api.github.com/repos/huggingface/datasets/issues/3949/events
https://github.com/huggingface/datasets/pull/3949
1,171,467,981
PR_kwDODunzps40jia-
3,949
Remove GLEU metric
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-16T19:35:31"
"2022-04-12T20:43:26"
"2022-04-12T20:37:09"
CONTRIBUTOR
null
Remove the GLEU metric as it is not actually implemented.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3949/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3949/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3949", "html_url": "https://github.com/huggingface/datasets/pull/3949", "diff_url": "https://github.com/huggingface/datasets/pull/3949.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3949.patch", "merged_at": "2022-04-12T20:37:09" }
true
https://api.github.com/repos/huggingface/datasets/issues/3948
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3948/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3948/comments
https://api.github.com/repos/huggingface/datasets/issues/3948/events
https://github.com/huggingface/datasets/pull/3948
1,171,460,560
PR_kwDODunzps40jg1F
3,948
Google BLEU Metric Card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-16T19:27:17"
"2022-03-21T16:04:26"
"2022-03-21T16:04:25"
CONTRIBUTOR
null
Add metric card for the Google BLEU (GLEU) metric. One thing I noticed while writing this up is that, while this metric was designed specifically to be better than BLEU at the sentence level rather than the corpus level, the current implementation only allows computing the corpus-level statistic. I think adding sentence-level support would be a good item for the future to-do list.
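For context, a hedged usage sketch (the token lists are illustrative, not from the metric card): `compute` returns a single corpus-level `google_bleu` score over all predictions at once, which is the limitation noted above.

```python
import datasets

google_bleu = datasets.load_metric("google_bleu")

# Illustrative token lists; real use would tokenize model output and references.
hyp1 = ["the", "cat", "sat", "on", "the", "mat"]
ref1 = ["the", "cat", "is", "sitting", "on", "the", "mat"]
hyp2 = ["a", "quick", "brown", "fox"]
ref2 = ["the", "quick", "brown", "fox"]

# One corpus-level score is returned for all predictions together;
# per-sentence scores are not exposed by the current implementation.
result = google_bleu.compute(predictions=[hyp1, hyp2], references=[[ref1], [ref2]])
print(result["google_bleu"])
```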
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3948/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3948/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3948", "html_url": "https://github.com/huggingface/datasets/pull/3948", "diff_url": "https://github.com/huggingface/datasets/pull/3948.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3948.patch", "merged_at": "2022-03-21T16:04:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/3947
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3947/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3947/comments
https://api.github.com/repos/huggingface/datasets/issues/3947/events
https://github.com/huggingface/datasets/pull/3947
1,171,452,854
PR_kwDODunzps40jfLq
3,947
BLEU metric card
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-16T19:20:07"
"2022-03-29T14:59:50"
"2022-03-29T14:54:14"
CONTRIBUTOR
null
Add BLEU metric card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3947/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3947/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3947", "html_url": "https://github.com/huggingface/datasets/pull/3947", "diff_url": "https://github.com/huggingface/datasets/pull/3947.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3947.patch", "merged_at": "2022-03-29T14:54:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/3945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3945/comments
https://api.github.com/repos/huggingface/datasets/issues/3945/events
https://github.com/huggingface/datasets/pull/3945
1,171,222,257
PR_kwDODunzps40ixmc
3,945
Fix comet metric
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-16T15:56:47"
"2022-03-22T15:10:12"
"2022-03-22T15:05:30"
MEMBER
null
The COMET metric has been broken for a while due to major breaking changes. We did not catch them in the CI because the slow test mocks the `download_model` function, which was itself changed. This PR fixes the metric, updates the `download_model` mock, and updates the doctest.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3945/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3945/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3945", "html_url": "https://github.com/huggingface/datasets/pull/3945", "diff_url": "https://github.com/huggingface/datasets/pull/3945.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3945.patch", "merged_at": "2022-03-22T15:05:30" }
true
https://api.github.com/repos/huggingface/datasets/issues/3944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3944/comments
https://api.github.com/repos/huggingface/datasets/issues/3944/events
https://github.com/huggingface/datasets/pull/3944
1,171,209,510
PR_kwDODunzps40iu4n
3,944
Create README.md
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-16T15:46:26"
"2022-03-17T17:50:54"
"2022-03-17T17:47:05"
CONTRIBUTOR
null
Proposing COMET metric card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3944/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3944", "html_url": "https://github.com/huggingface/datasets/pull/3944", "diff_url": "https://github.com/huggingface/datasets/pull/3944.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3944.patch", "merged_at": "2022-03-17T17:47:05" }
true
https://api.github.com/repos/huggingface/datasets/issues/3943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3943/comments
https://api.github.com/repos/huggingface/datasets/issues/3943/events
https://github.com/huggingface/datasets/pull/3943
1,171,185,070
PR_kwDODunzps40ipnu
3,943
[Doc] Don't use v for version tags on GitHub
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-16T15:28:30"
"2022-03-17T11:46:26"
"2022-03-17T11:46:25"
MEMBER
null
This removes the `v` automatically used by `doc-builder` for versions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3943/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3943/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3943", "html_url": "https://github.com/huggingface/datasets/pull/3943", "diff_url": "https://github.com/huggingface/datasets/pull/3943.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3943.patch", "merged_at": "2022-03-17T11:46:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/3942
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3942/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3942/comments
https://api.github.com/repos/huggingface/datasets/issues/3942/events
https://github.com/huggingface/datasets/issues/3942
1,171,177,122
I_kwDODunzps5Fzr6i
3,942
reddit_tifu dataset: Checksums didn't match for dataset source files
{ "login": "XingxingZhang", "id": 8507585, "node_id": "MDQ6VXNlcjg1MDc1ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/8507585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XingxingZhang", "html_url": "https://github.com/XingxingZhang", "followers_url": "https://api.github.com/users/XingxingZhang/followers", "following_url": "https://api.github.com/users/XingxingZhang/following{/other_user}", "gists_url": "https://api.github.com/users/XingxingZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/XingxingZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XingxingZhang/subscriptions", "organizations_url": "https://api.github.com/users/XingxingZhang/orgs", "repos_url": "https://api.github.com/users/XingxingZhang/repos", "events_url": "https://api.github.com/users/XingxingZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/XingxingZhang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
[ "Hi @XingxingZhang, \r\n\r\nWe have already fixed this. You should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nDuplicate of:\r\n- #3773", "thanks @albertvillanova . by upgrading to 1.18.4 and using `load_dataset(\"...\", download_mode=\"force_redownload\")` fixed \r\n the bug.\r\n\r\nusing the following as you suggested in another thread can also fixed the bug\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n", "The latter solution (installing from GitHub) was proposed because the fix was not released yet. But last week we made the 1.18.4 patch release (with the fix), so no longer necessary to install from GitHub.\r\n\r\nYou can now install from PyPI, as usual:\r\n```shell\r\npip install -U datasets\r\n```\r\n" ]
"2022-03-16T15:23:30"
"2022-03-16T15:57:43"
"2022-03-16T15:39:25"
NONE
null
## Describe the bug When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files" ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset print(datasets.__version__) # load_dataset('billsum') load_dataset('reddit_tifu', 'short') ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: mac os - Python version: Python 3.7.6 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3942/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3942/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3941/comments
https://api.github.com/repos/huggingface/datasets/issues/3941/events
https://github.com/huggingface/datasets/issues/3941
1,171,132,709
I_kwDODunzps5FzhEl
3,941
billsum dataset: Checksums didn't match for dataset source files:
{ "login": "XingxingZhang", "id": 8507585, "node_id": "MDQ6VXNlcjg1MDc1ODU=", "avatar_url": "https://avatars.githubusercontent.com/u/8507585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XingxingZhang", "html_url": "https://github.com/XingxingZhang", "followers_url": "https://api.github.com/users/XingxingZhang/followers", "following_url": "https://api.github.com/users/XingxingZhang/following{/other_user}", "gists_url": "https://api.github.com/users/XingxingZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/XingxingZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XingxingZhang/subscriptions", "organizations_url": "https://api.github.com/users/XingxingZhang/orgs", "repos_url": "https://api.github.com/users/XingxingZhang/repos", "events_url": "https://api.github.com/users/XingxingZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/XingxingZhang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @XingxingZhang, thanks for reporting.\r\n\r\nThis was due to a change in Google Drive service:\r\n- #3786 \r\n\r\nWe have already fixed it:\r\n- #3787\r\n\r\nYou should update `datasets` version to at least 1.18.4:\r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then force the redownload:\r\n```python\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```", "thanks @albertvillanova " ]
"2022-03-16T14:52:08"
"2022-03-16T15:57:08"
"2022-03-16T15:46:44"
NONE
null
## Describe the bug When loading the `billsum` dataset, it throws the exception "Checksums didn't match for dataset source files" ``` File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1g89WgFHMRbr4QrvA0ngh26PY081Nv3lx'] ``` ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset print(datasets.__version__) load_dataset('billsum') ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: mac os - Python version: Python 3.7.6 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3941/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3940
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3940/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3940/comments
https://api.github.com/repos/huggingface/datasets/issues/3940/events
https://github.com/huggingface/datasets/pull/3940
1,171,106,853
PR_kwDODunzps40iYxr
3,940
Create CoVAL metric card
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-16T14:31:49"
"2022-03-18T17:37:59"
"2022-03-18T17:35:14"
CONTRIBUTOR
null
Initial CoVAL metric card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3940/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3940/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3940", "html_url": "https://github.com/huggingface/datasets/pull/3940", "diff_url": "https://github.com/huggingface/datasets/pull/3940.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3940.patch", "merged_at": "2022-03-18T17:35:14" }
true
https://api.github.com/repos/huggingface/datasets/issues/3939
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3939/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3939/comments
https://api.github.com/repos/huggingface/datasets/issues/3939/events
https://github.com/huggingface/datasets/issues/3939
1,170,882,331
I_kwDODunzps5Fyj8b
3,939
Source links broken
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting @qqaatw.\r\n\r\n@mishig25 @sgugger do you think this can be tweaked in the new doc framework?\r\n- From: https://github.com/huggingface/datasets/blob/v2.0.0/\r\n- To: https://github.com/huggingface/datasets/blob/2.0.0/", "@qqaatw thanks a lot for notifying about this issue!\r\n\r\nin comparison, transformers tags start with `v` like [this one](https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/models/bert/configuration_bert.py#L54).\r\n\r\nTherefore, we have to do one of 2 options below:\r\n1. Make necessary changes on doc-builder side\r\nOR\r\n2. Make [datasets tags](https://github.com/huggingface/datasets/tags) start with `v`, just like [transformers](https://github.com/huggingface/transformers/tags) (so that tag naming can be consistent amongst hf repos)\r\n\r\nI'll let you decide @albertvillanova @lhoestq @sgugger ", "I think option 2 is the easiest and would provide harmony in the HF ecosystem but we can also add a doc config parameter to decide whether the default version has a v or not if `datasets` folks prefer their tags without a v :-)", "For me it is OK to conform to the rest of libraries and tag/release with a preceding \"v\", rather than adding an extra argument to the doc builder just for `datasets`.\r\n\r\nLet me know if it is also OK for you @lhoestq. ", "https://github.com/huggingface/doc-build/commit/f41c1e8ff900724213af4c75d287d8b61ecf6141\r\n\r\nhotfix so that `datasets` docs source button works correctly on hf.co/docs/datasets", "We could add a tag for each release without a 'v' but it could be confusing on github to see both tags `v2.0.0` and `2.0.0` IMO (not sure if many users check them though). Removing the tags without 'v' would break our versioning for github datasets: the library looks for dataset scripts at the URLs like `https://raw.githubusercontent.com/huggingface/datasets/{revision}/datasets/{path}/{name}` where `revision` is equal to `datasets.__version__` (which doesn't start with a 'v') for all released versions of `datasets`.\r\n\r\nI think we could just have a parameter for the documentation - and having different URLs schemes for the source links that the users don't even see (they simply click on a button) is probably fine", "This is done in #3943 to go along with [doc-builder#146](https://github.com/huggingface/doc-builder/pull/146).\r\n\r\nNote that this will only work for future versions, so once those two are merged, the actual v2.0.0 doc should be fixed. The easiest is to cherry-pick this commit on the v2.0.0 release branch (or on a new branch created from the 2.0.0 tag, with a name that triggers the doc building job, for instance v2.0.0-release)", "Thanks for fixing @sgugger." ]
"2022-03-16T11:17:47"
"2022-03-19T04:41:32"
"2022-03-19T04:41:32"
CONTRIBUTOR
null
## Describe the bug The source links of v2.0.0 docs are broken: For exmaple, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747` here, the `v2.0.0` should be `2.0.0`. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747` ## Actual results Described above. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3939/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3939/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3938
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3938/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3938/comments
https://api.github.com/repos/huggingface/datasets/issues/3938/events
https://github.com/huggingface/datasets/pull/3938
1,170,875,417
PR_kwDODunzps40hnjM
3,938
Avoid info log messages from transformers in FrugalScore metric
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-16T11:11:29"
"2022-03-17T08:37:25"
"2022-03-17T08:37:24"
MEMBER
null
Fix #3928.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3938/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3938/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3938", "html_url": "https://github.com/huggingface/datasets/pull/3938", "diff_url": "https://github.com/huggingface/datasets/pull/3938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3938.patch", "merged_at": "2022-03-17T08:37:24" }
true
https://api.github.com/repos/huggingface/datasets/issues/3937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3937/comments
https://api.github.com/repos/huggingface/datasets/issues/3937/events
https://github.com/huggingface/datasets/issues/3937
1,170,832,006
I_kwDODunzps5FyXqG
3,937
Missing languages in lvwerra/github-code dataset
{ "login": "Eytan-S", "id": 38702500, "node_id": "MDQ6VXNlcjM4NzAyNTAw", "avatar_url": "https://avatars.githubusercontent.com/u/38702500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Eytan-S", "html_url": "https://github.com/Eytan-S", "followers_url": "https://api.github.com/users/Eytan-S/followers", "following_url": "https://api.github.com/users/Eytan-S/following{/other_user}", "gists_url": "https://api.github.com/users/Eytan-S/gists{/gist_id}", "starred_url": "https://api.github.com/users/Eytan-S/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Eytan-S/subscriptions", "organizations_url": "https://api.github.com/users/Eytan-S/orgs", "repos_url": "https://api.github.com/users/Eytan-S/repos", "events_url": "https://api.github.com/users/Eytan-S/events{/privacy}", "received_events_url": "https://api.github.com/users/Eytan-S/received_events", "type": "User", "site_admin": false }
[ { "id": 2067401494, "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion", "name": "Dataset discussion", "color": "72f99f", "default": false, "description": "Discussions on the datasets" } ]
closed
false
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[ { "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for contacting @Eytan-S.\r\n\r\nI think @lvwerra could better answer this. ", "That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the pipeline anyway and will double check the query.\r\n\r\nThanks for reporting this @Eytan-S!", "Can confirm that the two languages are indeed missing from the dataset. Here are the file counts per language:\r\n```Python\r\n{'Assembly': 82847,\r\n 'Batchfile': 236755,\r\n 'C': 14127969,\r\n 'C#': 6793439,\r\n 'C++': 7368473,\r\n 'CMake': 175076,\r\n 'CSS': 1733625,\r\n 'Dockerfile': 331966,\r\n 'FORTRAN': 141963,\r\n 'GO': 2259363,\r\n 'Haskell': 340521,\r\n 'HTML': 11165464,\r\n 'Java': 19515696,\r\n 'JavaScript': 11829024,\r\n 'Julia': 58177,\r\n 'Lua': 576279,\r\n 'Makefile': 679338,\r\n 'Markdown': 8454049,\r\n 'PHP': 11181930,\r\n 'Perl': 497490,\r\n 'PowerShell': 136827,\r\n 'Python': 7203553,\r\n 'Ruby': 4479767,\r\n 'Rust': 321765,\r\n 'SQL': 655657,\r\n 'Scala': 0,\r\n 'Shell': 1382786,\r\n 'TypeScript': 0,\r\n 'TeX': 250764,\r\n 'Visual Basic': 155371}\r\n ```", "@Eytan-S check out v1.1 of the `github-code` dataset where issue should be fixed:\r\n\r\n| | Language |File Count| Size (GB)|\r\n|---:|:-------------|---------:|-------:|\r\n| 0 | Java | 19548190 | 107.7 |\r\n| 1 | C | 14143113 | 183.83 |\r\n| 2 | JavaScript | 11839883 | 87.82 |\r\n| 3 | HTML | 11178557 | 118.12 |\r\n| 4 | PHP | 11177610 | 61.41 |\r\n| 5 | Markdown | 8464626 | 23.09 |\r\n| 6 | C++ | 7380520 | 87.73 |\r\n| 7 | Python | 7226626 | 52.03 |\r\n| 8 | C# | 6811652 | 36.83 |\r\n| 9 | Ruby | 4473331 | 10.95 |\r\n| 10 | GO | 2265436 | 19.28 |\r\n| 11 | TypeScript | 1940406 | 24.59 |\r\n| 12 | CSS | 1734406 | 22.67 |\r\n| 13 | Shell | 1385648 | 3.01 |\r\n| 14 | Scala | 835755 | 3.87 |\r\n| 15 | Makefile | 679430 | 2.92 |\r\n| 16 | SQL | 656671 | 5.67 |\r\n| 17 | Lua | 578554 | 2.81 |\r\n| 18 | Perl | 497949 | 4.7 |\r\n| 19 | Dockerfile | 366505 | 0.71 |\r\n| 20 | Haskell | 340623 | 1.85 |\r\n| 21 | Rust | 322431 | 2.68 |\r\n| 22 | TeX | 251015 | 2.15 |\r\n| 23 | Batchfile | 236945 | 0.7 |\r\n| 24 | CMake | 175282 | 0.54 |\r\n| 25 | Visual Basic | 155652 | 1.91 |\r\n| 26 | FORTRAN | 142038 | 1.62 |\r\n| 27 | PowerShell | 136846 | 0.69 |\r\n| 28 | Assembly | 82905 | 0.78 |\r\n| 29 | Julia | 58317 | 0.29 |", "Thanks @lvwerra. " ]
"2022-03-16T10:32:03"
"2022-03-22T07:09:23"
"2022-03-21T14:50:47"
NONE
null
Hi, I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset! I've noticed that two languages are missing from the dataset: TypeScript and Scala. Looks like they're also omitted from the query you used to get the original code. Are there any plans to add them in the future? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3937/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3936
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3936/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3936/comments
https://api.github.com/repos/huggingface/datasets/issues/3936/events
https://github.com/huggingface/datasets/pull/3936
1,170,713,473
PR_kwDODunzps40hE-P
3,936
Fix Wikipedia version and re-add tests
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-16T08:48:04"
"2022-03-16T17:04:07"
"2022-03-16T17:04:05"
MEMBER
null
To keep backward compatibility when loading using "wikipedia" dataset ID (https://huggingface.co/datasets/wikipedia), we have created the pre-processed data for the same languages we were offering before, but with updated date "20220301": - de - en - fr - frr - it - simple These pre-processed data can be accessed, e.g.: ```python ds = load_dataset("wikipedia", "20220301.frr", split="train") ``` The next step will be to offer the pre-processed data for many other languages, but when loading using "wikimedia/wikipedia": https://huggingface.co/datasets/wikimedia/wikipedia
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3936/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3936/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3936", "html_url": "https://github.com/huggingface/datasets/pull/3936", "diff_url": "https://github.com/huggingface/datasets/pull/3936.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3936.patch", "merged_at": "2022-03-16T17:04:05" }
true
https://api.github.com/repos/huggingface/datasets/issues/3934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3934/comments
https://api.github.com/repos/huggingface/datasets/issues/3934/events
https://github.com/huggingface/datasets/pull/3934
1,170,292,492
PR_kwDODunzps40ftiC
3,934
Create MAUVE metric card
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-15T21:36:07"
"2022-03-18T17:38:14"
"2022-03-18T17:34:13"
CONTRIBUTOR
null
Proposing a MAUVE metric card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3934/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3934", "html_url": "https://github.com/huggingface/datasets/pull/3934", "diff_url": "https://github.com/huggingface/datasets/pull/3934.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3934.patch", "merged_at": "2022-03-18T17:34:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/3933
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3933/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3933/comments
https://api.github.com/repos/huggingface/datasets/issues/3933/events
https://github.com/huggingface/datasets/pull/3933
1,170,253,605
PR_kwDODunzps40flNM
3,933
Update README.md
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-15T20:52:05"
"2022-03-17T17:51:24"
"2022-03-17T17:47:37"
CONTRIBUTOR
null
Fixing missing triple quote
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3933/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3933/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3933", "html_url": "https://github.com/huggingface/datasets/pull/3933", "diff_url": "https://github.com/huggingface/datasets/pull/3933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3933.patch", "merged_at": "2022-03-17T17:47:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/3932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3932/comments
https://api.github.com/repos/huggingface/datasets/issues/3932/events
https://github.com/huggingface/datasets/pull/3932
1,170,221,773
PR_kwDODunzps40fd0T
3,932
Create SARI metric card
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-15T20:37:23"
"2022-03-18T17:37:01"
"2022-03-18T17:32:55"
CONTRIBUTOR
null
SARI metric card! (do we have an expert in text simplification to validate?.. :sweat_smile: )
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3932/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3932", "html_url": "https://github.com/huggingface/datasets/pull/3932", "diff_url": "https://github.com/huggingface/datasets/pull/3932.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3932.patch", "merged_at": "2022-03-18T17:32:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/3931
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3931/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3931/comments
https://api.github.com/repos/huggingface/datasets/issues/3931/events
https://github.com/huggingface/datasets/pull/3931
1,170,097,208
PR_kwDODunzps40fBjx
3,931
Add align_labels_with_mapping docs
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
"2022-03-15T19:24:57"
"2022-03-18T16:28:31"
"2022-03-18T16:24:33"
MEMBER
null
This PR documents the `align_labels_with_mapping` function to ensure predicted labels are aligned with the dataset, or to assign a different mapping of labels to ids (requested by @mariosasko 🎉 ). For this specific code sample, the current dataset has a `mixed` label that the original [dataset](https://huggingface.co/datasets/poem_sentiment#data-fields) didn't. Is there a way to remove this label so it is completely aligned with the original dataset mappings? Otherwise, I'll just leave it as it is.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3931/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3931/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3931", "html_url": "https://github.com/huggingface/datasets/pull/3931", "diff_url": "https://github.com/huggingface/datasets/pull/3931.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3931.patch", "merged_at": "2022-03-18T16:24:33" }
true
https://api.github.com/repos/huggingface/datasets/issues/3930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3930/comments
https://api.github.com/repos/huggingface/datasets/issues/3930/events
https://github.com/huggingface/datasets/pull/3930
1,170,087,793
PR_kwDODunzps40e_fb
3,930
Create README.md
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-15T19:16:59"
"2022-04-04T15:23:15"
"2022-04-04T15:17:28"
CONTRIBUTOR
null
Creating a README for IndicGLUE cc @mcmillanmajora for fact checking in terms of languages (also, are there any limitations of the dataset or eval metric that I'm not aware of?)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3930/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3930", "html_url": "https://github.com/huggingface/datasets/pull/3930", "diff_url": "https://github.com/huggingface/datasets/pull/3930.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3930.patch", "merged_at": "2022-04-04T15:17:28" }
true
https://api.github.com/repos/huggingface/datasets/issues/3929
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3929/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3929/comments
https://api.github.com/repos/huggingface/datasets/issues/3929/events
https://github.com/huggingface/datasets/issues/3929
1,170,066,235
I_kwDODunzps5Fvcs7
3,929
Load a local dataset twice
{ "login": "caush", "id": 28349961, "node_id": "MDQ6VXNlcjI4MzQ5OTYx", "avatar_url": "https://avatars.githubusercontent.com/u/28349961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/caush", "html_url": "https://github.com/caush", "followers_url": "https://api.github.com/users/caush/followers", "following_url": "https://api.github.com/users/caush/following{/other_user}", "gists_url": "https://api.github.com/users/caush/gists{/gist_id}", "starred_url": "https://api.github.com/users/caush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/caush/subscriptions", "organizations_url": "https://api.github.com/users/caush/orgs", "repos_url": "https://api.github.com/users/caush/repos", "events_url": "https://api.github.com/users/caush/events{/privacy}", "received_events_url": "https://api.github.com/users/caush/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @caush, thanks for reporting:\r\n\r\nIn order to load local CSV files, you can use our \"csv\" loading script: https://huggingface.co/docs/datasets/loading#csv\r\n```python\r\ndataset = load_dataset(\"csv\", data_files=[\"data/file1.csv\", \"data/file2.csv\"])\r\n```\r\nOR:\r\n```python\r\ndataset = load_dataset(\"csv\", data_dir=\"data\")\r\n```\r\n\r\nAlternatively, you may also use:\r\n```python\r\ndataset = load_dataset(\"data\")" ]
"2022-03-15T18:59:26"
"2022-03-16T09:55:09"
"2022-03-16T09:54:06"
NONE
null
## Describe the bug Load a local "dataset" composed of two csv files twice. ## Steps to reproduce the bug Put the two joined files in a repository named "Data". Then in python: import datasets as ds ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'}) ## Expected results Should give something like (because files have only one data row): Title, clicks Truc et astuce, 123 Machin, 12 ## Actual results Gives Title, clicks Truc et astuce, 123 Machin, 12 Truc et astuce, 123 Machin, 12 ## Environment info [file1.csv](https://github.com/huggingface/datasets/files/8256322/file1.csv) [file2.csv](https://github.com/huggingface/datasets/files/8256323/file2.csv) - `datasets` version: 2.0.0 - Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10 - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3929/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3929/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3928
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3928/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3928/comments
https://api.github.com/repos/huggingface/datasets/issues/3928/events
https://github.com/huggingface/datasets/issues/3928
1,170,017,132
I_kwDODunzps5FvQts
3,928
Frugal score deprecations
{ "login": "ierezell", "id": 30974685, "node_id": "MDQ6VXNlcjMwOTc0Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ierezell", "html_url": "https://github.com/ierezell", "followers_url": "https://api.github.com/users/ierezell/followers", "following_url": "https://api.github.com/users/ierezell/following{/other_user}", "gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}", "starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ierezell/subscriptions", "organizations_url": "https://api.github.com/users/ierezell/orgs", "repos_url": "https://api.github.com/users/ierezell/repos", "events_url": "https://api.github.com/users/ierezell/events{/privacy}", "received_events_url": "https://api.github.com/users/ierezell/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @Ierezell, thanks for reporting.\r\n\r\nI'm making a PR to suppress those logs from the terminal. " ]
"2022-03-15T18:10:42"
"2022-03-17T08:37:24"
"2022-03-17T08:37:24"
NONE
null
## Describe the bug The frugal score returns a really verbose output with warnings that can be easily changed. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets.load import load_metric frugal = load_metric("frugalscore") frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"]) ``` ## Expected results A clear and concise description of the expected results. ``` {'scores': [0.9946]} ``` ## Actual results Specify the actual results or traceback. ``` PyTorch: setting up devices The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-). 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 864.09ba/s] Using amp half precision backend The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1. If sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. ***** Running Prediction ***** Num examples = 1 Batch size = 64 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4644.85it/s] {'scores': [0.9946]} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3928/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3928/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3927/comments
https://api.github.com/repos/huggingface/datasets/issues/3927/events
https://github.com/huggingface/datasets/pull/3927
1,170,016,465
PR_kwDODunzps40ewN2
3,927
Update main readme
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-15T18:09:59"
"2022-03-29T10:13:47"
"2022-03-29T10:08:20"
MEMBER
null
The main readme was still focused on text datasets - I extended it by mentioning that we also support image and audio datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3927/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3927", "html_url": "https://github.com/huggingface/datasets/pull/3927", "diff_url": "https://github.com/huggingface/datasets/pull/3927.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3927.patch", "merged_at": "2022-03-29T10:08:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/3926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3926/comments
https://api.github.com/repos/huggingface/datasets/issues/3926/events
https://github.com/huggingface/datasets/pull/3926
1,169,945,052
PR_kwDODunzps40ehVP
3,926
Doc maintenance
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
"2022-03-15T17:00:46"
"2022-03-15T19:27:15"
"2022-03-15T19:27:12"
MEMBER
null
This PR adds some minor maintenance to the docs. The main fix is properly linking to pages in the callouts because some of the links would just redirect to a non-existent section on the same page.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3926/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3926", "html_url": "https://github.com/huggingface/datasets/pull/3926", "diff_url": "https://github.com/huggingface/datasets/pull/3926.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3926.patch", "merged_at": "2022-03-15T19:27:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/3925
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3925/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3925/comments
https://api.github.com/repos/huggingface/datasets/issues/3925/events
https://github.com/huggingface/datasets/pull/3925
1,169,913,769
PR_kwDODunzps40eaq8
3,925
Fix main_classes docs index
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-15T16:33:46"
"2022-03-22T13:49:11"
"2022-03-22T13:44:04"
MEMBER
null
Currently, the `main_classes` documentation has an incorrect index. I believe this comes from issues in the examples of the Translation feature types. ![image](https://user-images.githubusercontent.com/42851186/158426345-2ee1ceef-ddf3-4a6f-a93e-d1a8f38a44f5.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3925/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3925/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3925", "html_url": "https://github.com/huggingface/datasets/pull/3925", "diff_url": "https://github.com/huggingface/datasets/pull/3925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3925.patch", "merged_at": "2022-03-22T13:44:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/3924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3924/comments
https://api.github.com/repos/huggingface/datasets/issues/3924/events
https://github.com/huggingface/datasets/pull/3924
1,169,805,813
PR_kwDODunzps40eED5
3,924
Document cases for github datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-15T15:10:10"
"2022-04-05T18:33:15"
"2022-03-15T15:41:23"
MEMBER
null
In general we recommend adding the new dataset under a username or organization on the Hugging Face Hub at [hf.co/datasets](https://hf.co/datasets), but users can still add a dataset on GitHub in some cases. I added a paragraph in the documentation to explain in which cases it can make more sense to open a PR on GitHub: - when you need the dataset to be reviewed - when you need long-term maintenance from the HF team - when there’s no clear org name / namespace that you can put the dataset under
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3924/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3924/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3924", "html_url": "https://github.com/huggingface/datasets/pull/3924", "diff_url": "https://github.com/huggingface/datasets/pull/3924.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3924.patch", "merged_at": "2022-03-15T15:41:23" }
true
https://api.github.com/repos/huggingface/datasets/issues/3923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3923/comments
https://api.github.com/repos/huggingface/datasets/issues/3923/events
https://github.com/huggingface/datasets/pull/3923
1,169,773,869
PR_kwDODunzps40d9YU
3,923
Add methods to IterableDatasetDict
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-15T14:46:03"
"2022-07-06T15:40:20"
"2022-03-15T16:45:06"
MEMBER
null
Following the new methods added in #3826 and https://github.com/huggingface/datasets/pull/3862 I added several methods to IterableDatasetDict: - map - filter - shuffle - with_format - cast - cast_column - remove_columns - rename_column - rename_columns
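Below is a minimal sketch of how these new `IterableDatasetDict` methods could be chained in streaming mode. It is not taken from the PR itself; the dataset name and the "text" column are placeholders.

```python
from itertools import islice

from datasets import load_dataset

# load_dataset(..., streaming=True) returns an IterableDatasetDict
streams = load_dataset("some_text_dataset", streaming=True)  # placeholder dataset name

# Each method is applied lazily, to every split at once
streams = streams.rename_column("text", "content")            # assumes a "text" column exists
streams = streams.map(lambda ex: {"content": ex["content"].lower()})
streams = streams.filter(lambda ex: len(ex["content"]) > 0)
streams = streams.shuffle(seed=42, buffer_size=1_000)

# Materialize a few examples from one split
for example in islice(streams["train"], 3):
    print(example["content"][:80])
```

Since the transformations are lazy, nothing is downloaded or processed until a split is actually iterated over.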
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3923/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3923", "html_url": "https://github.com/huggingface/datasets/pull/3923", "diff_url": "https://github.com/huggingface/datasets/pull/3923.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3923.patch", "merged_at": "2022-03-15T16:45:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/3922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3922/comments
https://api.github.com/repos/huggingface/datasets/issues/3922/events
https://github.com/huggingface/datasets/pull/3922
1,169,761,293
PR_kwDODunzps40d6vm
3,922
Fix NonMatchingChecksumError in MultiWOZ 2.2 dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-15T14:36:28"
"2022-03-15T16:07:04"
"2022-03-15T16:07:03"
MEMBER
null
Fix #2957
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3922/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3922/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3922", "html_url": "https://github.com/huggingface/datasets/pull/3922", "diff_url": "https://github.com/huggingface/datasets/pull/3922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3922.patch", "merged_at": "2022-03-15T16:07:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/3921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3921/comments
https://api.github.com/repos/huggingface/datasets/issues/3921/events
https://github.com/huggingface/datasets/pull/3921
1,169,749,338
PR_kwDODunzps40d4Mk
3,921
Fix NonMatchingChecksumError in CRD3 dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-15T14:27:14"
"2022-03-15T15:54:27"
"2022-03-15T15:54:26"
MEMBER
null
Fix #3051
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3921/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3921", "html_url": "https://github.com/huggingface/datasets/pull/3921", "diff_url": "https://github.com/huggingface/datasets/pull/3921.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3921.patch", "merged_at": "2022-03-15T15:54:26" }
true
https://api.github.com/repos/huggingface/datasets/issues/3920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3920/comments
https://api.github.com/repos/huggingface/datasets/issues/3920/events
https://github.com/huggingface/datasets/issues/3920
1,169,532,807
I_kwDODunzps5FtaeH
3,920
'datasets.features' is not a package
{ "login": "Arij-Aladel", "id": 68355048, "node_id": "MDQ6VXNlcjY4MzU1MDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arij-Aladel", "html_url": "https://github.com/Arij-Aladel", "followers_url": "https://api.github.com/users/Arij-Aladel/followers", "following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}", "gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions", "organizations_url": "https://api.github.com/users/Arij-Aladel/orgs", "repos_url": "https://api.github.com/users/Arij-Aladel/repos", "events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}", "received_events_url": "https://api.github.com/users/Arij-Aladel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @Arij-Aladel,\r\n\r\nYou are using a very old version of our library `datasets`: 1.8.0\r\nCurrent version is 2.0.0 (and the previous one was 1.18.4)\r\n\r\nPlease, try to update `datasets` library and check if the problem persists:\r\n```shell\r\n/env/bin/pip install -U datasets", "The problem I can no I have build my project on this version and old version on transformers. I have preprocessed the data again to use it. Thank for your reply" ]
"2022-03-15T11:14:23"
"2022-03-16T09:17:12"
"2022-03-16T09:17:12"
NONE
null
@albertvillanova python 3.9 os: ubuntu 20.04 In a conda environment, torch is installed by ```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html``` and the datasets package is installed by ``` /env/bin/pip install datasets==1.8.0 ``` While running the code I get this error ``` [6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class [6]<stderr>: return super().find_class(mod_name, name) [6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package ``` More precisely, this error appears when calling torch.load('data_file.pt') ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load result = unpickler.load() File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class return super().find_class(mod_name, name) ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package ``` Why am I getting this error?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3920/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3920/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3919/comments
https://api.github.com/repos/huggingface/datasets/issues/3919/events
https://github.com/huggingface/datasets/issues/3919
1,169,497,210
I_kwDODunzps5FtRx6
3,919
AttributeError: 'DatasetDict' object has no attribute 'features'
{ "login": "jswapnil10", "id": 48145785, "node_id": "MDQ6VXNlcjQ4MTQ1Nzg1", "avatar_url": "https://avatars.githubusercontent.com/u/48145785?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jswapnil10", "html_url": "https://github.com/jswapnil10", "followers_url": "https://api.github.com/users/jswapnil10/followers", "following_url": "https://api.github.com/users/jswapnil10/following{/other_user}", "gists_url": "https://api.github.com/users/jswapnil10/gists{/gist_id}", "starred_url": "https://api.github.com/users/jswapnil10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jswapnil10/subscriptions", "organizations_url": "https://api.github.com/users/jswapnil10/orgs", "repos_url": "https://api.github.com/users/jswapnil10/repos", "events_url": "https://api.github.com/users/jswapnil10/events{/privacy}", "received_events_url": "https://api.github.com/users/jswapnil10/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "You are likely trying to get the `features` from a `DatasetDict`, a dictionary containing `Datasets`. You probably first want to index into a particular split from your `DatasetDict` i.e. `dataset['train'].features`. \r\n\r\nFor example \r\n\r\n```python \r\nds = load_dataset('mnist')\r\nds.features\r\n```\r\nReturns \r\n```python\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n[<ipython-input-39-791c1f9df6c2>](https://localhost:8080/#) in <module>()\r\n----> 1 ds.features\r\n\r\nAttributeError: 'DatasetDict' object has no attribute 'features'\r\n```\r\n\r\nIf we look at the dataset variable, we see it is a `DatasetDict`:\r\n\r\n```python \r\nprint(ds)\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 60000\r\n })\r\n test: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 10000\r\n })\r\n})\r\n```\r\n\r\nWe can grab the features from a split by indexing into `train`:\r\n```python\r\nds['train'].features\r\n{'image': Image(decode=True, id=None),\r\n 'label': ClassLabel(num_classes=10, names=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], id=None)}\r\n```\r\n\r\nHope that helps ", "Yes, Thanks for that clarification," ]
"2022-03-15T10:46:59"
"2022-03-17T04:16:14"
"2022-03-17T04:16:14"
NONE
null
## Describe the bug Receiving the error when trying to check for Dataset features ## Steps to reproduce the bug from datasets import Dataset dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']]) dataset.features ## Expected results A clear and concise description of the expected results. ## Actual results Getting the following error AttributeError: 'DatasetDict' object has no attribute 'features' ## Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 1.18.4 - Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9 - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3919/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3919/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3918/comments
https://api.github.com/repos/huggingface/datasets/issues/3918/events
https://github.com/huggingface/datasets/issues/3918
1,169,366,117
I_kwDODunzps5Fsxxl
3,918
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
{ "login": "willowdong", "id": 51409295, "node_id": "MDQ6VXNlcjUxNDA5Mjk1", "avatar_url": "https://avatars.githubusercontent.com/u/51409295?v=4", "gravatar_id": "", "url": "https://api.github.com/users/willowdong", "html_url": "https://github.com/willowdong", "followers_url": "https://api.github.com/users/willowdong/followers", "following_url": "https://api.github.com/users/willowdong/following{/other_user}", "gists_url": "https://api.github.com/users/willowdong/gists{/gist_id}", "starred_url": "https://api.github.com/users/willowdong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/willowdong/subscriptions", "organizations_url": "https://api.github.com/users/willowdong/orgs", "repos_url": "https://api.github.com/users/willowdong/repos", "events_url": "https://api.github.com/users/willowdong/events{/privacy}", "received_events_url": "https://api.github.com/users/willowdong/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
[ "Hi @willowdong! These issues were fixed on master. We will have a new release of `datasets` later today. In the meantime, you can avoid these issues by installing `datasets` from master as follows:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets.git\r\n```", "You should force redownload:\r\n```python\r\ndataset = load_dataset(\"multi_news\", download_mode=\"force_redownload\")\r\ndataset_2 = load_dataset(\"reddit_tifu\", \"long\", download_mode=\"force_redownload\")", "Fixed by:\r\n- #3787 \r\n- #3843" ]
"2022-03-15T08:53:45"
"2022-03-16T15:36:58"
"2022-03-15T14:01:25"
NONE
null
## Describe the bug Can't load the dataset ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset dataset = load_dataset('multi_news') dataset_2 = load_dataset("reddit_tifu", "long") ``` ## Actual results raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF'] ## Environment info - `datasets` version: 1.18.4 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.0 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3918/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3918/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3917/comments
https://api.github.com/repos/huggingface/datasets/issues/3917/events
https://github.com/huggingface/datasets/pull/3917
1,168,906,154
PR_kwDODunzps40bGZA
3,917
Create README.md
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-14T21:08:10"
"2022-03-17T17:45:39"
"2022-03-17T17:45:39"
CONTRIBUTOR
null
This follows the same structure as the GLUE metric card, hope that works for everyone :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3917/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3917", "html_url": "https://github.com/huggingface/datasets/pull/3917", "diff_url": "https://github.com/huggingface/datasets/pull/3917.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3917.patch", "merged_at": "2022-03-17T17:45:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/3916
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3916/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3916/comments
https://api.github.com/repos/huggingface/datasets/issues/3916/events
https://github.com/huggingface/datasets/pull/3916
1,168,869,191
PR_kwDODunzps40a-cR
3,916
Create README.md for GLUE
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-14T20:27:22"
"2022-03-15T17:06:57"
"2022-03-15T17:06:56"
CONTRIBUTOR
null
I still have some hesitation regarding the format of inputs -- whether it should be a list or a list of lists -- hopefully @lhoestq will be able to clarify. Also tagging @yjernite for the Limitations section. Happy to hear your thoughts!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3916/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3916/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3916", "html_url": "https://github.com/huggingface/datasets/pull/3916", "diff_url": "https://github.com/huggingface/datasets/pull/3916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3916.patch", "merged_at": "2022-03-15T17:06:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/3915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3915/comments
https://api.github.com/repos/huggingface/datasets/issues/3915/events
https://github.com/huggingface/datasets/pull/3915
1,168,848,101
PR_kwDODunzps40a54e
3,915
Metric card template
{ "login": "emibaylor", "id": 27527747, "node_id": "MDQ6VXNlcjI3NTI3NzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emibaylor", "html_url": "https://github.com/emibaylor", "followers_url": "https://api.github.com/users/emibaylor/followers", "following_url": "https://api.github.com/users/emibaylor/following{/other_user}", "gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}", "starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions", "organizations_url": "https://api.github.com/users/emibaylor/orgs", "repos_url": "https://api.github.com/users/emibaylor/repos", "events_url": "https://api.github.com/users/emibaylor/events{/privacy}", "received_events_url": "https://api.github.com/users/emibaylor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-14T20:07:08"
"2022-05-04T10:44:09"
"2022-05-04T10:37:06"
CONTRIBUTOR
null
Adding a metric card template, based on ideas and edits from @sashavor and me, as well as on comments from @lhoestq and others (thank you!). All feedback is welcome, but I am especially curious about feedback in terms of: - things that should be included but aren't - things that are included but should be changed or removed - the instructions I included, and whether they should be added to, clarified, or deleted altogether
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3915/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3915/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3915", "html_url": "https://github.com/huggingface/datasets/pull/3915", "diff_url": "https://github.com/huggingface/datasets/pull/3915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3915.patch", "merged_at": "2022-05-04T10:37:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/3914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3914/comments
https://api.github.com/repos/huggingface/datasets/issues/3914/events
https://github.com/huggingface/datasets/pull/3914
1,168,777,880
PR_kwDODunzps40aq2r
3,914
Use templates for doc-building jobs
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-14T18:53:06"
"2022-03-17T15:02:59"
"2022-03-17T15:02:58"
MEMBER
null
This PR updates all doc-building related jobs to use the templates introduced in `doc-builder`. By defining them once there, we make sure every repo gets the latest fixes to the doc-building GitHub Actions :-) Note: all libraries must share the same Docker image for those doc-building jobs. For now, the one used (`huggingface/transformers-doc-builder`) contains all the extra steps required to install `datasets` for doc building (mainly libsndfile), but if in the future additional steps are necessary on top of `pip install -e .[dev]`, this Docker image will need to be updated with the extra deps.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3914/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3914", "html_url": "https://github.com/huggingface/datasets/pull/3914", "diff_url": "https://github.com/huggingface/datasets/pull/3914.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3914.patch", "merged_at": "2022-03-17T15:02:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/3913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3913/comments
https://api.github.com/repos/huggingface/datasets/issues/3913/events
https://github.com/huggingface/datasets/pull/3913
1,168,723,950
PR_kwDODunzps40afYJ
3,913
Deterministic split order in DatasetDict.map
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-14T17:58:37"
"2022-03-15T10:45:15"
"2022-03-15T10:45:15"
MEMBER
null
The order in which the splits are processed by `map` is not deterministic in `DatasetDict.map`. This can cause caching issues when the processing function is stateful and sensitive to the order in which examples are processed. Close https://github.com/huggingface/datasets/issues/3847
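To illustrate why this matters, here is a small example (not code from the PR; the dataset name is a placeholder) of a stateful processing function whose output depends on how many examples have already been seen across splits.

```python
from datasets import load_dataset

dataset_dict = load_dataset("some_dataset")  # placeholder name

# A stateful function: the ids it assigns depend on how many examples
# have already been processed, across all splits.
state = {"next_id": 0}

def assign_running_id(example):
    example["running_id"] = state["next_id"]
    state["next_id"] += 1
    return example

# If the splits are visited in a different order on a second run, each split
# receives different ids, so the cached results from the first run no longer match.
processed = dataset_dict.map(assign_running_id)
```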
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3913/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3913", "html_url": "https://github.com/huggingface/datasets/pull/3913", "diff_url": "https://github.com/huggingface/datasets/pull/3913.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3913.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3912
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3912/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3912/comments
https://api.github.com/repos/huggingface/datasets/issues/3912/events
https://github.com/huggingface/datasets/pull/3912
1,168,720,098
PR_kwDODunzps40aekr
3,912
add draft of registering function for pandas
{ "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/followers", "following_url": "https://api.github.com/users/lvwerra/following{/other_user}", "gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}", "starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions", "organizations_url": "https://api.github.com/users/lvwerra/orgs", "repos_url": "https://api.github.com/users/lvwerra/repos", "events_url": "https://api.github.com/users/lvwerra/events{/privacy}", "received_events_url": "https://api.github.com/users/lvwerra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-14T17:54:29"
"2023-01-24T12:57:35"
"2023-01-24T12:57:10"
MEMBER
null
This PR adds a register function for `pandas`. It allows pushing `DataFrame` objects directly to the Hub and, conversely, loading datasets from the Hub into a `DataFrame`. The motivation for this integration is to enable the vast number of `pandas` users to easily push `DataFrames` to the Hub. Here is an example: ```python import pandas as pd from datasets import register_pandas register_pandas() # push to hub df = pd.DataFrame.from_dict({"test": [1,2,3]}) df.push_to_hub("my_test") # load from hub df_retrieved = pd.DataFrame.load_from_hub("lvwerra/my_test") ``` It follows a similar philosophy to the `tqdm` [integration](https://github.com/tqdm/tqdm#pandas-integration). Also see [this issue](https://github.com/pandas-dev/pandas/issues/46000) on the `pandas` repository. This is just a rough draft of what such an integration could look like, but I would appreciate some feedback on this: is this something you would like to add to the library, and is this the way to go? cc @lhoestq @albertvillanova @julien-c
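For context, here is a rough sketch of how such a registration could be implemented on top of the existing `datasets` API. This is not the implementation from the PR, just one possible way to attach the two methods to `pd.DataFrame`; the wrapper names simply mirror the usage shown above.

```python
import pandas as pd

from datasets import Dataset, load_dataset


def register_pandas():
    """Attach thin Hub wrappers to pandas.DataFrame (illustrative sketch)."""

    def push_to_hub(self, repo_id, **kwargs):
        # Convert the DataFrame to a datasets.Dataset and push it to the Hub
        Dataset.from_pandas(self).push_to_hub(repo_id, **kwargs)

    def load_from_hub(repo_id, split="train", **kwargs):
        # Load a dataset from the Hub and convert it back to a DataFrame
        return load_dataset(repo_id, split=split, **kwargs).to_pandas()

    pd.DataFrame.push_to_hub = push_to_hub
    pd.DataFrame.load_from_hub = staticmethod(load_from_hub)
```

The actual PR may differ in naming and behavior; the design choice illustrated here is simply to monkey-patch `DataFrame` after an explicit opt-in call, in the spirit of the `tqdm` pandas integration mentioned above.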
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3912/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3912/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3912", "html_url": "https://github.com/huggingface/datasets/pull/3912", "diff_url": "https://github.com/huggingface/datasets/pull/3912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3912.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3911
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3911/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3911/comments
https://api.github.com/repos/huggingface/datasets/issues/3911/events
https://github.com/huggingface/datasets/pull/3911
1,168,652,374
PR_kwDODunzps40aQHz
3,911
Create README.md for CER metric
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2022-03-14T16:54:51"
"2022-03-17T17:49:40"
"2022-03-17T17:45:54"
CONTRIBUTOR
null
Initial proposal for a CER metric card. cc @patrickvonplaten - wdyt this time around? :smile:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3911/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3911/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3911", "html_url": "https://github.com/huggingface/datasets/pull/3911", "diff_url": "https://github.com/huggingface/datasets/pull/3911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3911.patch", "merged_at": "2022-03-17T17:45:54" }
true