url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1414/comments | https://api.github.com/repos/huggingface/datasets/issues/1414/events | https://github.com/huggingface/datasets/pull/1414 | 760,622,133 | MDExOlB1bGxSZXF1ZXN0NTM1NDIzODgy | 1,414 | Adding BioCreative II Gene Mention corpus | {
"avatar_url": "https://avatars.githubusercontent.com/u/10516432?v=4",
"events_url": "https://api.github.com/users/mahajandiwakar/events{/privacy}",
"followers_url": "https://api.github.com/users/mahajandiwakar/followers",
"following_url": "https://api.github.com/users/mahajandiwakar/following{/other_user}",
"gists_url": "https://api.github.com/users/mahajandiwakar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mahajandiwakar",
"id": 10516432,
"login": "mahajandiwakar",
"node_id": "MDQ6VXNlcjEwNTE2NDMy",
"organizations_url": "https://api.github.com/users/mahajandiwakar/orgs",
"received_events_url": "https://api.github.com/users/mahajandiwakar/received_events",
"repos_url": "https://api.github.com/users/mahajandiwakar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mahajandiwakar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahajandiwakar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mahajandiwakar"
} | [] | closed | false | null | [] | null | [] | "2020-12-09T19:49:28Z" | "2020-12-11T11:17:40Z" | "2020-12-11T11:17:40Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1414.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1414",
"merged_at": "2020-12-11T11:17:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1414.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1414"
} | Adding BioCreative II Gene Mention corpus | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1414/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1414/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1657/comments | https://api.github.com/repos/huggingface/datasets/issues/1657/events | https://github.com/huggingface/datasets/pull/1657 | 775,647,000 | MDExOlB1bGxSZXF1ZXN0NTQ2Mjg1NjU2 | 1,657 | mac_morpho dataset: add data splits info | {
"avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4",
"events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}",
"followers_url": "https://api.github.com/users/jonatasgrosman/followers",
"following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}",
"gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonatasgrosman",
"id": 5097052,
"login": "jonatasgrosman",
"node_id": "MDQ6VXNlcjUwOTcwNTI=",
"organizations_url": "https://api.github.com/users/jonatasgrosman/orgs",
"received_events_url": "https://api.github.com/users/jonatasgrosman/received_events",
"repos_url": "https://api.github.com/users/jonatasgrosman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonatasgrosman"
} | [] | closed | false | null | [] | null | [] | "2020-12-29T01:05:21Z" | "2020-12-30T16:51:24Z" | "2020-12-30T16:51:24Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1657.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1657",
"merged_at": "2020-12-30T16:51:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1657.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1657"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1657/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1657/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1338/comments | https://api.github.com/repos/huggingface/datasets/issues/1338/events | https://github.com/huggingface/datasets/pull/1338 | 759,725,770 | MDExOlB1bGxSZXF1ZXN0NTM0Njc5ODcz | 1,338 | Add GigaFren Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
} | [] | closed | false | null | [] | null | [
"@lhoestq fixed"
] | "2020-12-08T19:42:04Z" | "2020-12-14T10:03:47Z" | "2020-12-14T10:03:46Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1338.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1338",
"merged_at": "2020-12-14T10:03:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1338.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1338"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1338/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1338/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1470/comments | https://api.github.com/repos/huggingface/datasets/issues/1470/events | https://github.com/huggingface/datasets/pull/1470 | 761,791,065 | MDExOlB1bGxSZXF1ZXN0NTM2NDA2MjQx | 1,470 | Add wiki lingua dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7674948?v=4",
"events_url": "https://api.github.com/users/katnoria/events{/privacy}",
"followers_url": "https://api.github.com/users/katnoria/followers",
"following_url": "https://api.github.com/users/katnoria/following{/other_user}",
"gists_url": "https://api.github.com/users/katnoria/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/katnoria",
"id": 7674948,
"login": "katnoria",
"node_id": "MDQ6VXNlcjc2NzQ5NDg=",
"organizations_url": "https://api.github.com/users/katnoria/orgs",
"received_events_url": "https://api.github.com/users/katnoria/received_events",
"repos_url": "https://api.github.com/users/katnoria/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/katnoria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katnoria/subscriptions",
"type": "User",
"url": "https://api.github.com/users/katnoria"
} | [] | closed | false | null | [] | null | [
"it’s failing because of `RemoteDatasetTest.test_load_dataset_orange_sum`\r\nwhich i think is not the dataset you are doing a PR for. Try rebasing with:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push -u -f origin your_branch\r\n```",
"> it’s failing because of `RemoteDatasetTest.test_load_dataset_orange_sum`\r\n> which i think is not the dataset you are doing a PR for. Try rebasing with:\r\n> \r\n> ```\r\n> git fetch upstream\r\n> git rebase upstream/master\r\n> git push -u -f origin your_branch\r\n> ```\r\n\r\nThanks, my branch seems to be up to date. \r\n```Current branch add-wiki-lingua-dataset is up to date.```",
"Also where do the google drive urls come from ?",
"looks like this PR includes changes about many other files than the ones for wiki_lingua.\r\n\r\nCan you create another branch and another PR ?\r\n(or you can try to fix this branch with rebase and push force if you're familiar with it)",
"Thanks for fixing the dummy data and removing the glob call :) ",
"> looks like this PR includes changes about many other files than the ones for wiki_lingua.\r\n> \r\n> Can you create another branch and another PR ?\r\n> (or you can try to fix this branch with rebase and push force if you're familiar with it)\r\n\r\nEasier to create a new branch and submit, I have submitted a new PR #1582 ",
"Closing this one in favor of #1582 "
] | "2020-12-11T02:04:18Z" | "2020-12-16T15:27:13Z" | "2020-12-16T15:27:13Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1470",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1470"
} | Hello @lhoestq ,
I am opening a fresh pull request as advised in my original PR https://github.com/huggingface/datasets/pull/1308
Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1470/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1916/comments | https://api.github.com/repos/huggingface/datasets/issues/1916/events | https://github.com/huggingface/datasets/pull/1916 | 812,291,984 | MDExOlB1bGxSZXF1ZXN0NTc2NjgwNjY5 | 1,916 | Remove unused py_utils objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Hmmm this one broke master. I'm fixing it.\r\n\r\nMaybe because your branch was outdated ?",
"Sorry @lhoestq, I forgot to update the imports... :/",
"It's fine, the CI should have caught this tbh. Not sure why it did't fail"
] | "2021-02-19T19:51:25Z" | "2021-02-22T14:56:56Z" | "2021-02-22T13:32:49Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1916.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1916",
"merged_at": "2021-02-22T13:32:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1916.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1916"
} | Remove unused/unnecessary py_utils functions/classes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1916/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1916/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4017/comments | https://api.github.com/repos/huggingface/datasets/issues/4017/events | https://github.com/huggingface/datasets/pull/4017 | 1,180,595,160 | PR_kwDODunzps41Ad_L | 4,017 | Support streaming scan dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-25T10:11:28Z" | "2022-03-25T12:08:55Z" | "2022-03-25T12:03:52Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4017.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4017",
"merged_at": "2022-03-25T12:03:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4017.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4017"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4017/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4017/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3260/comments | https://api.github.com/repos/huggingface/datasets/issues/3260/events | https://github.com/huggingface/datasets/pull/3260 | 1,052,247,373 | PR_kwDODunzps4ueCIU | 3,260 | Fix ConnectionError in Scielo dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"The CI error is unrelated to the change."
] | "2021-11-12T18:02:37Z" | "2021-11-16T18:18:17Z" | "2021-11-16T17:55:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3260.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3260",
"merged_at": "2021-11-16T17:55:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3260.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3260"
} | This PR:
* allows 403 status code in HEAD requests to S3 buckets to fix the connection error in the Scielo dataset (instead of `url`, uses `response.url` to check the URL of the final endpoint)
* makes the Scielo dataset streamable
Fixes #3255. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3260/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3260/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5764/comments | https://api.github.com/repos/huggingface/datasets/issues/5764/events | https://github.com/huggingface/datasets/issues/5764 | 1,670,740,198 | I_kwDODunzps5jlXjm | 5,764 | ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4",
"events_url": "https://api.github.com/users/sauravtii/events{/privacy}",
"followers_url": "https://api.github.com/users/sauravtii/followers",
"following_url": "https://api.github.com/users/sauravtii/following{/other_user}",
"gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sauravtii",
"id": 109907638,
"login": "sauravtii",
"node_id": "U_kgDOBo0Otg",
"organizations_url": "https://api.github.com/users/sauravtii/orgs",
"received_events_url": "https://api.github.com/users/sauravtii/received_events",
"repos_url": "https://api.github.com/users/sauravtii/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sauravtii"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @sauravtii.\r\n\r\nUnfortunately, I'm not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"josianem/imdb\")\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25799\r\n })\r\n test: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25000\r\n })\r\n unsupervised: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 50000\r\n })\r\n})\r\n```\r\n\r\nCould you please retry to load the dataset? Maybe there was a temporary connection issue to Dropbox.",
"Thanks @albertvillanova. I am facing another issue now\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 738, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\nThis is my code\r\n\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\")\r\n```",
"Your connection didn't work and you got an empty dataset (`num_bytes=0, num_examples=0`):\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: \r\n[\r\n {\r\n 'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }\r\n]\r\n```\r\n\r\nCould you please try the link in your browser and see if it works? https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n- If it does not work, you should contact the author of the dataset in their Community tab (https://huggingface.co/datasets/josianem/imdb/discussions) and inform them, so that they can host their data elsewhere, for example on the Hugging Face Hub itself\r\n\r\nIf the link works, you should try to load the dataset but forcing the re-download of the data files (so that the cache is refreshed with the actual data file), by passing `download_mode=\"force_redownload\"`:\r\n```python\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```",
"After pasting the link in the browser, it did start the download so it seems that the link is working. But even after including the `download_mode` in my code I am facing the same issue:\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 704, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py\", line 79, in _split_generators\r\n archive = dl_manager.download(_DOWNLOAD_URL)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 197, in map_nested\r\n return function(data_struct)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 289, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 606, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n```\r\n\r\nMy code:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```",
"I have tried again to reproduce your issue without success: the dataset loads perfectly, both in my local machine and in a Colab notebook.\r\n- See: https://colab.research.google.com/drive/1dky3T0XGFuldggy22NNQQN-UqOFqvnuY?usp=sharing\r\n\r\nI think the cause maight be that you are using a very old version of `datasets`. Please, could you update it and retry?\r\n```\r\npip install -U datasets\r\n```",
"That worked!! Thanks @albertvillanova : )\r\n\r\n```\r\nDownloading builder script: 100%|███████| 4.20k/4.20k [00:00<00:00, 6.69MB/s]\r\nDownloading metadata: 100%|█████████████| 2.60k/2.60k [00:00<00:00, 3.41MB/s]\r\nDownloading readme: 100%|███████████████| 7.52k/7.52k [00:00<00:00, 12.6MB/s]\r\nDownloading and preparing dataset imdb/plain_text to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f...\r\nDownloading data: 100%|███████████████████| 301M/301M [01:32<00:00, 3.25MB/s]\r\nDataset imdb downloaded and prepared to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f. Subsequent calls will reuse this data.\r\n100%|█████████████████████████████████████████| 3/3 [00:00<00:00, 794.83it/s]\r\n```\r\n\r\nThe code I used:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n\r\n```\r\n\r\nBut when I remove `download_mode=\"force_redownload\"` I get the same error. Any guess on that?",
"That is because the cache got the \"empty\" download file the first time you tried and got the connection error.\r\n\r\nThen, once you no longer get the connection error, you need to refresh the cache by passing `download_mode=\"force_redownload\"`."
] | "2023-04-17T09:08:18Z" | "2023-04-18T07:18:20Z" | "2023-04-18T07:18:20Z" | NONE | null | null | null | ### Describe the bug
I want to use this (https://huggingface.co/datasets/josianem/imdb) dataset therefore I am trying to load it using the following code:
```
dataset = load_dataset("josianem/imdb")
```
The dataset is not getting loaded and gives the error message as the following:
```
Traceback (most recent call last):
File "sample.py", line 3, in <module>
dataset = load_dataset("josianem/imdb")
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 704, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py", line 79, in _split_generators
archive = dl_manager.download(_DOWNLOAD_URL)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
return function(data_struct)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 289, in cached_path
output_path = get_from_cache(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 606, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
```
### Steps to reproduce the bug
You can reproduce the error by using the following code:
```
from datasets import load_dataset, load_metric
dataset = load_dataset("josianem/imdb")
```
### Expected behavior
The dataset should get loaded (I am using this dataset for the first time so not much aware of the exact behavior).
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5764/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5764/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4041/comments | https://api.github.com/repos/huggingface/datasets/issues/4041/events | https://github.com/huggingface/datasets/issues/4041 | 1,183,599,461 | I_kwDODunzps5GjEtl | 4,041 | Add support for IIIF in datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davanstrien",
"id": 8995957,
"login": "davanstrien",
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davanstrien"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi! Thanks for the detailed analysis of adding IIIF support. I like the idea of \"using IIIF through datasets scripts\" due to its ease of use. Another approach that I like is yielding image ids and using the `piffle` library (which offers a bit more flexibility) + `map` to download + cache images. We can handle bad URLs in `map` by returning `None`. Plus, we can add a `Dataset Preprocessing` section with the code that explains this approach to the card of such datasets. WDYT?\r\n\r\n> currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models.\r\n\r\nThis is why (currently) adding a new feature type would be overkill, IMO.\r\n"
] | "2022-03-28T15:19:25Z" | "2022-04-05T18:20:53Z" | null | MEMBER | null | null | null | This is a feature request for support for IIIF in `datasets`. Apologies for the long issue. I have also used a different format to the usual feature request since I think that makes more sense but happy to use the standard template if preferred.
## What is [IIIF](https://iiif.io/)?
IIIF (International Image Interoperability Framework)
> is a set of open standards for delivering high-quality, attributed digital objects online at scale. It’s also an international community developing and implementing the IIIF APIs. IIIF is backed by a consortium of leading cultural institutions.
The tl;dr is that IIIF provides various specifications for implementing useful functionality for:
- Institutions to make available images for various use cases
- Users to have a consistent way of interacting/requesting these images
- For developers to have a common standard for developing tools for working with IIIF images that will work across all institutions that implement a particular IIIF standard (for example the image viewer for the BNF can also work for the Library of Congress if they both use IIIF).
Some institutions that various levels of support IIF include: The British Library, Internet Archive, Library of Congress, Wikidata. There are also many smaller institutions that have IIIF support. An incomplete list can be found here: https://iiif.io/guides/finding_resources/
## IIIF APIs
IIIF consists of a number of APIs which could be integrated with datasets. I think the most obvious candidate for inclusion would be the [Image API](https://iiif.io/api/image/3.0/)
### IIIF Image API
The Image API https://iiif.io/api/image/3.0/ is likely the most suitable first candidate for integration with datasets. The Image API offers a consistent protocol for requesting images via a URL:
```{scheme}://{server}{/prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}```
A concrete example of this:
```https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg```
As you can see the scheme offers a number of options that can be specified in the URL, for example, size. Using the example URL we return:
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/full/0/default.jpg)
We can change the size to request a size of 250 by 250, this is done by changing the size from `full` to `250,250` i.e. switching the URL to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg`
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/250,250/0/default.jpg)
We can also request the image with max width 250, max height 250 whilst maintaining the aspect ratio using `!w,h`. i.e. change the url to `https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg`
![](https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg)
A full overview of the options for size can be found here: https://iiif.io/api/image/3.0/#42-size
## Why would/could this be useful for datasets?
There are a few reasons why support for the IIIF Image API could be useful. Broadly the ability to have more control over how an image is returned from a server is useful for many ML workflows:
- images can be requested in the right size, this prevents having to download/stream large images when the actual desired size is much smaller
- can select a subset of an image: it is possible to select a sub-region of an image, this could be useful for example when you already have a bounding box for a subset of an image and then want to use this subset of an image for another task. For example, https://github.com/Living-with-machines/nnanno uses IIIF to request parts of a newspaper image that have been detected as 'photograph', 'illustration' etc for downstream use.
- options for quality, rotation, the format can all be encoded in the URL request.
These may become particularly useful when pre-training models on large image datasets where the cost of downloading images with 1600 pixel width when you actually want 240 has a larger impact.
## What could this look like in datasets?
I think there are various ways in which support for IIIF could potentially be included in `datasets`. These suggestions aren't fully fleshed out but hopefully, give a sense of possible approaches that match existing `datasets` methods in their approach.
### Use through datasets scripts
Loading images via URL is already supported. There are a few possible 'extras' that could be included when using IIIF. One option is to leverage the IIIF protocol in datasets scripts, i.e. the dataset script can expose the IIIF options via the dataset script:
```python
ds = load_dataset("iiif_dataset", image_size="250,250", fmt="jpg")
```
This is already possible. The approach to parsing the IIIF URLs would be left to the person creating the dataset script.
### Support through dataset scripts (with some datasets support)
This is similar to the above but `datasets` would offer some way of saying this is a iiif URL and then expose the options associated with IIIF images automatically. i.e. if you did something like:
```python
features = {"label": ClassLabel(names=['dog','cat']),
"url": datasets.IIIFURL()}
```
inside your loading script, you would automatically have exposed `size`, `fmt` etc. options when loading the dataset.
### Other possible integrations
Some other possible pseudocode ways that a user could interact with IIIF URLs:
The ability to cast to an `IIIFImage` feature type:
```
ds.cast_column('url', IIIFImage, download=False)
```
The ability to specify some options associated with IIIF urls.
```
ds = ds.set_iiif_options(column='url', size="250,250")
```
I think all of these would rely on having an `IIIFImage` feature type - this would be a little bit of a Frankenstein between a `string` and `datasets.Image`. I think most of the actual image behaviour would be exactly the same as `datasets.Image`, the difference would be that the underlying URL could be modified in various ways.
## prerequisite requirements
There are a few pre-requisites that I can anticipate. This doesn't cover a full implementation of IIIF support which would have different requirements depending on the approach taken to implementing IIIF. Some of these features would be useful independently of adding IIIF support:
### support for handling failed images loaded via a URL (or a specific IIIFImage feature).
Working with images via web requests will inevitably return the odd failed request. If these images are then requests and don't return it would be useful to have a `None` returned instead of an error. For example, when using `push_to_hub` `datasets` will try and include the image but currently fails with bad URLs.
```python
from datasets import Dataset
import datasets
urls = ['https://stacks.stanford.edu/image/iiif/hg676jb4964%2F0380_796-44/full/!250,250/0/default.jpg']*3
urls.append("badurl.com/image.jpg")
data = {"url":urls}
ds = Dataset.from_dict(data)
ds = ds.cast_column('url', datasets.Image())
ds[3]['url']
```
returns a `FileNotFoundError`, for streaming large datasets of images using their URLs it could be useful to have `None` returned instead. This has implications for the actual training loop i.e. you now need to somehow skip those examples because of this it might not be desirable to support this.
### Caching support
Since IIIF requests images via a URL it would be great to have a way of not requesting the images multiple times. This is tracked in https://github.com/huggingface/datasets/issues/3142 and I think this would also be very desirable to have here particularly as one of the primary use cases of IIIF may be to do unsupervised pre-training on large datasets of IIIF URLs.
### Support for Parsing IIIF URLs
This gets closer to the actual implementation. Here the requirement would be some way for `datasets` to parse a URL that the users specify is an IIIF URL. An example of a Python library that does this: https://github.com/Princeton-CDH/piffle. I also have a rough version that uses `dataclasses` which I can share.
## Why it might not be worthwhile/suitable for datasets
There are some reasons that this might not be worth implementing:
- currently, IIIF is mainly used by cultural heritage organizations (museums, archives etc.) The adoption of IIIF in this sector has been growing but it's possible that adoption won't be extended to other industries which may also be a source of image data for training ML models.
- It may end up being better to leave this to the user. It would for example be possible for someone to write map functions to change an IIIF URL to the correct size etc. Adding direct support for IIIF in datasets may potentially not be worth the trouble.
- The impact of different approaches to doing image scaling can impact the downstream model's performance, see: https://twitter.com/wightmanr/status/1479528581466243073?s=20. Since different IIIF image servers may implement different approaches to resizing images this could have a downstream impact on model performance. think this is something that could be flagged to the end-user in the documentation. This probably also falls into general "gotchas" that probably aren't the `datasets` libraries' role to protect users from.
Some of the requirements outlined above would be useful for images anyway. These could be implemented prior to a final decision about whether IIIF support could/should be added to datasets.
## Suggested next steps:
I realise this is a long and slightly open-ended issue. I am happy to clarify/answer questions on IIIF and possible integrations. If the prerequisite requirements seem worth exploring/are better explored in their own issues let me know and I can open new issues for those.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4041/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4041/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3438/comments | https://api.github.com/repos/huggingface/datasets/issues/3438/events | https://github.com/huggingface/datasets/pull/3438 | 1,081,302,203 | PR_kwDODunzps4v52Va | 3,438 | Update supported versions of Python in setup.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | "2021-12-15T17:30:12Z" | "2021-12-20T14:22:13Z" | "2021-12-20T14:22:12Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3438.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3438",
"merged_at": "2021-12-20T14:22:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3438.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3438"
} | Update the list of supported versions of Python in `setup.py` to keep the PyPI project description updated. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3438/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3438/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4631/comments | https://api.github.com/repos/huggingface/datasets/issues/4631/events | https://github.com/huggingface/datasets/pull/4631 | 1,293,545,900 | PR_kwDODunzps460Vy0 | 4,631 | Update WinoBias README | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-07-04T20:24:40Z" | "2022-07-07T13:23:32Z" | "2022-07-07T13:11:47Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4631.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4631",
"merged_at": "2022-07-07T13:11:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4631.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4631"
} | I'm adding some information about Winobias that I got from the paper :smile:
I think this makes it a bit clearer! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4631/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4631/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4690/comments | https://api.github.com/repos/huggingface/datasets/issues/4690/events | https://github.com/huggingface/datasets/pull/4690 | 1,306,321,975 | PR_kwDODunzps47fG6w | 4,690 | Refactor base extractors | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-07-15T17:47:48Z" | "2022-07-18T08:46:56Z" | "2022-07-18T08:34:49Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4690.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4690",
"merged_at": "2022-07-18T08:34:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4690.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4690"
} | This PR:
- Refactors base extractors as subclasses of `BaseExtractor`:
- this is an abstract class defining the interface with:
- `is_extractable`: abstract class method
- `extract`: abstract static method
- Implements abstract `MagicNumberBaseExtractor` (as subclass of `BaseExtractor`):
- this has a default implementation of `is_extractable`
- this improves performance (reducing the number of file reads) by allowing passing already read `magic_number`
- Refactors `Extractor`:
- reads magic number from file only once
This PR deprecates:
```python
is_extractable, extractor = self.extractor.is_extractable(input_path, return_extractor=True)
self.extractor.extract(input_path, output_path, extractor=extractor)
```
and uses more Pythonic instead:
```python
extractor_format = self.extractor.infer_extractor_format(input_path)
self.extractor.extract(input_path, output_path, extractor_format)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4690/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4690/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2620 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2620/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2620/comments | https://api.github.com/repos/huggingface/datasets/issues/2620/events | https://github.com/huggingface/datasets/pull/2620 | 940,893,389 | MDExOlB1bGxSZXF1ZXN0Njg2ODk3MDky | 2,620 | Add speech processing tasks | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [
"Are there any `task_categories:automatic-speech-recognition` dataset for which we should update the tags ?",
"> Are there any `task_categories:automatic-speech-recognition` dataset for which we should update the tags ?\r\n\r\nYes there's a few - I'll fix them tomorrow :)"
] | "2021-07-09T16:07:29Z" | "2021-07-12T18:32:59Z" | "2021-07-12T17:32:02Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2620.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2620",
"merged_at": "2021-07-12T17:32:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2620.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2620"
} | This PR replaces the `automatic-speech-recognition` task category with a broader `speech-processing` category.
The tasks associated with this category are derived from the [SUPERB benchmark](https://arxiv.org/abs/2105.01051), and ASR is included in this set. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2620/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2620/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4431/comments | https://api.github.com/repos/huggingface/datasets/issues/4431/events | https://github.com/huggingface/datasets/pull/4431 | 1,254,618,948 | PR_kwDODunzps44x5aG | 4,431 | Add personaldialog datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/silverriver",
"id": 2529049,
"login": "silverriver",
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"repos_url": "https://api.github.com/users/silverriver/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"type": "User",
"url": "https://api.github.com/users/silverriver"
} | [] | closed | false | null | [] | null | [
"These test errors are related to issue #4428 \r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"I only made a trivial modification in my commit https://github.com/huggingface/datasets/pull/4431/commits/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4434 for the about issue.",
"> Awesome thanks for adding this dataset :)\r\n> \r\n> I just have one comment about the licensing.\r\n> \r\n> Also it seems that you already have the dataset in https://huggingface.co/datasets/silver/personal_dialog, so it's unnecessary to add it here\r\n\r\nThank you very much for your comment.\r\n\r\nSo, should I close this PR?",
"Thanks for fixing the licensing section :)\r\n\r\n> So, should I close this PR?\r\n\r\nYes you can close this PR, it's better if your dataset is under your namespace at https://huggingface.co/datasets/silver/personal_dialog :)\r\n\r\nDon't forget to update the licensing section on https://huggingface.co/datasets/silver/personal_dialog as well"
] | "2022-06-01T01:20:40Z" | "2022-06-11T12:40:23Z" | "2022-06-11T12:31:16Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4431.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4431",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4431.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4431"
} | It seems that all tests have passed | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4431/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4431/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6304/comments | https://api.github.com/repos/huggingface/datasets/issues/6304/events | https://github.com/huggingface/datasets/pull/6304 | 1,945,913,521 | PR_kwDODunzps5c7-4q | 6,304 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/74114936?v=4",
"events_url": "https://api.github.com/users/smty2018/events{/privacy}",
"followers_url": "https://api.github.com/users/smty2018/followers",
"following_url": "https://api.github.com/users/smty2018/following{/other_user}",
"gists_url": "https://api.github.com/users/smty2018/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/smty2018",
"id": 74114936,
"login": "smty2018",
"node_id": "MDQ6VXNlcjc0MTE0OTM2",
"organizations_url": "https://api.github.com/users/smty2018/orgs",
"received_events_url": "https://api.github.com/users/smty2018/received_events",
"repos_url": "https://api.github.com/users/smty2018/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/smty2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smty2018/subscriptions",
"type": "User",
"url": "https://api.github.com/users/smty2018"
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006678 / 0.011353 (-0.004675) | 0.004013 / 0.011008 (-0.006995) | 0.083372 / 0.038508 (0.044864) | 0.070339 / 0.023109 (0.047230) | 0.339026 / 0.275898 (0.063128) | 0.370945 / 0.323480 (0.047465) | 0.004050 / 0.007986 (-0.003935) | 0.003283 / 0.004328 (-0.001046) | 0.064956 / 0.004250 (0.060705) | 0.055427 / 0.037052 (0.018374) | 0.341787 / 0.258489 (0.083297) | 0.385030 / 0.293841 (0.091189) | 0.031791 / 0.128546 (-0.096755) | 0.008511 / 0.075646 (-0.067135) | 0.286538 / 0.419271 (-0.132734) | 0.052893 / 0.043533 (0.009360) | 0.338522 / 0.255139 (0.083383) | 0.371821 / 0.283200 (0.088622) | 0.023731 / 0.141683 (-0.117951) | 1.485857 / 1.452155 (0.033702) | 1.515218 / 1.492716 (0.022502) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232798 / 0.018006 (0.214792) | 0.446783 / 0.000490 (0.446293) | 0.007395 / 0.000200 (0.007195) | 0.000385 / 0.000054 (0.000330) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028866 / 0.037411 (-0.008545) | 0.081653 / 0.014526 (0.067127) | 0.094457 / 0.176557 (-0.082099) | 0.151761 / 0.737135 (-0.585375) | 0.095579 / 0.296338 (-0.200760) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.379926 / 0.215209 (0.164717) | 3.801839 / 2.077655 (1.724184) | 1.830302 / 1.504120 (0.326182) | 1.686912 / 1.541195 (0.145717) | 1.803418 / 1.468490 
(0.334928) | 0.484431 / 4.584777 (-4.100346) | 3.592748 / 3.745712 (-0.152964) | 3.402578 / 5.269862 (-1.867284) | 2.043434 / 4.565676 (-2.522242) | 0.057274 / 0.424275 (-0.367001) | 0.007211 / 0.007607 (-0.000396) | 0.462611 / 0.226044 (0.236567) | 4.610703 / 2.268929 (2.341775) | 2.397668 / 55.444624 (-53.046956) | 2.149983 / 6.876477 (-4.726494) | 2.199100 / 2.142072 (0.057028) | 0.575883 / 4.805227 (-4.229344) | 0.133421 / 6.500664 (-6.367243) | 0.061168 / 0.075469 (-0.014301) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.246792 / 1.841788 (-0.594995) | 18.974385 / 8.074308 (10.900077) | 14.268859 / 10.191392 (4.077467) | 0.166340 / 0.680424 (-0.514084) | 0.018227 / 0.534201 (-0.515974) | 0.389646 / 0.579283 (-0.189637) | 0.418780 / 0.434364 (-0.015584) | 0.458063 / 0.540337 (-0.082275) | 0.635156 / 1.386936 (-0.751780) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006613 / 0.011353 (-0.004740) | 0.003977 / 0.011008 (-0.007031) | 0.064609 / 0.038508 (0.026101) | 0.070418 / 0.023109 (0.047308) | 0.395814 / 0.275898 (0.119916) | 0.424803 / 0.323480 (0.101323) | 0.005342 / 0.007986 (-0.002644) | 0.003252 / 0.004328 (-0.001076) | 0.065177 / 0.004250 (0.060927) | 0.055299 / 0.037052 (0.018247) | 0.403983 / 0.258489 (0.145494) | 0.438522 / 0.293841 (0.144681) | 0.032336 / 0.128546 (-0.096210) | 0.008524 / 0.075646 (-0.067122) | 0.071645 / 0.419271 (-0.347627) | 0.048137 / 0.043533 (0.004604) | 0.395170 / 0.255139 (0.140031) | 0.421727 / 0.283200 (0.138528) | 0.023028 / 0.141683 (-0.118655) | 1.500739 / 1.452155 (0.048584) | 1.568887 / 1.492716 (0.076170) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227542 / 0.018006 (0.209536) | 0.447882 / 0.000490 (0.447393) | 0.005416 / 0.000200 (0.005216) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032954 / 0.037411 (-0.004457) | 0.091994 / 0.014526 (0.077468) | 0.105957 / 0.176557 (-0.070600) | 0.158728 / 0.737135 (-0.578407) | 0.104734 / 0.296338 (-0.191605) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436275 / 0.215209 (0.221066) | 4.344864 / 2.077655 (2.267209) | 2.304949 / 1.504120 (0.800829) | 2.123963 / 1.541195 (0.582768) | 2.189099 / 1.468490 (0.720609) | 0.492662 / 4.584777 (-4.092115) | 3.633662 / 3.745712 (-0.112051) | 3.251338 / 5.269862 (-2.018524) | 2.061378 / 4.565676 (-2.504299) | 0.058100 / 0.424275 (-0.366175) | 0.007311 / 0.007607 (-0.000297) | 0.516227 / 0.226044 (0.290183) | 5.184228 / 2.268929 (2.915300) | 2.780343 / 55.444624 (-52.664281) | 2.423428 / 6.876477 (-4.453048) | 2.617371 / 2.142072 (0.475298) | 0.590455 / 4.805227 (-4.214772) | 0.131728 / 6.500664 (-6.368936) | 0.059994 / 0.075469 (-0.015475) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354920 / 1.841788 (-0.486868) | 19.427822 / 8.074308 (11.353514) | 15.289037 / 10.191392 (5.097645) | 0.170437 / 0.680424 (-0.509987) | 0.020242 / 0.534201 (-0.513959) | 0.394921 / 0.579283 (-0.184362) | 0.426447 / 0.434364 (-0.007917) | 0.468321 / 0.540337 (-0.072017) | 0.671052 / 1.386936 (-0.715884) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bade7af74437347a760830466eb74f7a8ce0d799 \"CML watermark\")\n"
] | "2023-10-16T19:10:39Z" | "2023-10-17T15:13:37Z" | "2023-10-17T15:04:52Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6304.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6304",
"merged_at": "2023-10-17T15:04:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6304.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6304"
} | Fixed typos in README and added punctuation marks
Tensorflow --> TensorFlow
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6304/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6304/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4544/comments | https://api.github.com/repos/huggingface/datasets/issues/4544/events | https://github.com/huggingface/datasets/issues/4544 | 1,280,500,340 | I_kwDODunzps5MUuJ0 | 4,544 | [CI] seqeval installation fails sometimes on python 3.6 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [] | "2022-06-22T16:35:23Z" | "2022-06-23T10:13:44Z" | "2022-06-23T10:13:44Z" | MEMBER | null | null | null | The CI sometimes fails to install seqeval, which causes the `seqeval` metric tests to fail.
The installation fails because of this error:
```
Collecting seqeval
Downloading seqeval-1.2.2.tar.gz (43 kB)
|███████▌ | 10 kB 42.1 MB/s eta 0:00:01
|███████████████ | 20 kB 53.3 MB/s eta 0:00:01
|██████████████████████▌ | 30 kB 67.2 MB/s eta 0:00:01
|██████████████████████████████ | 40 kB 76.1 MB/s eta 0:00:01
|████████████████████████████████| 43 kB 10.0 MB/s
Preparing metadata (setup.py) ... - error
ERROR: Command errored out with exit status 1:
command: /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pf54_vqy
cwd: /tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/
Complete output (22 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py", line 56, in <module>
'Programming Language :: Python :: Implementation :: PyPy'
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/__init__.py", line 143, in setup
return distutils.core.setup(**attrs)
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 442, in __init__
k: v for k, v in attrs.items()
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/dist.py", line 281, in __init__
self.finalize_options()
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 601, in finalize_options
ep.load()(self, ep.name, value)
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2346, in load
return self.resolve()
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2352, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/.eggs/setuptools_scm-7.0.2-py3.6.egg/setuptools_scm/__init__.py", line 5
from __future__ import annotations
^
SyntaxError: future feature annotations is not defined
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz#sha256=f28e97c3ab96d6fcd32b648f6438ff2e09cfba87f05939da9b3970713ec56e6f (from https://pypi.org/simple/seqeval/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
for example in https://app.circleci.com/pipelines/github/huggingface/datasets/12665/workflows/93878eb9-a923-4b35-b2e7-c5e9b22f10ad/jobs/75300
Here is a diff of the pip install logs until the error is reached: https://www.diffchecker.com/VkQDLeQT
This could be caused by the latest updates of setuptools-scm | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4544/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4544/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4865/comments | https://api.github.com/repos/huggingface/datasets/issues/4865/events | https://github.com/huggingface/datasets/issues/4865 | 1,344,552,626 | I_kwDODunzps5QJD6y | 4,865 | Dataset Viewer issue for MoritzLaurer/multilingual_nli | {
"avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4",
"events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}",
"followers_url": "https://api.github.com/users/MoritzLaurer/followers",
"following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MoritzLaurer",
"id": 41862082,
"login": "MoritzLaurer",
"node_id": "MDQ6VXNlcjQxODYyMDgy",
"organizations_url": "https://api.github.com/users/MoritzLaurer/orgs",
"received_events_url": "https://api.github.com/users/MoritzLaurer/received_events",
"repos_url": "https://api.github.com/users/MoritzLaurer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MoritzLaurer"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting @MoritzLaurer.\r\n\r\nCurrently, the dataset preview is working properly: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli\r\n\r\nPlease note that when a dataset is modified, it might take some time until the preview is completely updated.\r\n\r\n@severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?",
"Thanks for your response. You are right, its now working well. I had waited for 30 min or so and refreshed several times and thought there was some other error. Yeah, a different error message sounds like a good idea to avoid confusion. ",
"I'm closing this issue then.",
"> @severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?\r\n\r\nYes, it's a known issue, and we're about to ship a better version"
] | "2022-08-19T14:55:20Z" | "2022-08-22T14:47:14Z" | "2022-08-22T06:13:20Z" | NONE | null | null | null | ### Link
_No response_
### Description
I've just uploaded a new dataset to the hub and the viewer does not work for some reason, see here: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli
It displays the error:
```
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
Weirdly enough the dataviewer works for an earlier version of the same dataset. The only difference is that it is smaller, but I'm not aware of other changes I have made: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli_test
Do you know why the dataviewer is not working?
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4865/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4865/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1495/comments | https://api.github.com/repos/huggingface/datasets/issues/1495/events | https://github.com/huggingface/datasets/pull/1495 | 763,025,562 | MDExOlB1bGxSZXF1ZXN0NTM3NTE2ODE4 | 1,495 | Opus DGT added | {
"avatar_url": "https://avatars.githubusercontent.com/u/22396042?v=4",
"events_url": "https://api.github.com/users/rkc007/events{/privacy}",
"followers_url": "https://api.github.com/users/rkc007/followers",
"following_url": "https://api.github.com/users/rkc007/following{/other_user}",
"gists_url": "https://api.github.com/users/rkc007/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rkc007",
"id": 22396042,
"login": "rkc007",
"node_id": "MDQ6VXNlcjIyMzk2MDQy",
"organizations_url": "https://api.github.com/users/rkc007/orgs",
"received_events_url": "https://api.github.com/users/rkc007/received_events",
"repos_url": "https://api.github.com/users/rkc007/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rkc007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rkc007/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rkc007"
} | [] | closed | false | null | [] | null | [
"merging since the CI is fixed on master"
] | "2020-12-11T23:05:09Z" | "2020-12-17T14:38:41Z" | "2020-12-17T14:38:41Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1495.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1495",
"merged_at": "2020-12-17T14:38:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1495.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1495"
} | Dataset : http://opus.nlpl.eu/DGT.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1495/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5604/comments | https://api.github.com/repos/huggingface/datasets/issues/5604/events | https://github.com/huggingface/datasets/issues/5604 | 1,608,304,775 | I_kwDODunzps5f3MiH | 5,604 | Problems with downloading The Pile | {
"avatar_url": "https://avatars.githubusercontent.com/u/11065386?v=4",
"events_url": "https://api.github.com/users/sentialx/events{/privacy}",
"followers_url": "https://api.github.com/users/sentialx/followers",
"following_url": "https://api.github.com/users/sentialx/following{/other_user}",
"gists_url": "https://api.github.com/users/sentialx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sentialx",
"id": 11065386,
"login": "sentialx",
"node_id": "MDQ6VXNlcjExMDY1Mzg2",
"organizations_url": "https://api.github.com/users/sentialx/orgs",
"received_events_url": "https://api.github.com/users/sentialx/received_events",
"repos_url": "https://api.github.com/users/sentialx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sentialx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sentialx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sentialx"
} | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\n\r\nYou can specify `download_config=DownloadConfig(resume_download=True))` in `load_dataset` to resume the download when re-running the code after the timeout error:\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\ndataset = load_dataset('the_pile', split='train', cache_dir='F:\\datasets', download_config=DownloadConfig(resume_download=True))\r\n```\r\n\r\n",
"@mariosasko , I used your suggestion but its not saving anything , just stops and runs from the same point .\r\nbelow is the script to download and save on disk .\r\n\r\n```\r\nfrom datasets import load_dataset, DownloadConfig\r\n\r\n\r\n#load the Pile dataset from Hugging Face Datasets\r\n#dataset = load_dataset('the_pile')\r\ndataset = load_dataset('the_pile', split='train', cache_dir='datasets', download_config=DownloadConfig(resume_download=True))\r\n\r\n\r\n# save each file in the dataset to disk\r\nfor i, example in enumerate(dataset['train']):\r\n filename = f'pile_file_{i}.json'\r\n with open(filename, 'w') as f:\r\n f.write(str(example))\r\n\r\nprint(\"Finished saving Pile dataset files to disk.\")\r\n```\r\n",
"@mariosasko , it shows nothing in dataset folder\r\n\r\n```\r\n du -sh /mnt/nlp/hugging_face/*\r\n20K /mnt/nlp/hugging_face/datasets\r\n4.0K /mnt/nlp/hugging_face/download_pile.py\r\n```\r\n",
"@mariosasko \r\n\r\n```\r\nroot@d20f0ab8f4f8:/mnt/hugging_face# python3 download_pile.py\r\nNo config specified, defaulting to: the_pile/all\r\nDownloading and preparing dataset the_pile/all to /mnt/hugging_face/datasets/the_pile/all/0.0.0/6fadc480ecb32470826cbf5900a9558b791ce55d5e9a0fdc8ad653e7b64bb349...\r\nDownloading data files: 0%| | 0/3 [00:00<?, ?it/s]\r\n\r\n\r\n\r\n\r\n\r\nDownloading data: 70%|████████████████████████████████████████████████████████████████████▊ | 10.7G/15.2G [12:09<11:53, 6.36MB/s]\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 15.2G/15.2G [22:15<00:00, 7.25MB/s]\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████| 15.2G/15.2G [46:17<00:00, 5.48MB/s]\r\nDownloading data: 40%|██████████████████████████████████████▏ | 6.07G/15.3G [50:49<1:17:02, 1.99MB/s]\r\nTraceback (most recent call last):██████████████████████████▊ | 6.07G/15.3G [50:49<25:35:23, 99.9kB/s]\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 444, in _error_catcher\r\n yield\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 567, in read\r\n data = self._fp_read(amt) if not fp_closed else b\"\"\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 525, in _fp_read\r\n data = self._fp.read(chunk_amt)\r\n File \"/usr/lib/python3.8/http/client.py\", line 459, in read\r\n n = self.readinto(b)\r\n File \"/usr/lib/python3.8/http/client.py\", line 503, in readinto\r\n n = self.fp.readinto(b)\r\n File \"/usr/lib/python3.8/socket.py\", line 669, in readinto\r\n return self._sock.recv_into(b)\r\n File \"/usr/lib/python3.8/ssl.py\", line 1241, in recv_into\r\n return self.read(nbytes, buffer)\r\n File \"/usr/lib/python3.8/ssl.py\", line 1099, in read\r\n return self._sslobj.read(len, buffer)\r\nConnectionResetError: [Errno 104] Connection reset by peer\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/requests/models.py\", line 816, in generate\r\n yield from self.raw.stream(chunk_size, decode_content=True)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 628, in stream\r\n data = self.read(amt=amt, decode_content=decode_content)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 593, in read\r\n raise IncompleteRead(self._fp_bytes_read, self.length_remaining)\r\n File \"/usr/lib/python3.8/contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"/usr/local/lib/python3.8/dist-packages/urllib3/response.py\", line 461, in _error_catcher\r\n raise ProtocolError(\"Connection broken: %r\" % e, e)\r\nurllib3.exceptions.ProtocolError: (\"Connection broken: ConnectionResetError(104, 'Connection reset by peer')\", ConnectionResetError(104, 'Connection reset by peer'))\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"download_pile.py\", line 6, in <module>\r\n dataset = load_dataset('the_pile', split='train', cache_dir='datasets', download_config=DownloadConfig(resume_download=True))\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/load.py\", line 1782, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 872, in 
download_and_prepare\r\n self._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 1649, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 945, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/root/.cache/huggingface/modules/datasets_modules/datasets/the_pile/6fadc480ecb32470826cbf5900a9558b791ce55d5e9a0fdc8ad653e7b64bb349/the_pile.py\", line 192, in _split_generators\r\n data_dir = dl_manager.download(_DATA_URLS[self.config.name])\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py\", line 427, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 443, in map_nested\r\n mapped = [\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 444, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True, None))\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 363, in _single_map_nested\r\n mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 363, in <listcomp>\r\n mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py\", line 346, in _single_map_nested\r\n return function(data_struct)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/download/download_manager.py\", line 453, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 182, in cached_path\r\n output_path = get_from_cache(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 575, in get_from_cache\r\n http_get(\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/utils/file_utils.py\", line 379, in http_get\r\n for chunk in response.iter_content(chunk_size=1024):\r\n File \"/usr/local/lib/python3.8/dist-packages/requests/models.py\", line 818, in generate\r\n raise ChunkedEncodingError(e)\r\nrequests.exceptions.ChunkedEncodingError: (\"Connection broken: ConnectionResetError(104, 'Connection reset by peer')\", ConnectionResetError(104, 'Connection reset by peer'))\r\n```\r\n",
"Users with slow internet speed are doomed (4MB/s). The dataset downloads fine at minimum speed 10MB/s.\n\nAlso, when the train splits were generated and then I removed the downloads folder to save up disk space, it started redownloading the whole dataset. Is there any way to use the already generated splits instead?",
"@sentialx @mariosasko , anytime on my above script , am I downloading and saving dataset correctly . Please suggest :)",
"@sentialx probably worth noting that `resume_download=True` doesn't directly save the dataset to disk, but instead just helps in resuming the dataset resume on interruption as @mariosasko mentions. resolving resumptions after a crash is [an open issue](https://github.com/huggingface/datasets/issues/5380) at the moment."
] | "2023-03-03T09:52:08Z" | "2023-10-14T02:15:52Z" | "2023-03-24T12:44:25Z" | NONE | null | null | null | ### Describe the bug
The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error.
![image](https://user-images.githubusercontent.com/11065386/222687870-ec5fcb65-84e8-467d-9593-4ad7bdac4d50.png)
Here are the downloaded files:
![image](https://user-images.githubusercontent.com/11065386/222688200-454c2288-49e5-4682-96e6-1eb69aca0852.png)
They should be all 14GB like here (https://the-eye.eu/public/AI/pile/train/).
Alternatively, can I somehow download the files by myself and use the datasets preparing script?
### Steps to reproduce the bug
dataset = load_dataset('the_pile', split='train', cache_dir='F:\datasets')
### Expected behavior
The files should be downloaded correctly.
### Environment info
- `datasets` version: 2.10.1
- Platform: Windows-10-10.0.22623-SP0
- Python version: 3.10.5
- PyArrow version: 9.0.0
- Pandas version: 1.4.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5604/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5604/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5049/comments | https://api.github.com/repos/huggingface/datasets/issues/5049/events | https://github.com/huggingface/datasets/pull/5049 | 1,392,361,381 | PR_kwDODunzps4_7zOY | 5,049 | Add `kwargs` to `Dataset.from_generator` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-09-30T12:24:27Z" | "2022-10-03T11:00:11Z" | "2022-10-03T10:58:15Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5049.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5049",
"merged_at": "2022-10-03T10:58:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5049.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5049"
} | Add the `kwargs` param to `from_generator` to align it with the rest of the `from_` methods (this param allows passing custom `writer_batch_size` for instance). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5049/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5049/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4814/comments | https://api.github.com/repos/huggingface/datasets/issues/4814/events | https://github.com/huggingface/datasets/issues/4814 | 1,333,356,230 | I_kwDODunzps5PeWbG | 4,814 | Support CSV as metadata file format in AudioFolder/ImageFolder | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
}
] | null | [] | "2022-08-09T14:36:49Z" | "2022-08-31T11:59:08Z" | "2022-08-31T11:59:08Z" | CONTRIBUTOR | null | null | null | Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4814/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4814/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2014/comments | https://api.github.com/repos/huggingface/datasets/issues/2014/events | https://github.com/huggingface/datasets/pull/2014 | 825,916,531 | MDExOlB1bGxSZXF1ZXN0NTg3OTY1NDg3 | 2,014 | more explicit method parameters | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [] | closed | false | null | [] | null | [] | "2021-03-09T13:18:29Z" | "2021-03-10T10:08:37Z" | "2021-03-10T10:08:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2014.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2014",
"merged_at": "2021-03-10T10:08:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2014.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2014"
} | re: #2009
not super convinced this is better, and while I usually fight against kwargs, here it seems to me that it better conveys the relationship to the `_split_generator` method. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2014/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2014/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4226/comments | https://api.github.com/repos/huggingface/datasets/issues/4226/events | https://github.com/huggingface/datasets/pull/4226 | 1,216,331,073 | PR_kwDODunzps420kAv | 4,226 | Add pearsonr mc, update functionality to match the original docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emibaylor",
"id": 27527747,
"login": "emibaylor",
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emibaylor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"thank you @lhoestq!! :hugs: "
] | "2022-04-26T18:30:46Z" | "2022-05-03T17:09:24Z" | "2022-05-03T17:02:28Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4226.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4226",
"merged_at": "2022-05-03T17:02:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4226.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4226"
} | - adds pearsonr metric card
- adds ability to return p-value
- p-value was mentioned in the original docs as a return value, but there was no option to return it. I updated the _compute function slightly to have an option to return the p-value. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4226/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2611/comments | https://api.github.com/repos/huggingface/datasets/issues/2611/events | https://github.com/huggingface/datasets/pull/2611 | 940,307,053 | MDExOlB1bGxSZXF1ZXN0Njg2Mzk5MjU3 | 2,611 | More consistent naming | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | "2021-07-09T00:09:17Z" | "2021-07-13T17:13:19Z" | "2021-07-13T16:08:30Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2611.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2611",
"merged_at": "2021-07-13T16:08:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2611.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2611"
} | As per @stas00's suggestion in #2500, this PR inserts a space between the logo and the lib name (`🤗Datasets` -> `🤗 Datasets`) for consistency with the Transformers lib. Additionally, more consistent names are used for Datasets Hub, etc. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2611/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1895/comments | https://api.github.com/repos/huggingface/datasets/issues/1895/events | https://github.com/huggingface/datasets/issues/1895 | 809,630,271 | MDU6SXNzdWU4MDk2MzAyNzE= | 1,895 | Bug Report: timestamp[ns] not recognized | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/justin-yan",
"id": 7731709,
"login": "justin-yan",
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"organizations_url": "https://api.github.com/users/justin-yan/orgs",
"received_events_url": "https://api.github.com/users/justin-yan/received_events",
"repos_url": "https://api.github.com/users/justin-yan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/justin-yan"
} | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\n\r\nYou're right, `string_to_arrow` should be able to take `\"timestamp[ns]\"` as input and return the right pyarrow timestamp type.\r\nFeel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)\r\n\r\nTo give you more context:\r\n\r\nAs you may know we define the features types of a dataset using the `Features` object in combination with feature types like `Value`. For example\r\n```python\r\nfeatures = Features({\r\n \"age\": Value(\"int32\")\r\n})\r\n```\r\nHowever under the hood we are actually using pyarrow to store the data, and so we have a mapping between the feature types of `datasets` and the types of pyarrow.\r\n\r\nFor example, the `Value` feature types are created from a pyarrow type with `Value(str(pa_type))`.\r\nHowever it looks like the conversion back to a pyarrow type doesn't work with `\"timestamp[ns]\"`.\r\nThis is the `string_to_arrow` function you highlighted that does this conversion, so we should fix that.\r\n\r\n",
"Thanks for the clarification @lhoestq !\r\n\r\nThis may be a little bit of a stupid question, but I wanted to clarify one more thing before I took a stab at this:\r\n\r\nWhen the features get inferred, I believe they already have a pyarrow schema (https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L234).\r\n\r\nWe then convert it to a string (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) only to convert it back into the arrow type (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L143, and https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L35). Is there a reason for this round-trip?\r\n\r\nI'll open a PR later to add `timestamp` support to `string_to_arrow`, but I'd be curious to understand since it feels like there may be some opportunities to simplify!",
"The objective in terms of design is to make it easy to create Features in a pythonic way. So for example we use a string to define a Value type.\r\nThat's why when inferring the Features from an arrow schema we have to find the right string definitions for Value types. I guess we could also have a constructor `Value.from_arrow_type` to avoid recreating the arrow type, but this could create silent errors if the pyarrow type doesn't have a valid mapping with the string definition. The \"round-trip\" is used to enforce that the ground truth is the string definition, not the pyarrow type, and also as a sanity check.\r\n\r\nLet me know if that makes sense ",
"OK I think I understand now:\r\n\r\nFeatures are datasets' internal representation of a schema type, distinct from pyarrow's schema.\r\nValue() corresponds to pyarrow's \"primitive\" types (e.g. `int` or `string`, but not things like `list` or `dict`).\r\n`get_nested_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L698) and `generate_from_arrow_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) *should* be inverses of each other, and similarly, for the primitive values, `string_to_arrow()` and `Value.__call__` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L146) should be inverses of each other?\r\n\r\nThanks for taking the time to answer - I just wanted to make sure I understood before opening a PR so I'm not disrupting anything about how the codebase is expected to work!",
"Yes you're totally right :)"
] | "2021-02-16T20:38:04Z" | "2021-02-19T18:27:11Z" | "2021-02-19T18:27:11Z" | CONTRIBUTOR | null | null | null | Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```
The factory function seems to be just "timestamp": https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp
It seems like https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method.
Alternatively, if I'm using this incorrectly (e.g. is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well!
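For illustration only, here is a rough sketch of how such parsing could look; this is a hypothetical helper, not the actual `string_to_arrow` implementation:
```python
import re
import pyarrow as pa

def parse_type_string(type_str: str) -> pa.DataType:
    # Handle parameterized timestamps such as "timestamp[ns]" or "timestamp[us, tz=UTC]"
    match = re.match(r"^timestamp\[(\w+)(?:,\s*tz=(.+))?\]$", type_str)
    if match:
        unit, tz = match.groups()
        return pa.timestamp(unit, tz=tz)
    # Fall back to the simple zero-argument factories, e.g. pa.int32(), pa.string()
    return getattr(pa, type_str)()

print(parse_type_string("timestamp[ns]"))  # timestamp[ns]
print(parse_type_string("int32"))          # int32
```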
```
$ pip list # only the relevant libraries/versions
datasets 1.2.1
pandas 1.0.3
pyarrow 3.0.0
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1895/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1895/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2685/comments | https://api.github.com/repos/huggingface/datasets/issues/2685/events | https://github.com/huggingface/datasets/pull/2685 | 948,791,572 | MDExOlB1bGxSZXF1ZXN0NjkzNTgxNTk2 | 2,685 | Fix Blog Authorship Corpus dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Normally, I'm expecting errors from the validation of the README file... 😅 ",
"That is:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_cards.py::test_changed_dataset_card[blog_authorship_corpus]\r\n==== 1 failed, 3182 passed, 2763 skipped, 16 warnings in 201.23s (0:03:21) =====\r\n```",
"@lhoestq, apart from the dataset card, everything is OK with this PR: I tested it locally."
] | "2021-07-20T15:44:50Z" | "2021-07-21T13:11:58Z" | "2021-07-21T13:11:58Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2685.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2685",
"merged_at": "2021-07-21T13:11:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2685.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2685"
} | This PR:
- Update the JSON metadata file, which previously was raising a `NonMatchingSplitsSizesError`
- Fix the codec of the data files (`latin_1` instead of `utf-8`), which previously was raising `UnicodeDecodeError` for some files (illustrated in the sketch below)
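A minimal, self-contained illustration of the codec point (the sample file below is made up, not the actual loading-script code): text written as `latin_1` fails to decode as `utf-8`, but reads fine with the right codec.
```python
import pathlib
import tempfile

# Made-up sample file containing a non-ASCII latin_1 byte (0xe9 for "é")
sample = pathlib.Path(tempfile.gettempdir()) / "blog_sample.xml"
sample.write_text("résumé", encoding="latin_1")

try:
    sample.read_text(encoding="utf-8")
except UnicodeDecodeError as err:
    print("utf-8 fails:", err)

print("latin_1 works:", sample.read_text(encoding="latin_1"))
```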
Close #2679. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2685/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2685/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5847/comments | https://api.github.com/repos/huggingface/datasets/issues/5847/events | https://github.com/huggingface/datasets/issues/5847 | 1,706,616,634 | I_kwDODunzps5luOc6 | 5,847 | Streaming IterableDataset not working with translation pipeline | {
"avatar_url": "https://avatars.githubusercontent.com/u/826841?v=4",
"events_url": "https://api.github.com/users/jlquinn/events{/privacy}",
"followers_url": "https://api.github.com/users/jlquinn/followers",
"following_url": "https://api.github.com/users/jlquinn/following{/other_user}",
"gists_url": "https://api.github.com/users/jlquinn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jlquinn",
"id": 826841,
"login": "jlquinn",
"node_id": "MDQ6VXNlcjgyNjg0MQ==",
"organizations_url": "https://api.github.com/users/jlquinn/orgs",
"received_events_url": "https://api.github.com/users/jlquinn/received_events",
"repos_url": "https://api.github.com/users/jlquinn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jlquinn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlquinn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jlquinn"
} | [] | open | false | null | [] | null | [
"I wasn't sure to file this against transformers or datasets.",
"[`KeyDataset`](https://github.com/huggingface/transformers/blob/7f8b909189547944617741d8d3c6c84504701693/src/transformers/pipelines/pt_utils.py#L296) doesn't support iterable datasets, so you either need to implement a version that does (and also indexing nested (translation) fields):\r\n\r\n```python\r\nfrom torch.utils.data import Dataset, IterableDataset\r\n\r\ndef build_key_fetcher(key: str):\r\n def _key_fetcher(item):\r\n for sub_key in key.split(\".\"):\r\n item = item[sub_key]\r\n return item\r\n return _key_fetcher\r\n\r\nclass KeyDataset(Dataset):\r\n def __new__(cls, dataset: Dataset, key: str):\r\n cls = _KeyIterableDataset if isinstance(dataset, IterableDataset) else _KeyMapDataset\r\n self = object.__new__(cls)\r\n self.dataset = dataset\r\n self.key = key\r\n self._key_fetcher = build_key_fetcher(key)\r\n return self\r\n\r\nclass _KeyMapDataset(KeyDataset):\r\n def __getitem__(self, i):\r\n return self._key_fetcher(self.dataset[i])\r\n \r\n def __len__(self):\r\n return len(self.dataset)\r\n\r\n\r\nclass _KeyIterableDataset(KeyDataset):\r\n def __iter__(self):\r\n for ex in self.dataset:\r\n yield self._key_fetcher(ex)\r\n\r\nks = KeyDataset(ds, \"translation.en\")\r\n```\r\n\r\nor use `IterableDataset`'s `map`:\r\n```python\r\ndef fetch_en_translation(ex):\r\n return {\"en\": ex[\"translation\"][\"en\"]}\r\nks = ds.map(fetch_en_translation, remove_columns=ds.column_names) \r\n```\r\n\r\ncc @sgugger: Perhaps the `KeyDataset` + PyTorch `IterableDataset` case should be supported by Transformers",
"@mariosasko The map snippet didn't quite work, but gave me enough of a clue to get it working. The following snippet does work:\r\n```\r\ndef en_translation(x):\r\n return {\"en\":x['translation']['en']}\r\nks = ds.map(en_translation, remove_columns=['translation'])\r\ntest=[]\r\nfor x in iter(ks):\r\n test.append(x['en'])\r\nxx= mt(test)\r\nfor x in xx:\r\n print(x)\r\n```\r\n\r\nI tried just returning `x['translation']['en`]` in the helper function instead of the dict, but that didn't give me an iterator over strings that pipeline would work with either.\r\n\r\n\r\nThe snippet as is gives the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/pdb.py\", line 1704, in main\r\n pdb._runscript(mainpyfile)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/pdb.py\", line 1573, in _runscript\r\n self.run(statement)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/bdb.py\", line 580, in run\r\n exec(cmd, globals, locals)\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/jlquinn/models/hf/ende.t5.pipe.py\", line 1, in <module>\r\n from transformers import pipeline\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 335, in __call__\r\n return super().__call__(*args, **kwargs)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 138, in __call__\r\n result = super().__call__(*args, **kwargs)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 1027, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/base.py\", line 1033, in run_single\r\n model_inputs = self.preprocess(inputs, **preprocess_params)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 287, in preprocess\r\n return super()._parse_and_tokenize(*args, truncation=truncation)\r\n File \"/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/text2text_generation.py\", line 100, in _parse_and_tokenize\r\n raise ValueError(\r\nValueError: `args[0]`: <datasets.iterable_dataset.IterableDataset object at 0x7f5fd38ef1c0> have the wrong format. The should be either of type `str` or type `list`\r\nUncaught exception. Entering post mortem debugging\r\nRunning 'cont' or 'step' will restart the program\r\n```\r\n",
"So perhaps there's no bug exactly, but I would love to see two things: 1) improve the documentation to better understand what's really getting returned. 2) update the example provided of using transformer pipeline with a dataset to include the oddball case that translation appears to be.",
"cc @Narsil ",
"Hi,\r\n\r\nfor the original snippet, the issue is that `streaming` datasets are not countable (they have no len) and therefore `KeyDataset` cannot work with them ( KeyDataset is a dataset and therefore requires a length).\r\n\r\nI modified slightly the original snippet to make it work:\r\n\r\n```python\r\nfrom transformers import pipeline\r\nfrom transformers.pipelines.pt_utils import KeyDataset\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(path=\"wmt14\", name=\"fr-en\", split=\"test\", streaming=True)\r\nbs = 1\r\nmt = pipeline(\r\n \"translation_en_to_fr\", model=\"hf-internal-testing/tiny-random-T5ForConditionalGeneration\", batch_size=bs\r\n)\r\n\r\n\r\ndef ks(ds):\r\n for item in ds:\r\n yield item[\"translation\"][\"en\"]\r\n\r\n\r\n# print(f\"{ks}\")\r\nxx = mt(ks(ds))\r\nfor x in xx:\r\n print(x)\r\n```\r\n\r\nThis is what the first example in the docs suggests to use (as it's the most flexible): https://huggingface.co/docs/transformers/v4.29.1/en/pipeline_tutorial#using-pipelines-on-a-dataset\r\n\r\n`KeyDataset` really exists only to get a `sized` dataset to work nicer with `tqdm` for instance.\r\n\r\n@sgugger should we update the docs to remove `KeyDataset` entirely ? (We can add a note to pass manually the length of the data to tqdm so that the progress bar option can still be easy to use ?)\r\n",
"Maybe moving `KeyDataset` later on in the guide and specify it's mostly for streaming then? Or is it also necessary for batch_size>1 (which is what the current doc implies)?",
"Hmm\r\n\r\nIterator (`yield`) :\r\n- Not countable\r\n- Super flexible\r\n- Cannot use `num_workers>1` (threading requires indexing at the correct location, iterators require to iterate in order,so each thread would iterate over the full thing being genuinely a bad idea)\r\n- Can batch\r\n- tqdm doesn't show a nice progress bar (it has no total)\r\n\r\nKeyDataset (Or any PyTorch like Dataset returning the correct object for the pipeline):\r\n- Countable\r\n- Less flexible (not applicable to datasets with streaming), can only work on single keys. But should be easy to read and write your own (like @mariosasko did)\r\n- Works with `num_workers > 1` (Every worker can fetch exactly what's needed)\r\n- Can batch \r\n- tqdm shows a nice progress bar\r\n\r\nIn the docs, if we update all the examples to use iterators, and include an example with\r\n\r\n```\r\nfor item in tqdm.tqdm(pipe(iterator(), total=len(dataset))))\r\n```\r\n\r\nWe can save the biggest feature that doesn't work out of the box with iterators which is the tqdm progress bar.\r\n\r\n`num_workers>1` we can mention it, but it tends to be an issues only on CPU intensive loads, like image (and maybe audio)\r\n"
] | "2023-05-11T21:52:38Z" | "2023-05-16T15:59:55Z" | null | NONE | null | null | null | ### Describe the bug
I'm trying to use a streaming dataset for translation inference to avoid downloading the training data.
I'm using a pipeline and a dataset, and following the guidance in the tutorial.
Instead, I get an exception that `IterableDataset` has no `len()`.
### Steps to reproduce the bug
CODE:
```
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset
ds = load_dataset(path="wmt14", name="fr-en", split="test", streaming=True)
bs=1
mt = pipeline("translation_en_to_fr", model="t5-base", batch_size=bs)
#print(mt("hello")) THIS WORKS
ks = KeyDataset(ds, "translation")
print(f"{ks}")
xx= mt(ks)
for x in xx:
print(x)
```
RUN:
```
(watnlp) [jlquinn@bertdev01 hf]$ python ende.t5.pipe.py
2023-05-11 16:48:08.817572: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-05-11 16:48:08.821388: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-05-11 16:48:08.821407: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
<transformers.pipelines.pt_utils.KeyDataset object at 0x7f61ed5da9d0>
Traceback (most recent call last):
File "/home/jlquinn/models/hf/ende.t5.pipe.py", line 11, in <module>
for x in xx:
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 111, in __next__
item = next(self.iterator)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 111, in __next__
item = next(self.iterator)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 720, in _next_data
index = self._next_index() # may raise StopIteration
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 671, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 247, in __iter__
for idx in self.sampler:
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 76, in __iter__
return iter(range(len(self.data_source)))
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 13, in __len__
return len(self.dataset)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 289, in __len__
return len(self.dataset)
TypeError: object of type 'IterableDataset' has no len()
```
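For reference, here is a sketch of a workaround in the spirit of the generator-based approach suggested in the comments: feed the pipeline a plain generator instead of a `KeyDataset`, since pipelines accept iterators and no `len()` is needed.
```python
from transformers import pipeline
from datasets import load_dataset

ds = load_dataset(path="wmt14", name="fr-en", split="test", streaming=True)
mt = pipeline("translation_en_to_fr", model="t5-base", batch_size=1)

def english_sentences(dataset):
    # plain generator: works with streaming datasets because no __len__ is required
    for example in dataset:
        yield example["translation"]["en"]

for out in mt(english_sentences(ds)):
    print(out)
```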
### Expected behavior
I'm expecting French translations of the English test set to be printed.
### Environment info
Run on CPU with no GPU.
RHEL 8.7 x86_64
python 3.9.0
transformers 4.17.0
datasets 2.0.0
tokenizers 0.12.1
```
(watnlp) [jlquinn@bertdev01 hf]$ datasets-cli env
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.0
- PyArrow version: 8.0.0
- Pandas version: 1.4.4
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5847/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5847/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4445/comments | https://api.github.com/repos/huggingface/datasets/issues/4445/events | https://github.com/huggingface/datasets/pull/4445 | 1,259,947,568 | PR_kwDODunzps45EjtA | 4,445 | Fix missing args in docstring of load_dataset_builder | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-06-03T13:55:50Z" | "2022-06-03T14:35:32Z" | "2022-06-03T14:27:09Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4445.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4445",
"merged_at": "2022-06-03T14:27:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4445.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4445"
} | Currently, the docstring of `load_dataset_builder` only documents its first parameter, `path` (none of the others):
- https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/loading_methods#datasets.load_dataset_builder.path | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4445/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4445/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5636/comments | https://api.github.com/repos/huggingface/datasets/issues/5636/events | https://github.com/huggingface/datasets/pull/5636 | 1,623,721,577 | PR_kwDODunzps5MAunR | 5,636 | Fix CI: ignore C901 ("some_func" is too complex) in `ruff` | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006529 / 0.011353 (-0.004824) | 0.004527 / 0.011008 (-0.006481) | 0.098051 / 0.038508 (0.059543) | 0.028058 / 0.023109 (0.004949) | 0.368543 / 0.275898 (0.092645) | 0.397126 / 0.323480 (0.073646) | 0.005072 / 0.007986 (-0.002913) | 0.003377 / 0.004328 (-0.000952) | 0.076867 / 0.004250 (0.072617) | 0.040121 / 0.037052 (0.003069) | 0.373422 / 0.258489 (0.114933) | 0.403969 / 0.293841 (0.110128) | 0.031485 / 0.128546 (-0.097061) | 0.011673 / 0.075646 (-0.063973) | 0.321837 / 0.419271 (-0.097434) | 0.042828 / 0.043533 (-0.000704) | 0.370391 / 0.255139 (0.115252) | 0.391737 / 0.283200 (0.108538) | 0.084764 / 0.141683 (-0.056919) | 1.463114 / 1.452155 (0.010959) | 1.527042 / 1.492716 (0.034325) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200964 / 0.018006 (0.182958) | 0.403967 / 0.000490 (0.403477) | 0.002439 / 0.000200 (0.002239) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023531 / 0.037411 (-0.013880) | 0.097424 / 0.014526 (0.082899) | 0.104854 / 0.176557 (-0.071703) | 0.165682 / 0.737135 (-0.571453) | 0.109416 / 0.296338 (-0.186922) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431041 / 0.215209 (0.215832) | 4.326039 / 2.077655 (2.248384) | 2.085123 / 1.504120 (0.581003) | 1.922720 / 1.541195 (0.381525) | 2.006608 / 1.468490 
(0.538118) | 0.703348 / 4.584777 (-3.881428) | 3.441516 / 3.745712 (-0.304196) | 1.875244 / 5.269862 (-3.394618) | 1.181341 / 4.565676 (-3.384336) | 0.083442 / 0.424275 (-0.340833) | 0.012966 / 0.007607 (0.005359) | 0.536047 / 0.226044 (0.310002) | 5.354856 / 2.268929 (3.085927) | 2.451064 / 55.444624 (-52.993560) | 2.076110 / 6.876477 (-4.800367) | 2.196507 / 2.142072 (0.054435) | 0.811196 / 4.805227 (-3.994032) | 0.152547 / 6.500664 (-6.348118) | 0.067978 / 0.075469 (-0.007491) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196169 / 1.841788 (-0.645618) | 13.697234 / 8.074308 (5.622926) | 13.966652 / 10.191392 (3.775260) | 0.143735 / 0.680424 (-0.536688) | 0.016484 / 0.534201 (-0.517717) | 0.382349 / 0.579283 (-0.196934) | 0.401507 / 0.434364 (-0.032857) | 0.447297 / 0.540337 (-0.093041) | 0.529779 / 1.386936 (-0.857157) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006698 / 0.011353 (-0.004655) | 0.004608 / 0.011008 (-0.006400) | 0.076220 / 0.038508 (0.037712) | 0.027340 / 0.023109 (0.004231) | 0.344095 / 0.275898 (0.068197) | 0.374715 / 0.323480 (0.051235) | 0.004883 / 0.007986 (-0.003102) | 0.004658 / 0.004328 (0.000330) | 0.075381 / 0.004250 (0.071130) | 0.036099 / 0.037052 (-0.000953) | 0.340382 / 0.258489 (0.081893) | 0.383488 / 0.293841 (0.089647) | 0.031534 / 0.128546 (-0.097012) | 0.011735 / 0.075646 (-0.063912) | 0.085895 / 0.419271 (-0.333377) | 0.042226 / 0.043533 (-0.001306) | 0.340301 / 0.255139 (0.085162) | 0.366079 / 0.283200 (0.082879) | 0.088828 / 0.141683 (-0.052854) | 1.487880 / 1.452155 (0.035725) | 1.561318 / 1.492716 (0.068601) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226366 / 0.018006 (0.208360) | 0.408934 / 0.000490 (0.408444) | 0.000396 / 0.000200 (0.000196) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024521 / 0.037411 (-0.012891) | 0.100167 / 0.014526 (0.085641) | 0.106480 / 0.176557 (-0.070077) | 0.156377 / 0.737135 (-0.580758) | 0.111709 / 0.296338 (-0.184630) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436138 / 0.215209 (0.220928) | 4.370919 / 2.077655 (2.293265) | 2.066402 / 1.504120 (0.562282) | 1.862157 / 1.541195 (0.320962) | 1.920701 / 1.468490 (0.452211) | 0.695517 / 4.584777 (-3.889260) | 3.435558 / 3.745712 (-0.310154) | 1.864000 / 5.269862 (-3.405861) | 1.164134 / 4.565676 (-3.401543) | 0.083006 / 0.424275 (-0.341269) | 0.012751 / 0.007607 (0.005144) | 0.535405 / 0.226044 (0.309360) | 5.368530 / 2.268929 (3.099602) | 2.494197 / 55.444624 (-52.950427) | 2.161370 / 6.876477 (-4.715107) | 2.180345 / 2.142072 (0.038272) | 0.808076 / 4.805227 (-3.997151) | 0.151891 / 6.500664 (-6.348773) | 0.067643 / 0.075469 (-0.007826) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334245 / 1.841788 (-0.507543) | 14.112805 / 8.074308 (6.038497) | 14.152303 / 10.191392 (3.960911) | 0.153492 / 0.680424 (-0.526932) | 0.016542 / 0.534201 (-0.517659) | 0.376013 / 0.579283 (-0.203270) | 0.386528 / 0.434364 (-0.047836) | 0.436461 / 0.540337 (-0.103876) | 0.519278 / 1.386936 (-0.867658) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ce1d1076fc55ac49277398304e551f0b56c3c9e2 \"CML watermark\")\n"
] | "2023-03-14T15:29:11Z" | "2023-03-14T16:37:06Z" | "2023-03-14T16:29:52Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5636.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5636",
"merged_at": "2023-03-14T16:29:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5636.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5636"
} | I don't know if I should have added this ignore to `ruff` too, but I added it :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5636/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5636/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2319/comments | https://api.github.com/repos/huggingface/datasets/issues/2319/events | https://github.com/huggingface/datasets/issues/2319 | 876,251,376 | MDU6SXNzdWU4NzYyNTEzNzY= | 2,319 | UnicodeDecodeError for OSCAR (Afrikaans) | {
"avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4",
"events_url": "https://api.github.com/users/sgraaf/events{/privacy}",
"followers_url": "https://api.github.com/users/sgraaf/followers",
"following_url": "https://api.github.com/users/sgraaf/following{/other_user}",
"gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgraaf",
"id": 8904453,
"login": "sgraaf",
"node_id": "MDQ6VXNlcjg5MDQ0NTM=",
"organizations_url": "https://api.github.com/users/sgraaf/orgs",
"received_events_url": "https://api.github.com/users/sgraaf/received_events",
"repos_url": "https://api.github.com/users/sgraaf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgraaf"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @sgraaf.\r\n\r\nI am going to have a look at it. \r\n\r\nI guess the expected codec is \"UTF-8\". Normally, when no explicitly codec is passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machine, the default codec is `cp1252`, which causes the problem.",
"Awesome, thank you. 😃 ",
"@sgraaf, I have just merged the fix in the master branch.\r\n\r\nYou can either:\r\n- install `datasets` from source code\r\n- wait until we make the next release of `datasets`\r\n- set the `utf-8` codec as your default instead of `cp1252`. This can be done by activating the Python [UTF-8 mode](https://www.python.org/dev/peps/pep-0540) either by passing the command-line option `-X utf8` or by setting the environment variable `PYTHONUTF8=1`."
] | "2021-05-05T09:22:52Z" | "2021-05-05T10:57:31Z" | "2021-05-05T10:50:55Z" | NONE | null | null | null | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```
## Expected results
Anything but an error, really.
## Actual results
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
Downloading: 14.7kB [00:00, 4.91MB/s]
Downloading: 3.07MB [00:00, 32.6MB/s]
Downloading and preparing dataset oscar/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\Users\sgraaf\.cache\huggingface\datasets\oscar\unshuffled_deduplicated_af\1.0.0\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464...
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81.0/81.0 [00:00<00:00, 40.5kB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 66.0M/66.0M [00:18<00:00, 3.50MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\load.py", line 745, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 574, in download_and_prepare
self._download_and_prepare(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 979, in _prepare_split
for key, record in utils.tqdm(
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1133, in __iter__
for obj in iterable:
File "C:\Users\sgraaf\.cache\huggingface\modules\datasets_modules\datasets\oscar\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\oscar.py", line 359, in _generate_examples
for line in f:
File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to <undefined>
```
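For context, the decode fails because the platform-default codec (`cp1252` on Windows) is used to read the downloaded files; below is a minimal, self-contained illustration of reading UTF-8 data with an explicit codec (the sample file is made up, not the actual `oscar.py` code):
```python
import pathlib
import tempfile

# Write a small Afrikaans sample containing a non-ASCII character as UTF-8,
# then read it back with an explicit encoding so the platform default is never used.
sample = pathlib.Path(tempfile.gettempdir()) / "oscar_sample.txt"
sample.write_text("voël\n", encoding="utf-8")

with open(sample, encoding="utf-8") as f:  # explicit utf-8 instead of cp1252
    print(f.read())
```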
## Versions
Paste the output of the following code:
```python
import datasets
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
- Datasets: 1.6.2
- Python: 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)]
- Platform: Windows-10-10.0.19041-SP0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2319/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2319/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3936/comments | https://api.github.com/repos/huggingface/datasets/issues/3936/events | https://github.com/huggingface/datasets/pull/3936 | 1,170,713,473 | PR_kwDODunzps40hE-P | 3,936 | Fix Wikipedia version and re-add tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3936). All of your documentation changes will be reflected on that endpoint."
] | "2022-03-16T08:48:04Z" | "2022-03-16T17:04:07Z" | "2022-03-16T17:04:05Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3936.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3936",
"merged_at": "2022-03-16T17:04:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3936.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3936"
} | To keep backward compatibility when loading with the "wikipedia" dataset ID (https://huggingface.co/datasets/wikipedia), we have created the pre-processed data for the same languages we were offering before, but with the updated date "20220301":
- de
- en
- fr
- frr
- it
- simple
These pre-processed data can be accessed, e.g.:
```python
ds = load_dataset("wikipedia", "20220301.frr", split="train")
```
The next step will be to offer the pre-processed data for many other languages, to be loaded using "wikimedia/wikipedia": https://huggingface.co/datasets/wikimedia/wikipedia | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3936/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3485/comments | https://api.github.com/repos/huggingface/datasets/issues/3485/events | https://github.com/huggingface/datasets/issues/3485 | 1,089,027,581 | I_kwDODunzps5A6T39 | 3,485 | skip columns which cannot be set to a specific format when using set_format | {
"avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4",
"events_url": "https://api.github.com/users/tshu-w/events{/privacy}",
"followers_url": "https://api.github.com/users/tshu-w/followers",
"following_url": "https://api.github.com/users/tshu-w/following{/other_user}",
"gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tshu-w",
"id": 13161779,
"login": "tshu-w",
"node_id": "MDQ6VXNlcjEzMTYxNzc5",
"organizations_url": "https://api.github.com/users/tshu-w/orgs",
"received_events_url": "https://api.github.com/users/tshu-w/received_events",
"repos_url": "https://api.github.com/users/tshu-w/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tshu-w"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"You can add columns that you wish to set into `torch` format using `dataset.set_format(\"torch\", ['id', 'abc'])` so that input batch of the transform only contains those columns",
"Sorry, I miss `output_all_columns` args and thought after `dataset.set_format(\"torch\", columns=columns)` I can only get specific columns I assigned."
] | "2021-12-27T07:19:55Z" | "2021-12-27T09:07:07Z" | "2021-12-27T09:07:07Z" | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns.
**Describe the solution you'd like**
Skip columns which cannot be set to the specified format in `set_format`, instead of raising an error.
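For illustration, a small sketch of the workaround mentioned in the comments: format only the selected columns and keep the remaining columns as plain Python objects via `output_all_columns=True` (the toy dataset below is made up).
```python
from datasets import Dataset

ds = Dataset.from_dict({"id": ["a", "b"], "value": [1, 2]})
# Only "value" is returned as a torch tensor; "id" stays a plain string.
ds.set_format("torch", columns=["value"], output_all_columns=True)
print(ds[0])
```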
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3485/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3485/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4394/comments | https://api.github.com/repos/huggingface/datasets/issues/4394/events | https://github.com/huggingface/datasets/issues/4394 | 1,245,221,657 | I_kwDODunzps5KOJMZ | 4,394 | trainer became extremely slow after reloading dataset with `load_from_disk` | {
"avatar_url": "https://avatars.githubusercontent.com/u/50416856?v=4",
"events_url": "https://api.github.com/users/conan1024hao/events{/privacy}",
"followers_url": "https://api.github.com/users/conan1024hao/followers",
"following_url": "https://api.github.com/users/conan1024hao/following{/other_user}",
"gists_url": "https://api.github.com/users/conan1024hao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/conan1024hao",
"id": 50416856,
"login": "conan1024hao",
"node_id": "MDQ6VXNlcjUwNDE2ODU2",
"organizations_url": "https://api.github.com/users/conan1024hao/orgs",
"received_events_url": "https://api.github.com/users/conan1024hao/received_events",
"repos_url": "https://api.github.com/users/conan1024hao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/conan1024hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conan1024hao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/conan1024hao"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"I tried to make the dataset much more smaller (100000 rows) , then the speed became `33.88it/s` from`8.62s/it`. It's nearly 200 times... Do you have any idea? Thank you!",
"Similar issue: https://github.com/huggingface/transformers/issues/8818\r\n\r\nI changed `RandomSampler` to `SequentialSampler` in the `trainer.py`, but the speed didn't become faster.",
"I changed\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"/pathto/dataset\"\r\n )\r\n```\r\nto\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"/pathto/dataset\", keep_in_memory=True\r\n )\r\n```\r\nand obtained normal speed. It's seems that the problem is on the os's speed limit.",
"Hi ! Currently `save_to_disk` saves one big Arrow file, which causes some slow downs. This has been discussed in #3735 and we'll implement sharding pretty soon to solve this\r\n\r\nFor now you can try splitting and saving your dataset in several Arrow files. Then you can load them one by one and use `concatenate_datasets` to have your big dataset again and hopefully with a better speed"
] | "2022-05-23T14:04:37Z" | "2022-06-06T16:08:01Z" | null | NONE | null | null | null | ## Describe the bug
Due to a memory problem, I need to save my tokenized datasets locally on CPU and reload them on multiple GPUs to run the training script. However, after I reload them with `load_from_disk` and start training, the speed is extremely slow: it says I need about 1500 hours with 8 A100 cards. Before this, I could run the whole script in one day with a single A100 card.
Since I am trying to pre-train a BERT model, **my dataset is very large (29058165 rows)**.
## Steps to reproduce the bug
```python
tokenized_datasets.save_to_disk(
"/pathto/dataset"
)
tokenized_datasets = load_from_disk(
"/pathto/dataset"
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"] if training_args.do_train else None,
eval_dataset=tokenized_datasets["validation"]
if training_args.do_eval
else None,
tokenizer=tokenizer,
data_collator=data_collator,
)
train_result = trainer.train(resume_from_checkpoint=checkpoint)
```
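For reference, here is a rough, self-contained sketch of the sharding workaround suggested in the comments (save several smaller Arrow files and concatenate them on reload); the toy dataset, paths and shard count below are made up:
```python
import tempfile
from datasets import Dataset, concatenate_datasets, load_from_disk

ds = Dataset.from_dict({"input_ids": [[i] for i in range(1000)]})

num_shards = 8
shard_dirs = [f"{tempfile.gettempdir()}/dataset_shard_{i}" for i in range(num_shards)]

# Save each shard to its own directory (several smaller Arrow files).
for i, shard_dir in enumerate(shard_dirs):
    ds.shard(num_shards=num_shards, index=i).save_to_disk(shard_dir)

# Reload the shards one by one and concatenate them back into one dataset.
reloaded = concatenate_datasets([load_from_disk(d) for d in shard_dirs])
print(len(reloaded))  # 1000
```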
## Expected results
Without the save and reload process, I only need about one day to run the whole script with one A100 card.
## Actual results
```
[INFO|trainer.py:1290] 2022-05-23 22:49:46,266 >> ***** Running training *****
[INFO|trainer.py:1291] 2022-05-23 22:49:46,266 >> Num examples = 29058165
[INFO|trainer.py:1292] 2022-05-23 22:49:46,266 >> Num Epochs = 5
[INFO|trainer.py:1293] 2022-05-23 22:49:46,266 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1294] 2022-05-23 22:49:46,266 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1295] 2022-05-23 22:49:46,266 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1296] 2022-05-23 22:49:46,266 >> Total optimization steps = 567540
0%| | 1/567540 [00:09<1544:49:04, 9.80s/it]
0%| | 2/567540 [00:17<1320:00:17, 8.37s/it]
0%| | 3/567540 [00:26<1393:10:17, 8.84s/it]
0%| | 4/567540 [00:34<1344:56:33, 8.53s/it]
0%| | 5/567540 [00:43<1359:36:12, 8.62s/it]
```
## Environment info
```
torch 1.11.0+cu113
torchaudio 0.11.0+cu113
torchvision 0.12.0+cu113
transformers 4.18.0
datasets 2.2.2
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4394/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4394/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5296/comments | https://api.github.com/repos/huggingface/datasets/issues/5296/events | https://github.com/huggingface/datasets/issues/5296 | 1,464,553,580 | I_kwDODunzps5XS1Bs | 5,296 | Bug in xjoin with Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [] | "2022-11-25T13:29:33Z" | "2022-11-29T08:05:13Z" | "2022-11-29T08:05:13Z" | MEMBER | null | null | null | Currently, `xjoin` function has a bug with local Windows pathnames: instead of returning the OS-dependent join pathname, it always returns it in POSIX format.
```python
from datasets.download.streaming_download_manager import xjoin
path = xjoin("C:\\Users\\USERNAME", "filename.txt")
```
The joined path should be:
```python
"C:\\Users\\USERNAME\\filename.txt"
```
However it is:
```python
"C:/Users/USERNAME/filename.txt"
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5296/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5296/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4893/comments | https://api.github.com/repos/huggingface/datasets/issues/4893/events | https://github.com/huggingface/datasets/issues/4893 | 1,350,655,674 | I_kwDODunzps5QgV66 | 4,893 | Oversampling strategy for iterable datasets in `interleave_datasets` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ylacombe",
"id": 52246514,
"login": "ylacombe",
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ylacombe"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ylacombe",
"id": 52246514,
"login": "ylacombe",
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ylacombe"
}
] | null | [
"Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following snippet works for me though:\r\n```\r\nd1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\nd2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\nd3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n```\r\n\r\n",
"Great @ylacombe thanks ! I'm assigning you this issue",
"Hi @ylacombe :) Is there anything I can do to help ? Feel free to ping me if you have any question :)",
"Hi @lhoestq,\r\n\r\nI actually have already wrote the code last time [on this commit](https://github.com/ylacombe/datasets/commit/84769db97facc78a33ec53f7b1b395951e1804df) but I still have to change the docs and write some tests though. I'm working on it.\r\n\r\nHowever, I still your advice on one matter. \r\nIn #4831, when using a `Dataset` list with probabilities, I had change the original behavior so that it stops as soon as one or all datasets are out of samples. By nature, this behavior can't be applied with an `IterableDataset` because one only knows an iterable dataset is out of sample when receiving a StopIteration error after calling the iterator once again. \r\nTo sum up, as it is right know, the behavior is not consistent with an `IterableDataset` list or a `Dataset` list, when using probabilities.\r\nTo be honest, I think that the current behavior with a `Dataset` list is desirable and avoid having too many samples, so I would recommand keeping that as it is, but I can understand the desire to have the same behavior for both classes. \r\nWhat do you think ? Please let me know if you need more details.\r\n\r\n\r\nEDIT:\r\nHere is an example:\r\n```\r\n>>> from tests.test_iterable_dataset import *\r\n>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\n>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\n>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n>>> [x[\"a\"] for x in dataset]\r\n[10, 0, 11, 1, 2, 20, 12, 13]\r\n>>> from tests.test_arrow_dataset import *\r\n>>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n>>> d2 = Dataset.from_dict({\"a\": [10, 11, 12]})\r\n>>> d3 = Dataset.from_dict({\"a\": [20, 21, 22]})\r\n>>> interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)[\"a\"]\r\n[10, 0, 11, 1, 2]\r\n[10, 0, 11, 1, 2]\r\n```\r\n ",
"Hi ! Awesome :) \r\n\r\nMaybe you can pre-load the next sample to know if the dataset is empty or not ?\r\nThis way it should be possible to have the same behavior for `IterableDataset`",
"Hi @ylacombe let us know if we can help with anything :)",
"Hi @lhoestq, I've finally made some advances in the matter. I've modified the `IterableDataset` behavior so that it aligns with the `Dataset` behavior as we have discussed. The documentation has been dealt with too. \r\nIt works as expected on my examples. However I'm having trouble figuring out how to test `interleave_datasets` on `test_iterable_datasets.py` as I have never worked with pytest. Could you help me on that or give me some indications? \r\n",
"Thanks @ylacombe :)\r\n\r\nUsing the `pytest` command, you can run all the functions in a python file that start with \"test_*\" and make sure they return not errors:\r\n```\r\npytest tests/test_iterable_dataset.py\r\n```\r\n\r\nIn our case it can be nice to define a `test_interleave_datasets_with_oversampling` function. This function can contain the code example that we mentioned earlier in this github issue to make sure it works as expected.",
"Resolved via #5036."
] | "2022-08-25T10:06:55Z" | "2022-10-03T12:37:46Z" | "2022-10-03T12:37:46Z" | MEMBER | null | null | null | In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.
It would be nice to expand `interleave_datasets` to iterable datasets as well, so that this oversampling strategy is supported there too:
```python
>>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable
>>> d1 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [0, 1, 2]], {}))
>>> d2 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [10, 11, 12, 13]], {}))
>>> d3 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [20, 21, 22, 23, 24]], {}))
>>> dataset = interleave_datasets([d1, d2, d3]) # is supported
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") # is not supported yet
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
```
This can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable`, which are used in `_interleave_iterable_datasets` in `iterable_dataset.py`; a rough sketch of the cycling logic is given below.
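A rough, hypothetical sketch (names and details are my assumptions, not the actual implementation) of how a cycling iterable could restart exhausted sources until every source has been seen in full:
```python
from itertools import cycle

def cycle_all_exhausted(iterables):
    """Round-robin over sources, restarting exhausted ones until every source is exhausted."""
    iterators = [iter(it) for it in iterables]
    exhausted = [False] * len(iterators)
    for i in cycle(range(len(iterators))):
        try:
            yield next(iterators[i])
        except StopIteration:
            exhausted[i] = True
            if all(exhausted):
                return
            # restart this source so it keeps being oversampled (assumes non-empty sources)
            iterators[i] = iter(iterables[i])
            yield next(iterators[i])
```
The probability-weighted variant would pick `i` from a seeded random generator instead of round-robin, with the same exhaustion bookkeeping.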
I would be happy to share some guidance if anyone would like to give it a shot :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4893/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4893/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3407/comments | https://api.github.com/repos/huggingface/datasets/issues/3407/events | https://github.com/huggingface/datasets/pull/3407 | 1,074,502,225 | PR_kwDODunzps4vjyrB | 3,407 | Use max number of data files to infer module | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Cool thanks :) Feel free to merge if it's all good for you"
] | "2021-12-08T14:58:43Z" | "2021-12-14T17:08:42Z" | "2021-12-14T17:08:42Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3407.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3407",
"merged_at": "2021-12-14T17:08:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3407.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3407"
} | When inferring the module for datasets without a loading script, set a maximum number of data files to iterate over.
This PR fixes the issue of module inference taking too long when hundreds of data files are present.
Please feel free to confirm or adjust both numbers:
```python
# Datasets without script
DATA_FILES_MAX_NUMBER = 10
ARCHIVED_DATA_FILES_MAX_NUMBER = 5
```
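For illustration only (the mapping and names below are assumptions, not the actual `datasets` code), the cap could be applied roughly like this:
```python
from collections import Counter
from pathlib import Path

# Illustrative extension-to-builder mapping; the real one is more complete
EXTENSION_TO_MODULE = {".csv": "csv", ".json": "json", ".jsonl": "json", ".parquet": "parquet", ".txt": "text"}
DATA_FILES_MAX_NUMBER = 10

def infer_module_for_data_files(data_files):
    # Only inspect the first few files so repositories with hundreds of data files stay fast
    suffixes = Counter(Path(f).suffix.lower() for f in data_files[:DATA_FILES_MAX_NUMBER])
    for suffix, _ in suffixes.most_common():
        if suffix in EXTENSION_TO_MODULE:
            return EXTENSION_TO_MODULE[suffix]
    return None
```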
Fix #3404. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3407/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3407/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2161/comments | https://api.github.com/repos/huggingface/datasets/issues/2161/events | https://github.com/huggingface/datasets/issues/2161 | 849,127,041 | MDU6SXNzdWU4NDkxMjcwNDE= | 2,161 | any possibility to download part of large datasets only? | {
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dorost1234",
"id": 79165106,
"login": "dorost1234",
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dorost1234"
} | [] | closed | false | null | [] | null | [
"Not yet but it’s on the short/mid-term roadmap (requested by many indeed).",
"oh, great, really awesome feature to have, thank you very much for the great, fabulous work",
"We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)",
"thanks a lot Quentin, this would be really really a great feature to have\n\nOn Wed, Apr 7, 2021 at 12:14 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> We'll work on dataset streaming soon. This should allow you to only load\n> the examples you need ;)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2161#issuecomment-814791922>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMROD62QAKIJMAKWISTTHQWBVANCNFSM42IUI5JQ>\n> .\n>\n",
"Is streaming completed? On the 1.8.0 docs it is mentioned (https://huggingface.co/docs/datasets/dataset_streaming.html), but when following the example I get the following error:\r\n\r\n```\r\n>>> dataset2 = load_dataset(\"amazon_us_reviews\", \"Pet_Products_v1_00\", split='train', streaming=True)\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-21-1eedab26cff1> in <module>()\r\n----> 1 en_dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n\r\n3 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)\r\n 339 if value is not None:\r\n 340 if not hasattr(builder_config, key):\r\n--> 341 raise ValueError(f\"BuilderConfig {builder_config} doesn't have a '{key}' key.\")\r\n 342 setattr(builder_config, key, value)\r\n 343 \r\n\r\nValueError: BuilderConfig OscarConfig(name='unshuffled_deduplicated_en', version=1.0.0, data_dir=None, data_files=None, description='Unshuffled and deduplicated, English OSCAR dataset') doesn't have a 'streaming' key.\r\n```\r\n\r\nUPDATE: Managed to get streaming working by building from source and installing the additional `datasets[streaming]` package:\r\n\r\n```\r\n!pip install git+https://github.com/huggingface/datasets.git\r\n!pip install datasets[streaming]\r\n```",
"Hi ! Streaming is available on `master` only right now. We'll make a new release 1.9.0 on Monday :)"
] | "2021-04-02T10:06:46Z" | "2022-10-05T13:26:51Z" | "2022-10-05T13:26:51Z" | NONE | null | null | null | Hi
Some of the datasets I need, like cc100, are very large, and I wonder if I can download just the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2161/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3092/comments | https://api.github.com/repos/huggingface/datasets/issues/3092/events | https://github.com/huggingface/datasets/pull/3092 | 1,027,260,383 | PR_kwDODunzps4tPj6e | 3,092 | Fix JNLBA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [] | closed | false | null | [] | null | [
"Fix #3089.",
"@albertvillanova all tests are passing now. Either you or @lhoestq can review it!"
] | "2021-10-15T09:31:14Z" | "2022-07-10T14:36:49Z" | "2021-10-22T08:23:57Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3092.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3092",
"merged_at": "2021-10-22T08:23:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3092.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3092"
} | As mentioned in #3089, I've added more tags and also updated the link for the dataset, which was previously a Google Drive link.
I'm having a problem generating dummy data, as `datasets-cli dummy_data ./datasets/jnlpba --auto_generate --match_text_files "*.iob2"` gives a `datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !` error. I'll try to add the dummy data manually. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3092/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3092/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5495/comments | https://api.github.com/repos/huggingface/datasets/issues/5495/events | https://github.com/huggingface/datasets/issues/5495 | 1,566,803,452 | I_kwDODunzps5dY4X8 | 5,495 | to_tf_dataset fails with datetime UTC columns even if not included in columns argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"events_url": "https://api.github.com/users/dwyatte/events{/privacy}",
"followers_url": "https://api.github.com/users/dwyatte/followers",
"following_url": "https://api.github.com/users/dwyatte/following{/other_user}",
"gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dwyatte",
"id": 2512762,
"login": "dwyatte",
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"organizations_url": "https://api.github.com/users/dwyatte/orgs",
"received_events_url": "https://api.github.com/users/dwyatte/received_events",
"repos_url": "https://api.github.com/users/dwyatte/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dwyatte"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | [
"Hi! This is indeed a bug in our zero-copy logic.\r\n\r\nTo fix it, instead of the line:\r\nhttps://github.com/huggingface/datasets/blob/7cfac43b980ab9e4a69c2328f085770996323005/src/datasets/features/features.py#L702\r\n\r\nwe should have:\r\n```python\r\nreturn pa.types.is_primitive(pa_type) and not (pa.types.is_boolean(pa_type) or pa.types.is_temporal(pa_type))\r\n```",
"@mariosasko submitted a small PR [here](https://github.com/huggingface/datasets/pull/5504)"
] | "2023-02-01T20:47:33Z" | "2023-02-08T14:33:19Z" | "2023-02-08T14:33:19Z" | CONTRIBUTOR | null | null | null | ### Describe the bug
There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset, even columns not included in the `columns` argument. This is problematic with UTC datetime columns because they do not support zero-copy conversion. If my datetime column has no UTC information, everything works as expected.
### Steps to reproduce the bug
```python
import numpy as np
import pandas as pd
from datasets import Dataset
df = pd.DataFrame(np.random.rand(2, 1), columns=["x"])
# df["dt"] = pd.to_datetime(["2023-01-01", "2023-01-01"]) # works fine
df["dt"] = pd.to_datetime(["2023-01-01 00:00:00.00000+00:00", "2023-01-01 00:00:00.00000+00:00"])
df.to_parquet("test.pq")
ds = Dataset.from_parquet("test.pq")
tf_ds = ds.to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
```
```
ArrowInvalid Traceback (most recent call last)
Cell In[1], line 12
8 df.to_parquet("test.pq")
11 ds = Dataset.from_parquet("test.pq")
---> 12 tf_ds = ds.to_tf_dataset(columns=["r"], batch_size=2, shuffle=True)
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:411, in TensorflowDatasetMixin.to_tf_dataset(self, batch_size, columns, shuffle, collate_fn, drop_remainder, collate_fn_args, label_cols, prefetch, num_workers)
407 dataset = self
409 # TODO(Matt, QL): deprecate the retention of label_ids and label
--> 411 output_signature, columns_to_np_types = dataset._get_output_signature(
412 dataset,
413 collate_fn=collate_fn,
414 collate_fn_args=collate_fn_args,
415 cols_to_retain=cols_to_retain,
416 batch_size=batch_size if drop_remainder else None,
417 )
419 if "labels" in output_signature:
420 if ("label_ids" in columns or "label" in columns) and "labels" not in columns:
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:254, in TensorflowDatasetMixin._get_output_signature(dataset, collate_fn, collate_fn_args, cols_to_retain, batch_size, num_test_batches)
252 for _ in range(num_test_batches):
253 indices = sample(range(len(dataset)), test_batch_size)
--> 254 test_batch = dataset[indices]
255 if cols_to_retain is not None:
256 test_batch = {key: value for key, value in test_batch.items() if key in cols_to_retain}
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2590, in Dataset.__getitem__(self, key)
2588 def __getitem__(self, key): # noqa: F811
2589 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2590 return self._getitem(
2591 key,
2592 )
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2575, in Dataset._getitem(self, key, **kwargs)
2573 formatter = get_formatter(format_type, features=self.features, **format_kwargs)
2574 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2575 formatted_output = format_table(
2576 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2577 )
2578 return formatted_output
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:634, in format_table(table, key, formatter, format_columns, output_all_columns)
632 python_formatter = PythonFormatter(features=None)
633 if format_columns is None:
--> 634 return formatter(pa_table, query_type=query_type)
635 elif query_type == "column":
636 if key in format_columns:
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:410, in Formatter.__call__(self, pa_table, query_type)
408 return self.format_column(pa_table)
409 elif query_type == "batch":
--> 410 return self.format_batch(pa_table)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/np_formatter.py:78, in NumpyFormatter.format_batch(self, pa_table)
77 def format_batch(self, pa_table: pa.Table) -> Mapping:
---> 78 batch = self.numpy_arrow_extractor().extract_batch(pa_table)
79 batch = self.python_features_decoder.decode_batch(batch)
80 batch = self.recursive_tensorize(batch)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in NumpyArrowExtractor.extract_batch(self, pa_table)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in <dictcomp>(.0)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:185, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
--> 185 array: List = [
186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:186, in <listcomp>(.0)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
185 array: List = [
--> 186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/pyarrow/array.pxi:1475, in pyarrow.lib.Array.to_numpy()
File ~/venv/lib/python3.8/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: Needed to copy 1 chunks with 0 nulls, but zero_copy_only was True
```
### Expected behavior
I think there are two potential issues/fixes:
1. Proper handling of datetime UTC columns (perhaps there is something incorrect with zero copy handling here)
2. Not eagerly running against every column in a dataset when the columns argument of `to_tf_dataset` specifies a subset of columns (although I'm not sure if this is unavoidable)
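Until the zero-copy handling is fixed, one possible workaround (my assumption, not an official recommendation) is to drop the UTC datetime column before the conversion, since it is not among the requested columns anyway:
```python
tf_ds = ds.remove_columns(["dt"]).to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
```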
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-13.2-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5495/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3961/comments | https://api.github.com/repos/huggingface/datasets/issues/3961/events | https://github.com/huggingface/datasets/issues/3961 | 1,173,223,086 | I_kwDODunzps5F7fau | 3,961 | Scores from Index at extra positions are not filtered out | {
"avatar_url": "https://avatars.githubusercontent.com/u/36671559?v=4",
"events_url": "https://api.github.com/users/vishalsrao/events{/privacy}",
"followers_url": "https://api.github.com/users/vishalsrao/followers",
"following_url": "https://api.github.com/users/vishalsrao/following{/other_user}",
"gists_url": "https://api.github.com/users/vishalsrao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vishalsrao",
"id": 36671559,
"login": "vishalsrao",
"node_id": "MDQ6VXNlcjM2NjcxNTU5",
"organizations_url": "https://api.github.com/users/vishalsrao/orgs",
"received_events_url": "https://api.github.com/users/vishalsrao/received_events",
"repos_url": "https://api.github.com/users/vishalsrao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vishalsrao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vishalsrao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vishalsrao"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi! Yes, that makes sense! Would you like to submit a PR to fix this?",
"Created PR https://github.com/huggingface/datasets/pull/3971"
] | "2022-03-18T06:13:23Z" | "2022-04-12T14:41:58Z" | "2022-04-12T14:41:58Z" | CONTRIBUTOR | null | null | null | If a FAISS index has fewer records than the requested number of top results (k), then it returns -1 in indices for the additional positions. The get_nearest_examples method only filters out the extra results from the dataset samples. It would be better to filter out extra scores too.
Reference: https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/search.py#L693
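A small illustrative sketch of the suggested filtering (variable and function names are assumptions; the real logic lives in `datasets/search.py`):
```python
def filter_missing(scores, indices):
    """Drop positions where FAISS returned -1 because the index has fewer records than k."""
    keep = [i for i, idx in enumerate(indices) if idx >= 0]
    return [scores[i] for i in keep], [indices[i] for i in keep]
```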
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3961/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3961/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3995/comments | https://api.github.com/repos/huggingface/datasets/issues/3995/events | https://github.com/huggingface/datasets/pull/3995 | 1,178,232,623 | PR_kwDODunzps404054 | 3,995 | Close `PIL.Image` file handler in `Image.decode_example` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-23T14:51:48Z" | "2022-03-23T18:24:52Z" | "2022-03-23T18:19:27Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3995.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3995",
"merged_at": "2022-03-23T18:19:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3995.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3995"
} | Closes the file handler of the PIL image object in `Image.decode_example` to avoid the `Too many open files` error.
To pass [the image equality checks](https://app.circleci.com/pipelines/github/huggingface/datasets/10774/workflows/d56670e6-16bb-4c64-b601-a152c5acf5ed/jobs/65825) in CI, `Image.decode_example` calls `image.load()` regardless of how the image object is created (not only for the `PIL.Image.open(local_path)` case). This is needed because `load()` sets the `readonly` attribute of a `PIL.Image` object to 0 (it's 1 after `PIL.Image.open(file_like)`), and in the older PIL versions (only fixed on main), that attribute is considered in `PIL.Image.__eq__`. More info can be found here: https://github.com/python-pillow/Pillow/issues/5926.
Fix #3985
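A rough illustration of the pattern described above (simplified and assumed, not the exact `Image.decode_example` code):
```python
from PIL import Image

def decode_image(path):
    with open(path, "rb") as f:
        image = Image.open(f)
        image.load()  # read the pixel data now so the file handle can be closed immediately
    return image
```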
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3995/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3995/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4921/comments | https://api.github.com/repos/huggingface/datasets/issues/4921/events | https://github.com/huggingface/datasets/pull/4921 | 1,357,609,003 | PR_kwDODunzps4-JVFV | 4,921 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-08-31T16:52:27Z" | "2022-09-22T14:34:11Z" | "2022-09-01T05:04:53Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4921.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4921",
"merged_at": "2022-09-01T05:04:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4921.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4921"
} | Fix missing tags in dataset cards:
- eraser_multi_rc
- hotpot_qa
- metooma
- movie_rationales
- qanta
- quora
- quoref
- race
- ted_hrlr
- ted_talks_iwslt
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4921/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4921/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5254/comments | https://api.github.com/repos/huggingface/datasets/issues/5254/events | https://github.com/huggingface/datasets/pull/5254 | 1,452,600,088 | PR_kwDODunzps5DE47u | 5,254 | typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WrRan",
"id": 7569098,
"login": "WrRan",
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"repos_url": "https://api.github.com/users/WrRan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WrRan"
} | [] | closed | false | null | [] | null | [] | "2022-11-17T02:39:57Z" | "2022-11-18T10:53:45Z" | "2022-11-18T10:53:45Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5254.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5254",
"merged_at": "2022-11-18T10:53:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5254.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5254"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5254/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4662/comments | https://api.github.com/repos/huggingface/datasets/issues/4662/events | https://github.com/huggingface/datasets/pull/4662 | 1,298,845,369 | PR_kwDODunzps47GTEc | 4,662 | Fix: conll2003 - fix empty example | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-07-08T10:49:13Z" | "2022-07-08T14:14:53Z" | "2022-07-08T14:02:42Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4662.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4662",
"merged_at": "2022-07-08T14:02:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4662.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4662"
} | As reported in https://huggingface.co/datasets/conll2003/discussions/2#62c45a14f93fc97e8260532f, there was an extra empty example at the end of the dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4662/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4662/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2136/comments | https://api.github.com/repos/huggingface/datasets/issues/2136/events | https://github.com/huggingface/datasets/pull/2136 | 843,492,015 | MDExOlB1bGxSZXF1ZXN0NjAyODY0ODY5 | 2,136 | fix dialogue action slot name and value | {
"avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4",
"events_url": "https://api.github.com/users/adamlin120/events{/privacy}",
"followers_url": "https://api.github.com/users/adamlin120/followers",
"following_url": "https://api.github.com/users/adamlin120/following{/other_user}",
"gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/adamlin120",
"id": 31605305,
"login": "adamlin120",
"node_id": "MDQ6VXNlcjMxNjA1MzA1",
"organizations_url": "https://api.github.com/users/adamlin120/orgs",
"received_events_url": "https://api.github.com/users/adamlin120/received_events",
"repos_url": "https://api.github.com/users/adamlin120/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions",
"type": "User",
"url": "https://api.github.com/users/adamlin120"
} | [] | closed | false | null | [] | null | [] | "2021-03-29T15:34:13Z" | "2021-03-31T12:48:02Z" | "2021-03-31T12:48:01Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2136.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2136",
"merged_at": "2021-03-31T12:48:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2136.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2136"
} | fix #2128 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2136/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1587/comments | https://api.github.com/repos/huggingface/datasets/issues/1587/events | https://github.com/huggingface/datasets/pull/1587 | 768,929,877 | MDExOlB1bGxSZXF1ZXN0NTQxMjAwMDk3 | 1,587 | Add nq_open question answering dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/28673745?v=4",
"events_url": "https://api.github.com/users/Nilanshrajput/events{/privacy}",
"followers_url": "https://api.github.com/users/Nilanshrajput/followers",
"following_url": "https://api.github.com/users/Nilanshrajput/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilanshrajput/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nilanshrajput",
"id": 28673745,
"login": "Nilanshrajput",
"node_id": "MDQ6VXNlcjI4NjczNzQ1",
"organizations_url": "https://api.github.com/users/Nilanshrajput/orgs",
"received_events_url": "https://api.github.com/users/Nilanshrajput/received_events",
"repos_url": "https://api.github.com/users/Nilanshrajput/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nilanshrajput/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilanshrajput/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nilanshrajput"
} | [] | closed | false | null | [] | null | [
"@SBrandeis all checks passing"
] | "2020-12-16T14:22:08Z" | "2020-12-17T16:07:10Z" | "2020-12-17T16:07:10Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1587",
"merged_at": "2020-12-17T16:07:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1587"
} | This PR is a copy of #1506, created because of the messed-up git history in that PR. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1587/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1798/comments | https://api.github.com/repos/huggingface/datasets/issues/1798/events | https://github.com/huggingface/datasets/pull/1798 | 797,766,818 | MDExOlB1bGxSZXF1ZXN0NTY0Njk2NjE1 | 1,798 | Add Arabic sarcasm dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mapmeld",
"id": 643918,
"login": "mapmeld",
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mapmeld"
} | [] | closed | false | null | [] | null | [
"@lhoestq thanks for the comments - I believe these are now addressed. I re-generated the datasets_info.json and dummy data"
] | "2021-01-31T17:38:55Z" | "2021-02-10T20:39:13Z" | "2021-02-03T10:35:54Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1798",
"merged_at": "2021-02-03T10:35:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1798"
} | This adds an MIT-licensed dataset: https://github.com/iabufarha/ArSarcasm
Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1798/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1798/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4721/comments | https://api.github.com/repos/huggingface/datasets/issues/4721/events | https://github.com/huggingface/datasets/issues/4721 | 1,310,253,552 | I_kwDODunzps5OGOHw | 4,721 | PyArrow Dataset error when calling `load_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/16828657?v=4",
"events_url": "https://api.github.com/users/piraka9011/events{/privacy}",
"followers_url": "https://api.github.com/users/piraka9011/followers",
"following_url": "https://api.github.com/users/piraka9011/following{/other_user}",
"gists_url": "https://api.github.com/users/piraka9011/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/piraka9011",
"id": 16828657,
"login": "piraka9011",
"node_id": "MDQ6VXNlcjE2ODI4NjU3",
"organizations_url": "https://api.github.com/users/piraka9011/orgs",
"received_events_url": "https://api.github.com/users/piraka9011/received_events",
"repos_url": "https://api.github.com/users/piraka9011/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/piraka9011/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/piraka9011/subscriptions",
"type": "User",
"url": "https://api.github.com/users/piraka9011"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Hi ! It looks like a bug in `pyarrow`. If you manage to end up with only one chunk per parquet file it should workaround this issue.\r\n\r\nTo achieve that you can try to lower the value of `max_shard_size` and also don't use `map` before `push_to_hub`.\r\n\r\nDo you have a minimum reproducible example that we can share with the Arrow team for further debugging ?",
"> If you manage to end up with only one chunk per parquet file it should workaround this issue.\r\n\r\nYup, I did not encounter this bug when I was testing my script with a slice of <1000 samples for my dataset.\r\n\r\n> Do you have a minimum reproducible example...\r\n\r\nNot sure if I can get more minimal than the script I shared above. Are you asking for a sample json file?\r\nJust generate a random manifest list, I can add that to the above script if that's what you mean?\r\n",
"Actually this is probably linked to this open issue: https://issues.apache.org/jira/browse/ARROW-5030.\r\n\r\nsetting `max_shard_size=\"2GB\"` should do the job (or `max_shard_size=\"1GB\"` if you want to be on the safe side, especially given that there can be some variance in the shard sizes if the dataset is not evenly distributed)"
] | "2022-07-20T01:16:03Z" | "2022-07-22T14:11:47Z" | null | NONE | null | null | null | ## Describe the bug
I am fine tuning a wav2vec2 model following the script here using my own dataset: https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
Loading my Audio dataset from the hub which was originally generated from disk results in the following PyArrow error:
```sh
File "/home/ubuntu/w2v2/run_speech_recognition_ctc.py", line 227, in main
raw_datasets = load_dataset(
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/load.py", line 1679, in load_dataset
builder_instance.download_and_prepare(
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/builder.py", line 1268, in _prepare_split
for key, table in logging.tqdm(
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py", line 68, in _generate_tables
for batch_idx, record_batch in enumerate(
File "pyarrow/_parquet.pyx", line 1309, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
```
## Steps to reproduce the bug
I created a dataset from a JSON lines manifest of `audio_filepath`, `text`, and `duration`.
When creating the dataset, I do something like this:
```python
import json
from datasets import Dataset, Audio
# manifest_lines is a list of JSON strings, each with "audio_filepath", "duration", and "text"
manifest_dict = {"audio": [], "duration": [], "transcription": []}
for line in manifest_lines:
line = line.strip()
if line:
line_dict = json.loads(line)
manifest_dict["audio"].append(f"{root_path}/{line_dict['audio_filepath']}")
manifest_dict["duration"].append(line_dict["duration"])
manifest_dict["transcription"].append(line_dict["text"])
# Create a HF dataset
dataset = Dataset.from_dict(manifest_dict).cast_column(
"audio", Audio(sampling_rate=16_000),
)
# From the docs for saving to disk
# https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.save_to_disk
def read_audio_file(example):
with open(example["audio"]["path"], "rb") as f:
return {"audio": {"bytes": f.read()}}
dataset = dataset.map(read_audio_file, num_proc=70)
dataset.save_to_disk(f"/audio-data/hf/{artifact_name}")
dataset.push_to_hub(f"{org-name}/{artifact_name}", max_shard_size="5GB", private=True)
```
Then, when I call `load_dataset()` in my training script with the same dataset I generated above, downloading it from the Hugging Face Hub, I get the above stack trace.
I am able to load the dataset fine if I use `load_from_disk()`.
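For reference, a hedged illustration of the shard-size workaround suggested in the comments above (`repo_id` stands in for the `{org}/{name}` target used earlier; smaller shards keep each parquet file to a single chunk):
```python
dataset.push_to_hub(repo_id, max_shard_size="1GB", private=True)
```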
## Expected results
`load_dataset()` should behave just like `load_from_disk()` and not cause any errors.
## Actual results
See above
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
I am using the `huggingface/transformers-pytorch-gpu:latest` image
- `datasets` version: 2.3.0
- Platform: Docker/Ubuntu 20.04
- Python version: 3.8
- PyArrow version: 8.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4721/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4721/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1693/comments | https://api.github.com/repos/huggingface/datasets/issues/1693/events | https://github.com/huggingface/datasets/pull/1693 | 780,268,595 | MDExOlB1bGxSZXF1ZXN0NTUwMTc3MDEx | 1,693 | Fix reuters metadata parsing errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbragg",
"id": 2238344,
"login": "jbragg",
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"organizations_url": "https://api.github.com/users/jbragg/orgs",
"received_events_url": "https://api.github.com/users/jbragg/received_events",
"repos_url": "https://api.github.com/users/jbragg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbragg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbragg"
} | [] | closed | false | null | [] | null | [] | "2021-01-06T08:26:03Z" | "2021-01-07T23:53:47Z" | "2021-01-07T14:01:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1693.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1693",
"merged_at": "2021-01-07T14:01:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1693.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1693"
} | Was missing the last entry in each metadata category | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1693/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1693/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1973/comments | https://api.github.com/repos/huggingface/datasets/issues/1973/events | https://github.com/huggingface/datasets/issues/1973 | 820,077,312 | MDU6SXNzdWU4MjAwNzczMTI= | 1,973 | Question: what gets stored in the datasets cache and why is it so huge? | {
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ioana-blue",
"id": 17202292,
"login": "ioana-blue",
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ioana-blue"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Echo'ing this observation: I have a few datasets in the neighborhood of 2GB CSVs uncompressed, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk.\r\n\r\nIf this is unexpected behavior, would be happy to help run debugging as needed.",
"Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). You are right that current implementation of the datasets caching files take too much memory. We are definitely changing this and optimizing the defaults, so that the file sizes are considerably reduced. I will come back to you as soon as this is fixed.",
"Thank you! Also I noticed that the files don't seem to be cleaned after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB. ",
"And to clarify, it's not memory, it's disk space. Thank you!",
"Hi ! As Albert said they can sometimes take more space that expected but we'll fix that soon.\r\n\r\nAlso, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them.\r\n\r\nSo by default the cache files stay on your disk when you job is finished (so that if you re-execute it, it will be reloaded from the cache).\r\nFeel free to clear your cache after your job has finished, or disable caching using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```",
"Thanks for the tip, this is useful. ",
"Hi @ioana-blue, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs.",
"Thank you!"
] | "2021-03-02T14:35:53Z" | "2021-03-30T14:03:59Z" | "2021-03-16T09:44:00Z" | NONE | null | null | null | I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is it stored in there and why is it so large? I don't think I noticed this problem before and seems to be related to the new version of the datasets library. Any insight? Thank you! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1973/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1973/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5978/comments | https://api.github.com/repos/huggingface/datasets/issues/5978/events | https://github.com/huggingface/datasets/pull/5978 | 1,770,187,053 | PR_kwDODunzps5Tru2_ | 5,978 | Release: 2.13.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006173 / 0.011353 (-0.005180) | 0.003773 / 0.011008 (-0.007235) | 0.099499 / 0.038508 (0.060991) | 0.037918 / 0.023109 (0.014809) | 0.321329 / 0.275898 (0.045431) | 0.379739 / 0.323480 (0.056259) | 0.004664 / 0.007986 (-0.003322) | 0.002943 / 0.004328 (-0.001385) | 0.077759 / 0.004250 (0.073509) | 0.055271 / 0.037052 (0.018219) | 0.329428 / 0.258489 (0.070939) | 0.378731 / 0.293841 (0.084890) | 0.027737 / 0.128546 (-0.100810) | 0.008566 / 0.075646 (-0.067081) | 0.313220 / 0.419271 (-0.106052) | 0.047101 / 0.043533 (0.003568) | 0.316211 / 0.255139 (0.061072) | 0.341826 / 0.283200 (0.058626) | 0.020838 / 0.141683 (-0.120845) | 1.550064 / 1.452155 (0.097909) | 1.706518 / 1.492716 (0.213801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203093 / 0.018006 (0.185087) | 0.425345 / 0.000490 (0.424856) | 0.004800 / 0.000200 (0.004600) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024590 / 0.037411 (-0.012821) | 0.098115 / 0.014526 (0.083589) | 0.108274 / 0.176557 (-0.068282) | 0.170804 / 0.737135 (-0.566332) | 0.110560 / 0.296338 (-0.185778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425251 / 0.215209 (0.210042) | 4.239075 / 2.077655 (2.161421) | 1.955601 / 1.504120 (0.451481) | 1.774796 / 1.541195 (0.233602) | 1.826641 / 1.468490 
(0.358150) | 0.558777 / 4.584777 (-4.026000) | 3.361697 / 3.745712 (-0.384015) | 1.764468 / 5.269862 (-3.505394) | 1.032280 / 4.565676 (-3.533396) | 0.067872 / 0.424275 (-0.356403) | 0.010998 / 0.007607 (0.003391) | 0.525682 / 0.226044 (0.299637) | 5.254356 / 2.268929 (2.985427) | 2.384332 / 55.444624 (-53.060292) | 2.045578 / 6.876477 (-4.830898) | 2.170914 / 2.142072 (0.028841) | 0.674782 / 4.805227 (-4.130445) | 0.135351 / 6.500664 (-6.365314) | 0.066591 / 0.075469 (-0.008878) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209181 / 1.841788 (-0.632606) | 14.044518 / 8.074308 (5.970210) | 13.184705 / 10.191392 (2.993313) | 0.130836 / 0.680424 (-0.549588) | 0.016582 / 0.534201 (-0.517619) | 0.360005 / 0.579283 (-0.219279) | 0.379519 / 0.434364 (-0.054845) | 0.422174 / 0.540337 (-0.118164) | 0.515546 / 1.386936 (-0.871390) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006293 / 0.011353 (-0.005060) | 0.003784 / 0.011008 (-0.007224) | 0.079248 / 0.038508 (0.040739) | 0.038452 / 0.023109 (0.015343) | 0.444727 / 0.275898 (0.168829) | 0.500535 / 0.323480 (0.177055) | 0.003455 / 0.007986 (-0.004531) | 0.002873 / 0.004328 (-0.001455) | 0.077439 / 0.004250 (0.073189) | 0.047855 / 0.037052 (0.010803) | 0.448049 / 0.258489 (0.189560) | 0.509517 / 0.293841 (0.215676) | 0.028359 / 0.128546 (-0.100188) | 0.008503 / 0.075646 (-0.067143) | 0.084961 / 0.419271 (-0.334310) | 0.042880 / 0.043533 (-0.000653) | 0.436628 / 0.255139 (0.181489) | 0.456574 / 0.283200 (0.173375) | 0.019539 / 0.141683 (-0.122144) | 1.561273 / 1.452155 (0.109118) | 1.572018 / 1.492716 (0.079301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230250 / 0.018006 (0.212244) | 0.415189 / 0.000490 (0.414700) | 0.003213 / 0.000200 (0.003013) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025541 / 0.037411 (-0.011871) | 0.102326 / 0.014526 (0.087800) | 0.110258 / 0.176557 (-0.066298) | 0.162488 / 0.737135 (-0.574647) | 0.112782 / 0.296338 (-0.183556) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457936 / 0.215209 (0.242727) | 4.581503 / 2.077655 (2.503848) | 2.237659 / 1.504120 (0.733540) | 2.029960 / 1.541195 (0.488765) | 2.082911 / 1.468490 (0.614421) | 0.556485 / 4.584777 (-4.028292) | 3.384418 / 3.745712 (-0.361295) | 1.748809 / 5.269862 (-3.521053) | 1.034759 / 4.565676 (-3.530917) | 0.067500 / 0.424275 (-0.356776) | 0.011425 / 0.007607 (0.003818) | 0.561340 / 0.226044 (0.335295) | 5.623629 / 2.268929 (3.354701) | 2.733587 / 55.444624 (-52.711038) | 2.401578 / 6.876477 (-4.474899) | 2.524569 / 2.142072 (0.382496) | 0.673170 / 4.805227 (-4.132057) | 0.136681 / 6.500664 (-6.363983) | 0.068060 / 0.075469 (-0.007409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318651 / 1.841788 (-0.523137) | 14.362123 / 8.074308 (6.287815) | 14.385964 / 10.191392 (4.194572) | 0.149914 / 0.680424 (-0.530510) | 0.016877 / 0.534201 (-0.517324) | 0.358406 / 0.579283 (-0.220877) | 0.394349 / 0.434364 (-0.040015) | 0.422471 / 0.540337 (-0.117866) | 0.513807 / 1.386936 (-0.873129) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1b9ce11d1b94e6178df663ff5fcad029849d10fb \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006272 / 0.011353 (-0.005080) | 0.003903 / 0.011008 (-0.007105) | 0.100180 / 0.038508 (0.061672) | 0.037799 / 0.023109 (0.014690) | 0.385627 / 0.275898 (0.109729) | 0.446518 / 0.323480 (0.123038) | 0.004811 / 0.007986 (-0.003175) | 0.003032 / 0.004328 (-0.001296) | 0.077063 / 0.004250 (0.072812) | 0.055564 / 0.037052 (0.018512) | 0.397346 / 0.258489 (0.138857) | 0.443242 / 0.293841 (0.149401) | 0.027904 / 0.128546 (-0.100642) | 0.008386 / 0.075646 (-0.067260) | 0.315013 / 0.419271 (-0.104259) | 0.047943 / 0.043533 (0.004410) | 0.378443 / 0.255139 (0.123304) | 0.411472 / 0.283200 (0.128272) | 0.020465 / 0.141683 (-0.121218) | 1.526594 / 1.452155 (0.074439) | 1.547018 / 1.492716 (0.054301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219377 / 0.018006 (0.201370) | 0.430254 / 0.000490 (0.429764) | 0.003218 / 0.000200 (0.003018) | 0.000072 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023667 / 0.037411 (-0.013744) | 0.099143 / 0.014526 (0.084617) | 0.106044 / 0.176557 (-0.070513) | 0.166186 / 0.737135 (-0.570949) | 0.108736 / 0.296338 (-0.187603) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437971 / 0.215209 (0.222762) | 4.363675 / 2.077655 (2.286021) | 2.011993 / 1.504120 (0.507873) | 1.845189 / 1.541195 (0.303994) | 1.831848 / 1.468490 
(0.363358) | 0.562402 / 4.584777 (-4.022375) | 3.365259 / 3.745712 (-0.380453) | 1.781491 / 5.269862 (-3.488371) | 1.023454 / 4.565676 (-3.542223) | 0.067857 / 0.424275 (-0.356418) | 0.011076 / 0.007607 (0.003469) | 0.532267 / 0.226044 (0.306223) | 5.340344 / 2.268929 (3.071415) | 2.388649 / 55.444624 (-53.055976) | 2.055373 / 6.876477 (-4.821104) | 2.205047 / 2.142072 (0.062975) | 0.672909 / 4.805227 (-4.132318) | 0.135244 / 6.500664 (-6.365420) | 0.066184 / 0.075469 (-0.009285) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206838 / 1.841788 (-0.634950) | 13.967075 / 8.074308 (5.892767) | 13.143971 / 10.191392 (2.952579) | 0.143991 / 0.680424 (-0.536433) | 0.016673 / 0.534201 (-0.517527) | 0.376180 / 0.579283 (-0.203103) | 0.386550 / 0.434364 (-0.047814) | 0.440590 / 0.540337 (-0.099747) | 0.529974 / 1.386936 (-0.856962) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006299 / 0.011353 (-0.005054) | 0.003784 / 0.011008 (-0.007224) | 0.077875 / 0.038508 (0.039367) | 0.038689 / 0.023109 (0.015580) | 0.421684 / 0.275898 (0.145786) | 0.472649 / 0.323480 (0.149169) | 0.003570 / 0.007986 (-0.004415) | 0.004448 / 0.004328 (0.000120) | 0.077867 / 0.004250 (0.073616) | 0.049514 / 0.037052 (0.012462) | 0.375983 / 0.258489 (0.117494) | 0.470632 / 0.293841 (0.176791) | 0.028238 / 0.128546 (-0.100308) | 0.008462 / 0.075646 (-0.067185) | 0.082452 / 0.419271 (-0.336819) | 0.043617 / 0.043533 (0.000084) | 0.400874 / 0.255139 (0.145735) | 0.426191 / 0.283200 (0.142992) | 0.020602 / 0.141683 (-0.121081) | 1.567658 / 1.452155 (0.115504) | 1.572610 / 1.492716 (0.079893) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246144 / 0.018006 (0.228138) | 0.419402 / 0.000490 (0.418913) | 0.001691 / 0.000200 (0.001491) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026105 / 0.037411 (-0.011306) | 0.104734 / 0.014526 (0.090208) | 0.110257 / 0.176557 (-0.066300) | 0.161429 / 0.737135 (-0.575706) | 0.114367 / 0.296338 (-0.181972) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.453352 / 0.215209 (0.238143) | 4.537924 / 2.077655 (2.460269) | 2.196193 / 1.504120 (0.692073) | 2.002087 / 1.541195 (0.460892) | 2.041722 / 1.468490 (0.573231) | 0.561643 / 4.584777 (-4.023134) | 3.449108 / 3.745712 (-0.296605) | 2.862800 / 5.269862 (-2.407062) | 1.387895 / 4.565676 (-3.177782) | 0.068076 / 0.424275 (-0.356199) | 0.011568 / 0.007607 (0.003961) | 0.559279 / 0.226044 (0.333235) | 5.598738 / 2.268929 (3.329809) | 2.676649 / 55.444624 (-52.767975) | 2.334588 / 6.876477 (-4.541889) | 2.376215 / 2.142072 (0.234142) | 0.673109 / 4.805227 (-4.132118) | 0.137587 / 6.500664 (-6.363077) | 0.069131 / 0.075469 (-0.006338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.307332 / 1.841788 (-0.534456) | 14.536036 / 8.074308 (6.461728) | 14.173734 / 10.191392 (3.982342) | 0.145143 / 0.680424 (-0.535281) | 0.016662 / 0.534201 (-0.517539) | 0.366901 / 0.579283 (-0.212383) | 0.394498 / 0.434364 (-0.039866) | 0.430546 / 0.540337 (-0.109792) | 0.518950 / 1.386936 (-0.867986) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#682d21e94ab1e64c11b583de39dc4c93f0101c5a \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008122 / 0.011353 (-0.003231) | 0.005585 / 0.011008 (-0.005424) | 0.121219 / 0.038508 (0.082711) | 0.047616 / 0.023109 (0.024507) | 0.440576 / 0.275898 (0.164678) | 0.491053 / 0.323480 (0.167573) | 0.004774 / 0.007986 (-0.003211) | 0.006758 / 0.004328 (0.002430) | 0.103852 / 0.004250 (0.099602) | 0.071560 / 0.037052 (0.034508) | 0.463107 / 0.258489 (0.204618) | 0.516904 / 0.293841 (0.223063) | 0.048052 / 0.128546 (-0.080494) | 0.013679 / 0.075646 (-0.061968) | 0.428383 / 0.419271 (0.009112) | 0.069468 / 0.043533 (0.025936) | 0.432593 / 0.255139 (0.177454) | 0.471810 / 0.283200 (0.188611) | 0.037541 / 0.141683 (-0.104142) | 1.823490 / 1.452155 (0.371335) | 1.922558 / 1.492716 (0.429842) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.252315 / 0.018006 (0.234309) | 0.541757 / 0.000490 (0.541267) | 0.000373 / 0.000200 (0.000173) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030361 / 0.037411 (-0.007050) | 0.125928 / 0.014526 (0.111402) | 0.145102 / 0.176557 (-0.031455) | 0.209798 / 0.737135 (-0.527337) | 0.147349 / 0.296338 (-0.148990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627554 / 0.215209 (0.412345) | 5.917422 / 2.077655 (3.839767) | 2.491083 / 1.504120 (0.986963) | 2.147078 / 1.541195 (0.605883) | 2.167511 / 1.468490 
(0.699021) | 0.903061 / 4.584777 (-3.681716) | 5.518537 / 3.745712 (1.772825) | 2.654348 / 5.269862 (-2.615514) | 1.645121 / 4.565676 (-2.920556) | 0.103782 / 0.424275 (-0.320493) | 0.013048 / 0.007607 (0.005441) | 0.756732 / 0.226044 (0.530687) | 7.622873 / 2.268929 (5.353945) | 3.122689 / 55.444624 (-52.321936) | 2.537735 / 6.876477 (-4.338742) | 2.640090 / 2.142072 (0.498018) | 1.128635 / 4.805227 (-3.676593) | 0.228089 / 6.500664 (-6.272575) | 0.086207 / 0.075469 (0.010738) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561591 / 1.841788 (-0.280197) | 18.110299 / 8.074308 (10.035991) | 20.718017 / 10.191392 (10.526625) | 0.225741 / 0.680424 (-0.454682) | 0.031738 / 0.534201 (-0.502463) | 0.530789 / 0.579283 (-0.048495) | 0.607364 / 0.434364 (0.173000) | 0.581593 / 0.540337 (0.041256) | 0.726033 / 1.386936 (-0.660903) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009323 / 0.011353 (-0.002030) | 0.005360 / 0.011008 (-0.005649) | 0.103608 / 0.038508 (0.065100) | 0.050158 / 0.023109 (0.027049) | 0.499906 / 0.275898 (0.224008) | 0.561005 / 0.323480 (0.237525) | 0.005093 / 0.007986 (-0.002892) | 0.008285 / 0.004328 (0.003956) | 0.103446 / 0.004250 (0.099196) | 0.061478 / 0.037052 (0.024426) | 0.494016 / 0.258489 (0.235527) | 0.537550 / 0.293841 (0.243709) | 0.048829 / 0.128546 (-0.079717) | 0.017032 / 0.075646 (-0.058614) | 0.107748 / 0.419271 (-0.311524) | 0.065607 / 0.043533 (0.022074) | 0.488709 / 0.255139 (0.233570) | 0.512023 / 0.283200 (0.228823) | 0.032067 / 0.141683 (-0.109616) | 1.907585 / 1.452155 (0.455431) | 1.960994 / 1.492716 (0.468278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.278378 / 0.018006 (0.260371) | 0.551474 / 0.000490 (0.550985) | 0.006886 / 0.000200 (0.006686) | 0.000106 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030674 / 0.037411 (-0.006737) | 0.135179 / 0.014526 (0.120654) | 0.133703 / 0.176557 (-0.042853) | 0.198923 / 0.737135 (-0.538212) | 0.155108 / 0.296338 (-0.141231) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.690566 / 0.215209 (0.475357) | 6.789594 / 2.077655 (4.711940) | 2.940668 / 1.504120 (1.436549) | 2.562431 / 1.541195 (1.021236) | 2.554232 / 1.468490 (1.085742) | 0.888470 / 4.584777 (-3.696307) | 5.672318 / 3.745712 (1.926606) | 2.741626 / 5.269862 (-2.528236) | 1.818336 / 4.565676 (-2.747340) | 0.110434 / 0.424275 (-0.313841) | 0.014114 / 0.007607 (0.006507) | 0.830632 / 0.226044 (0.604588) | 8.270787 / 2.268929 (6.001859) | 3.723486 / 55.444624 (-51.721139) | 2.993671 / 6.876477 (-3.882806) | 2.918273 / 2.142072 (0.776201) | 1.105337 / 4.805227 (-3.699891) | 0.222976 / 6.500664 (-6.277688) | 0.085290 / 0.075469 (0.009820) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.816027 / 1.841788 (-0.025760) | 18.496850 / 8.074308 (10.422541) | 20.457032 / 10.191392 (10.265640) | 0.243533 / 0.680424 (-0.436891) | 0.027044 / 0.534201 (-0.507157) | 0.500752 / 0.579283 (-0.078531) | 0.620963 / 0.434364 (0.186599) | 0.607995 / 0.540337 (0.067658) | 0.722915 / 1.386936 (-0.664021) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#682d21e94ab1e64c11b583de39dc4c93f0101c5a \"CML watermark\")\n"
] | "2023-06-22T18:23:11Z" | "2023-06-22T18:40:24Z" | "2023-06-22T18:30:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5978",
"merged_at": "2023-06-22T18:30:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5978"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5978/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3928/comments | https://api.github.com/repos/huggingface/datasets/issues/3928/events | https://github.com/huggingface/datasets/issues/3928 | 1,170,017,132 | I_kwDODunzps5FvQts | 3,928 | Frugal score deprecations | {
"avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4",
"events_url": "https://api.github.com/users/ierezell/events{/privacy}",
"followers_url": "https://api.github.com/users/ierezell/followers",
"following_url": "https://api.github.com/users/ierezell/following{/other_user}",
"gists_url": "https://api.github.com/users/ierezell/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ierezell",
"id": 30974685,
"login": "ierezell",
"node_id": "MDQ6VXNlcjMwOTc0Njg1",
"organizations_url": "https://api.github.com/users/ierezell/orgs",
"received_events_url": "https://api.github.com/users/ierezell/received_events",
"repos_url": "https://api.github.com/users/ierezell/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ierezell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ierezell/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ierezell"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Hi @Ierezell, thanks for reporting.\r\n\r\nI'm making a PR to suppress those logs from the terminal. "
] | "2022-03-15T18:10:42Z" | "2022-03-17T08:37:24Z" | "2022-03-17T08:37:24Z" | NONE | null | null | null | ## Describe the bug
The FrugalScore metric produces very verbose output, including warnings that could easily be suppressed.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets.load import load_metric
frugal = load_metric("frugalscore")
frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"])
```
## Expected results
A clear and concise description of the expected results.
```
{'scores': [0.9946]}
```
## Actual results
Specify the actual results or traceback.
```
PyTorch: setting up devices
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 864.09ba/s]
Using amp half precision backend
The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1. If sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Prediction *****
Num examples = 1
Batch size = 64
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 4644.85it/s]
{'scores': [0.9946]}
```
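While waiting for a fix on the metric side, a possible user-side mitigation is to lower the logger verbosity before computing the score. This is only a sketch; it may not silence the tqdm progress bars:
```python
import datasets
import transformers

# Reduce logging noise from both libraries before running the metric
transformers.logging.set_verbosity_error()
datasets.logging.set_verbosity_error()

frugal = datasets.load_metric("frugalscore")
print(frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"]))
# {'scores': [0.9946]}
```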
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3928/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3928/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3039/comments | https://api.github.com/repos/huggingface/datasets/issues/3039/events | https://github.com/huggingface/datasets/pull/3039 | 1,018,219,800 | PR_kwDODunzps4sy_J- | 3,039 | Add sberquad dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13781234?v=4",
"events_url": "https://api.github.com/users/Alenush/events{/privacy}",
"followers_url": "https://api.github.com/users/Alenush/followers",
"following_url": "https://api.github.com/users/Alenush/following{/other_user}",
"gists_url": "https://api.github.com/users/Alenush/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Alenush",
"id": 13781234,
"login": "Alenush",
"node_id": "MDQ6VXNlcjEzNzgxMjM0",
"organizations_url": "https://api.github.com/users/Alenush/orgs",
"received_events_url": "https://api.github.com/users/Alenush/received_events",
"repos_url": "https://api.github.com/users/Alenush/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Alenush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alenush/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Alenush"
} | [] | closed | false | null | [] | null | [] | "2021-10-06T12:32:02Z" | "2021-10-13T10:19:11Z" | "2021-10-13T10:16:04Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3039.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3039",
"merged_at": "2021-10-13T10:16:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3039.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3039"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3039/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3039/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5856/comments | https://api.github.com/repos/huggingface/datasets/issues/5856/events | https://github.com/huggingface/datasets/issues/5856 | 1,709,218,242 | I_kwDODunzps5l4JnC | 5,856 | Error loading natural_questions | {
"avatar_url": "https://avatars.githubusercontent.com/u/19185508?v=4",
"events_url": "https://api.github.com/users/Crownor/events{/privacy}",
"followers_url": "https://api.github.com/users/Crownor/followers",
"following_url": "https://api.github.com/users/Crownor/following{/other_user}",
"gists_url": "https://api.github.com/users/Crownor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Crownor",
"id": 19185508,
"login": "Crownor",
"node_id": "MDQ6VXNlcjE5MTg1NTA4",
"organizations_url": "https://api.github.com/users/Crownor/orgs",
"received_events_url": "https://api.github.com/users/Crownor/received_events",
"repos_url": "https://api.github.com/users/Crownor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Crownor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Crownor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Crownor"
} | [] | closed | false | null | [] | null | [
"Hi! You can avoid this error by using the preprocessed version:\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset('natural_questions')\r\n```\r\n\r\nPS: Once we finish https://github.com/huggingface/datasets/pull/5364, this error will no longer be a problem.",
"> Hi! You can avoid this error by using the preprocessed version:\r\n> \r\n> ```python\r\n> import datasets\r\n> ds = datasets.load_dataset('natural_questions')\r\n> ```\r\n> \r\n> PS: Once we finish #5364, this error will no longer be a problem.\r\n\r\nThanks, wish #5364 finish early"
] | "2023-05-15T02:46:04Z" | "2023-06-05T09:11:19Z" | "2023-06-05T09:11:18Z" | NONE | null | null | null | ### Describe the bug
When trying to load natural_questions with datasets == 2.12.0 and Python == 3.8.9:
```python
import datasets
datasets.load_dataset('natural_questions',beam_runner='DirectRunner')
```
It fails with the following error:
`pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs`
### Steps to reproduce the bug
In a Python console:
```python
import datasets
datasets.load_dataset('natural_questions',beam_runner='DirectRunner')
```
Then the trace is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/builder.py", line 2019, in _download_and_prepare
num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter))
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/arrow_writer.py", line 694, in finalize
shard_num_bytes, _ = parquet_to_arrow(source, destination)
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/arrow_writer.py", line 737, in parquet_to_arrow
for record_batch in parquet_file.iter_batches():
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
```
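As noted in the comments above, the preprocessed version of the dataset avoids the Beam preparation step entirely; a minimal sketch of that workaround:
```python
import datasets

# Loads the already-processed data from the Hub, no beam_runner needed
ds = datasets.load_dataset("natural_questions")
```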
### Expected behavior
The natural_questions dataset loads without errors.
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.9
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.1
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5856/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5856/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6055/comments | https://api.github.com/repos/huggingface/datasets/issues/6055/events | https://github.com/huggingface/datasets/issues/6055 | 1,813,524,145 | I_kwDODunzps5sGC6x | 6,055 | Fix host URL in The Pile datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/7540752?v=4",
"events_url": "https://api.github.com/users/nickovchinnikov/events{/privacy}",
"followers_url": "https://api.github.com/users/nickovchinnikov/followers",
"following_url": "https://api.github.com/users/nickovchinnikov/following{/other_user}",
"gists_url": "https://api.github.com/users/nickovchinnikov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nickovchinnikov",
"id": 7540752,
"login": "nickovchinnikov",
"node_id": "MDQ6VXNlcjc1NDA3NTI=",
"organizations_url": "https://api.github.com/users/nickovchinnikov/orgs",
"received_events_url": "https://api.github.com/users/nickovchinnikov/received_events",
"repos_url": "https://api.github.com/users/nickovchinnikov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nickovchinnikov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickovchinnikov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nickovchinnikov"
} | [] | open | false | null | [] | null | [] | "2023-07-20T09:08:52Z" | "2023-07-20T09:09:37Z" | null | NONE | null | null | null | ### Describe the bug
In #3627 and #5543, you tried to fix the host URL in The Pile datasets, but neither URL is working now:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
And
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
### Steps to reproduce the bug
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://mystic.the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`ConnectTimeout: HTTPSConnectionPool(host='mystic.the-eye.eu', port=443): Max retries exceeded with url: /public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst (Caused by ConnectTimeoutError(, 'Connection to mystic.the-eye.eu timed out. (connect timeout=10.0)'))`
And
```
from datasets import load_dataset
# This takes a few minutes to run, so go grab a tea or coffee while you wait :)
data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
pubmed_dataset = load_dataset("json", data_files=data_files, split="train")
pubmed_dataset
```
Result:
`HTTPError: 404 Client Error: Not Found for URL: https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst`
### Expected behavior
Downloading as normal.
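Since the failure comes from the hosts themselves, a quick reachability probe can help distinguish a dead mirror from a `datasets` bug. This is a hypothetical helper based on `requests`, not part of `datasets`:
```python
import requests

def url_is_reachable(url: str, timeout: float = 10.0) -> bool:
    # A HEAD request is enough to surface 404s and connection timeouts
    try:
        return requests.head(url, timeout=timeout, allow_redirects=True).ok
    except requests.RequestException:
        return False

data_files = "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst"
print(url_is_reachable(data_files))  # False while the mirror is down
```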
### Environment info
- `datasets` version: 2.9.0
- Platform: Windows
- Python version: 3.9.13
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6055/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4105/comments | https://api.github.com/repos/huggingface/datasets/issues/4105/events | https://github.com/huggingface/datasets/issues/4105 | 1,194,297,119 | I_kwDODunzps5HL4cf | 4,105 | push to hub fails with huggingface-hub 0.5.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4",
"events_url": "https://api.github.com/users/frascuchon/events{/privacy}",
"followers_url": "https://api.github.com/users/frascuchon/followers",
"following_url": "https://api.github.com/users/frascuchon/following{/other_user}",
"gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/frascuchon",
"id": 2518789,
"login": "frascuchon",
"node_id": "MDQ6VXNlcjI1MTg3ODk=",
"organizations_url": "https://api.github.com/users/frascuchon/orgs",
"received_events_url": "https://api.github.com/users/frascuchon/received_events",
"repos_url": "https://api.github.com/users/frascuchon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/frascuchon"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! Indeed there was a breaking change in `huggingface_hub` 0.5.0 in `HfApi.create_repo`, which is called here in `datasets` by passing the org name in both the `repo_id` and the `organization` arguments:\r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nI think we should fix that in `huggingface_hub`, will keep you posted. In the meantime please use `huggingface_hub` 0.4.0",
"I'll be sending a fix for this later today on the `huggingface_hub` side.\r\n\r\nThe error would be converted to a `FutureWarning` if `datasets` uses kwargs instead of positional, for example here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/2230f7f7d7fbaf102cff356f5a8f3bd1561bea43/src/datasets/arrow_dataset.py#L3363-L3369\r\n\r\nto be:\r\n\r\n``` python\r\n api.create_repo(\r\n name=dataset_name,\r\n token=token,\r\n repo_type=\"dataset\",\r\n organization=organization,\r\n private=private,\r\n )\r\n```\r\n\r\nBut `name` and `organization` are deprecated in `huggingface_hub=0.5`, and people should pass `repo_id='org/name` instead. Note that `repo_id` was introduced in 0.5 and if `datasets` wants to support older `huggingface_hub` versions (which I encourage it to do), there needs to be a helper function to do that. It can be something like:\r\n\r\n\r\n```python\r\ndef create_repo(\r\n client,\r\n name: str,\r\n token: Optional[str] = None,\r\n organization: Optional[str] = None,\r\n private: Optional[bool] = None,\r\n repo_type: Optional[str] = None,\r\n exist_ok: Optional[bool] = False,\r\n space_sdk: Optional[str] = None,\r\n) -> str:\r\n try:\r\n return client.create_repo(\r\n repo_id=f\"{organization}/{name}\",\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n except TypeError:\r\n return client.create_repo(\r\n name=name,\r\n organization=organization,\r\n token=token,\r\n private=private,\r\n repo_type=repo_type,\r\n exist_ok=exist_ok,\r\n space_sdk=space_sdk,\r\n )\r\n```\r\n\r\nin a `utils/_fixes.py` kinda file and and be used internally.\r\n\r\nI'll be sending a patch to `huggingface_hub` to convert the error reported in this issue to a `FutureWarning`.",
"PR with the hotfix on the `huggingface_hub` side: https://github.com/huggingface/huggingface_hub/pull/822",
"We can definitely change `push_to_hub` to use `repo_id` in `datasets` and require `huggingface_hub>=0.5.0`.\r\n\r\nLet me open a PR :)",
"`huggingface_hub` 0.5.1 just got released with a fix, feel free to update `huggingface_hub` ;)"
] | "2022-04-06T08:59:57Z" | "2022-04-13T14:30:47Z" | "2022-04-13T14:30:47Z" | NONE | null | null | null | ## Describe the bug
`ds.push_to_hub` is failing when updating a dataset in the form "org_id/repo_id"
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("rubrix/news_test")
ds.push_to_hub("<your-user>/news_test", token="<your-token>")
```
## Expected results
The dataset is successfully uploaded
## Actual results
A validation error is raised:
```bash
if repo_id and (name or organization):
> raise ValueError(
"Only pass `repo_id` and leave deprecated `name` and "
"`organization` to be None."
E ValueError: Only pass `repo_id` and leave deprecated `name` and `organization` to be None.
```
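For anyone landing here with the same traceback, the comments above point to a version fix rather than a code change; the sketch below only illustrates guarding against the broken release (version numbers taken from the thread, not from official docs).
```python
# Illustrative guard only - per the comments above, the actual fix is to upgrade
# huggingface_hub to >= 0.5.1 (or pin 0.4.0) before calling Dataset.push_to_hub.
from packaging import version
import huggingface_hub

if version.parse(huggingface_hub.__version__) == version.parse("0.5.0"):
    raise RuntimeError("huggingface_hub 0.5.0 breaks push_to_hub; upgrade to >= 0.5.1")
```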
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.1
- `huggingface-hub`: 0.5
- Platform: macOS
- Python version: 3.8.12
- PyArrow version: 6.0.0
cc @adrinjalali
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4105/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4105/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1844/comments | https://api.github.com/repos/huggingface/datasets/issues/1844/events | https://github.com/huggingface/datasets/issues/1844 | 803,588,125 | MDU6SXNzdWU4MDM1ODgxMjU= | 1,844 | Update Open Subtitles corpus with original sentence IDs | {
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"events_url": "https://api.github.com/users/Valahaar/events{/privacy}",
"followers_url": "https://api.github.com/users/Valahaar/followers",
"following_url": "https://api.github.com/users/Valahaar/following{/other_user}",
"gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Valahaar",
"id": 19476123,
"login": "Valahaar",
"node_id": "MDQ6VXNlcjE5NDc2MTIz",
"organizations_url": "https://api.github.com/users/Valahaar/orgs",
"received_events_url": "https://api.github.com/users/Valahaar/received_events",
"repos_url": "https://api.github.com/users/Valahaar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Valahaar"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | [] | null | [
"Hi ! You're right this can can useful.\r\nThis should be easy to add, so feel free to give it a try if you want to contribute :)\r\nI think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L103)",
"Hey @lhoestq , absolutely yes! Just one question before I start implementing. The ids found in the zip file have this format: \r\n(the following is line `22497315` of the `ids` file of the `de-en` dump)\r\n\r\n\r\n`de/2017/7006210/7063319.xml.gz en/2017/7006210/7050201.xml.gz 335 339 340` (every space is actually a tab, aside from the space between `339` and `340`)\r\n\r\n\r\nWhere filenames encode the information like this: `lang/year/imdb_id/opensubtitles_id.xml.gz` whereas the numbers correspond to the sentence ids which are linked together (i.e. sentence `335` of the German subtitle corresponds to lines `339` and `340` of the English file)\r\n\r\nThat being said, do you think I should stick to the raw sentence id (and replace the current sequential id) or should I include more detailed metadata (or both things maybe)?\r\n\r\nGoing with raw ID is surely simpler, but including `year`, `imdbId` and `subtitleId` should save space as they're just integers; besides, any operation (like filtering or grouping) will be much easier if users don't have to manually parse the ids every time.\r\nAs for the language-specific sentenceIds, what could be the best option? A list of integers or a comma-separated string?\r\n\r\n**Note:** I did not find any official information about this encoding, but it appears to check out:\r\nhttps://www.imdb.com/title/tt7006210/, https://www.opensubtitles.org/en/subtitles/7063319 and https://www.opensubtitles.org/en/subtitles/7050201 all link to the same episode, so I guess (I hope!) it's correct.\r\n\r\n",
"I like the idea of having `year`, `imdbId` and `subtitleId` as columns for filtering for example.\r\nAnd for the `sentenceIds` a list of integers is fine.",
"Thanks for improving it @Valahaar :) ",
"Something like this? (adapted from [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L114))\r\n\r\n```python\r\nresult = (\r\n sentence_counter,\r\n {\r\n \"id\": str(sentence_counter),\r\n \"meta\": {\r\n \"year\": year,\r\n \"imdbId\": imdb_id,\r\n \"subtitleId\": {l1: l1_sub_id, l2: l2_sub_id},\r\n \"sentenceIds\": {l1: [... source_sids ...], l2: [... target_sids ...]},\r\n # or maybe src/tgt? I'd go with the first one for consistency with 'translation'\r\n \"subtitleId\": {\"src\": l1_sub_id, \"tgt\": l2_sub_id},\r\n \"sentenceIds\": {\"src\": [... source_sids ...], \"tgt\": [... target_sids ...]},\r\n },\r\n \"translation\": {l1: x, l2: y},\r\n },\r\n )\r\n```\r\nOr at top level, avoiding nesting into 'meta'?",
"Merged in #1865, closing. Thanks :)"
] | "2021-02-08T13:55:13Z" | "2021-02-12T17:38:58Z" | "2021-02-12T17:38:58Z" | CONTRIBUTOR | null | null | null | Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles).
I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat allowing for document-level machine translation (and other document-level stuff which could be cool to have); second, it's possible to have parallel sentences in multiple languages, as they share the same ids across bitexts.
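To make this concrete, here is a rough sketch (mine, not part of the original request) of how one line of the OpenSubtitles `ids` file could be parsed, following the format described in the comments above (`lang/year/imdb_id/opensubtitles_id.xml.gz` plus tab-separated sentence ids):
```python
# Illustrative sketch only: parse one tab-separated line of the OpenSubtitles `ids` file.
def parse_ids_line(line: str):
    src_path, tgt_path, src_sids, tgt_sids = line.rstrip("\n").split("\t")

    def parse_path(path):
        lang, year, imdb_id, fname = path.split("/")
        return {"lang": lang, "year": int(year), "imdbId": int(imdb_id),
                "subtitleId": int(fname.split(".")[0])}

    return {
        "src": parse_path(src_path),
        "tgt": parse_path(tgt_path),
        "src_sentence_ids": [int(i) for i in src_sids.split()],
        "tgt_sentence_ids": [int(i) for i in tgt_sids.split()],
    }

parse_ids_line("de/2017/7006210/7063319.xml.gz\ten/2017/7006210/7050201.xml.gz\t335\t339 340")
```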
I think I should tag @abhishekkrthakur as he's the one who added it in the first place.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1844/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1844/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3705/comments | https://api.github.com/repos/huggingface/datasets/issues/3705/events | https://github.com/huggingface/datasets/pull/3705 | 1,132,053,226 | PR_kwDODunzps4yfhyj | 3,705 | Raise informative error when loading a save_to_disk dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2022-02-11T08:21:03Z" | "2022-02-11T22:56:40Z" | "2022-02-11T22:56:39Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3705.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3705",
"merged_at": "2022-02-11T22:56:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3705.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3705"
} | People recurrently report an error when trying to load a dataset (using `load_dataset`) that was previously saved using `save_to_disk`.
This PR raises an informative error message telling them they should use `load_from_disk` instead.
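For context, a minimal sketch of the round-trip the new error message is guarding (illustrative only; the dataset name is just an example):
```python
from datasets import load_dataset, load_from_disk

ds = load_dataset("squad", split="train")
ds.save_to_disk("my_local_dataset")

reloaded = load_from_disk("my_local_dataset")  # correct way to reload it
# load_dataset("my_local_dataset")             # now raises the informative error instead of a confusing one
```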
Close #3700. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3705/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3705/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2367/comments | https://api.github.com/repos/huggingface/datasets/issues/2367/events | https://github.com/huggingface/datasets/pull/2367 | 893,317,427 | MDExOlB1bGxSZXF1ZXN0NjQ1ODUxNTE0 | 2,367 | Remove getchildren from hyperpartisan news detection | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | [] | "2021-05-17T13:10:37Z" | "2021-05-17T14:07:13Z" | "2021-05-17T14:07:13Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2367.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2367",
"merged_at": "2021-05-17T14:07:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2367.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2367"
} | `Element.getchildren()` is deprecated in the ElementTree library and was removed in Python 3.9, so this still passes the automated tests (which use Python 3.6), but for those of us on bleeding-edge distros it now fails.
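For context, a small sketch of the portable replacement (not the dataset script's actual code):
```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<doc><p>a</p><p>b</p></doc>")

# children = root.getchildren()   # removed in Python 3.9
children = list(root)             # portable equivalent: list of direct children
for child in root:                # or simply iterate over the element
    print(child.tag)
```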
https://bugs.python.org/issue29209 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2367/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2367/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3899/comments | https://api.github.com/repos/huggingface/datasets/issues/3899/events | https://github.com/huggingface/datasets/pull/3899 | 1,166,931,812 | PR_kwDODunzps40UzR3 | 3,899 | Add exact match metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emibaylor",
"id": 27527747,
"login": "emibaylor",
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emibaylor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-11T22:21:40Z" | "2022-03-21T16:10:03Z" | "2022-03-21T16:05:35Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3899.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3899",
"merged_at": "2022-03-21T16:05:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3899.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3899"
} | Adding the exact match metric and its metric card.
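For reference, "exact match" here is the share of predictions that are identical to their references; a toy sketch (not the metric's actual implementation):
```python
# Toy illustration of the exact match score (fraction of predictions identical to references).
def exact_match(predictions, references):
    assert len(predictions) == len(references) and predictions
    return sum(p == r for p, r in zip(predictions, references)) / len(predictions)

print(exact_match(["the cat", "a dog"], ["the cat", "the dog"]))  # 0.5
```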
Note: Some of the tests have failed, but I wanted to make a PR anyway so that the rest of the code can be reviewed if anyone has time. I'll look into + work on fixing the failed tests when I'm back online after the weekend | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3899/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3899/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3618/comments | https://api.github.com/repos/huggingface/datasets/issues/3618/events | https://github.com/huggingface/datasets/issues/3618 | 1,112,123,365 | I_kwDODunzps5CSafl | 3,618 | TIMIT Dataset not working with GPU | {
"avatar_url": "https://avatars.githubusercontent.com/u/3227869?v=4",
"events_url": "https://api.github.com/users/TheSeamau5/events{/privacy}",
"followers_url": "https://api.github.com/users/TheSeamau5/followers",
"following_url": "https://api.github.com/users/TheSeamau5/following{/other_user}",
"gists_url": "https://api.github.com/users/TheSeamau5/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TheSeamau5",
"id": 3227869,
"login": "TheSeamau5",
"node_id": "MDQ6VXNlcjMyMjc4Njk=",
"organizations_url": "https://api.github.com/users/TheSeamau5/orgs",
"received_events_url": "https://api.github.com/users/TheSeamau5/received_events",
"repos_url": "https://api.github.com/users/TheSeamau5/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TheSeamau5/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TheSeamau5/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TheSeamau5"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! I think you should avoid calling `timit_train['audio']`. Indeed by doing so you're **loading all the audio column in memory**. This is problematic in your case because the TIMIT dataset is huge.\r\n\r\nIf you want to access the audio data of some samples, you should do this instead `timit_train[:10][\"train\"]` for example.\r\n\r\nOther than that, I'm not sure why you get a `TypeError: string indices must be integers`, do you have a code snippet that reproduces the issue that you can share here ?",
"I get the same error when I try to do `timit_train[0]` or really any indexing into the whole thing. \r\n\r\nReally, that IS the code snippet that reproduces the issue. If you index into other fields like 'file' or whatever, it works. As soon as one of the fields you're looking into is 'audio', you get that issue. It's a weird issue and I suspect it's Sagemaker/environment related, maybe the mix of libraries and dependencies are not good. \r\n\r\n\r\nExample code snippet with issue. \r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntimit_train = load_dataset('timit_asr', split='train')\r\nprint(timit_train[0])\r\n```",
"Ok I see ! From the error you got, it looks like the `value` encoded in the arrow file of the TIMIT dataset you loaded is a string instead of a dictionary with keys \"path\" and \"bytes\" but we don't support this since 1.18\r\n\r\nCan you try regenerating the dataset with `load_dataset('timit_asr', download_mode=\"force_redownload\")` please ? I think it should fix the issue."
] | "2022-01-24T03:26:03Z" | "2023-07-25T15:20:20Z" | "2023-07-25T15:20:20Z" | NONE | null | null | null | ## Describe the bug
I am trying to use the TIMIT dataset in order to fine-tune a Wav2Vec2 model, and I am unable to load the "audio" column from the dataset when working with a GPU.
I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4dn.xlarge instance (corresponds to a Tesla T4 GPU).
I don't know if the issue is GPU related or Python environment related because everything works when I work off of the CPU Optimized environment with a non-GPU instance. My code also works on Google Colab with a GPU instance.
This issue is blocking because I cannot get the 'audio' column in any way due to this error, which means that I can't pass it to any functions. I later use the dataset.map function and that is where I originally noticed this error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
timit_train = load_dataset('timit_asr', split='train')
print(timit_train['audio'])
```
## Expected results
Expected to see the contents of the 'audio' column, which contains an 'array' nested field with the array data I actually need.
## Actual results
Traceback
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-ceeac555e921> in <module>
----> 1 timit_train['audio']
/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1917 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1918 return self._getitem(
-> 1919 key,
1920 )
1921
/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1902 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1903 formatted_output = format_table(
-> 1904 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1905 )
1906 return formatted_output
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
529 python_formatter = PythonFormatter(features=None)
530 if format_columns is None:
--> 531 return formatter(pa_table, query_type=query_type)
532 elif query_type == "column":
533 if key in format_columns:
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
280 return self.format_row(pa_table)
281 elif query_type == "column":
--> 282 return self.format_column(pa_table)
283 elif query_type == "batch":
284 return self.format_batch(pa_table)
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_column(self, pa_table)
315 column = self.python_arrow_extractor().extract_column(pa_table)
316 if self.decoded:
--> 317 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
318 return column
319
/opt/conda/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_column(self, column, column_name)
221
222 def decode_column(self, column: list, column_name: str) -> list:
--> 223 return self.features.decode_column(column, column_name) if self.features else column
224
225 def decode_batch(self, batch: dict) -> dict:
/opt/conda/lib/python3.6/site-packages/datasets/features/features.py in decode_column(self, column, column_name)
1337 return (
1338 [self[column_name].decode_example(value) if value is not None else None for value in column]
-> 1339 if self._column_requires_decoding[column_name]
1340 else column
1341 )
/opt/conda/lib/python3.6/site-packages/datasets/features/features.py in <listcomp>(.0)
1336 """
1337 return (
-> 1338 [self[column_name].decode_example(value) if value is not None else None for value in column]
1339 if self._column_requires_decoding[column_name]
1340 else column
/opt/conda/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
85 dict
86 """
---> 87 path, file = (value["path"], BytesIO(value["bytes"])) if value["bytes"] is not None else (value["path"], None)
88 if path is None and file is None:
89 raise ValueError(f"An audio sample should have one of 'path' or 'bytes' but both are None in {value}.")
TypeError: string indices must be integers
```
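For readers skimming past the traceback: the maintainers' suggestions in the comments above amount to regenerating the cached files and indexing rows or small slices instead of the whole column, roughly:
```python
# Sketch of the advice from the comments above (not a guaranteed fix):
from datasets import load_dataset

# Re-generate the cache, since the old Arrow files store `audio` values as plain strings:
timit_train = load_dataset("timit_asr", split="train", download_mode="force_redownload")

# Avoid materialising the whole `audio` column in memory; take rows or small slices instead:
first_audio = timit_train[0]["audio"]
small_batch = timit_train[:10]["audio"]
```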
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.0
- Platform: Linux-4.14.256-197.484.amzn2.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.13
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3618/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3618/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5151/comments | https://api.github.com/repos/huggingface/datasets/issues/5151/events | https://github.com/huggingface/datasets/issues/5151 | 1,420,791,163 | I_kwDODunzps5Ur417 | 5,151 | Add support to create different configs with `push_to_hub` (+ inferring configs from directories with package managers?) | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
}
] | null | [
"also asked in https://discuss.huggingface.co/t/create-multiple-dataset-configs-with-push-to-hub-method/25480"
] | "2022-10-24T12:59:18Z" | "2022-11-04T14:55:20Z" | null | CONTRIBUTOR | null | null | null | Now one can push only different splits within one default config of a dataset.
Would be nice to allow something like:
```
ds.push_to_hub(repo_name, config=config_name)
```
I'm not sure, but this will probably require changes in `data_files.py` patterns. If so, it would also allow to create different configs for packaged modules datasets.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5151/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2012/comments | https://api.github.com/repos/huggingface/datasets/issues/2012/events | https://github.com/huggingface/datasets/issues/2012 | 825,634,064 | MDU6SXNzdWU4MjU2MzQwNjQ= | 2,012 | No upstream branch | {
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/theo-m",
"id": 17948980,
"login": "theo-m",
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"repos_url": "https://api.github.com/users/theo-m/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"type": "User",
"url": "https://api.github.com/users/theo-m"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"What's the issue exactly ?\r\n\r\nGiven an `upstream` remote repository with url `https://github.com/huggingface/datasets.git`, you can totally rebase from `upstream/master`.\r\n\r\nIt's mentioned at the beginning how to add the `upstream` remote repository\r\n\r\nhttps://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L10-L14",
"~~What difference is there with the default `origin` remote that is set when you clone the repo?~~ I've just understood that this applies to **forks** of the repo 🤡 "
] | "2021-03-09T09:48:55Z" | "2021-03-09T11:33:31Z" | "2021-03-09T11:33:31Z" | CONTRIBUTOR | null | null | null | Feels like the documentation on adding a new dataset is outdated?
https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54
There is no upstream branch on remote. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2012/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2012/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3673/comments | https://api.github.com/repos/huggingface/datasets/issues/3673/events | https://github.com/huggingface/datasets/issues/3673 | 1,123,010,520 | I_kwDODunzps5C78fY | 3,673 | `load_dataset("snli")` is different from dataset viewer | {
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pietrolesci",
"id": 61748653,
"login": "pietrolesci",
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pietrolesci"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null | [
"Yes, we decided to replace the encoded label with the corresponding label when possible in the dataset viewer. But\r\n1. maybe it's the wrong default\r\n2. we could find a way to show both (with a switch, or showing both ie. `0 (neutral)`).\r\n",
"Hi @severo,\r\n\r\nThanks for clarifying. \r\n\r\nI think this default is a bit counterintuitive for the user. However, this is a personal opinion that might not be general. I think it is nice to have the actual (non-encoded) labels in the viewer. On the other hand, it would be nice to match what the user sees with what they get when they download a dataset. I don't know - I can see the difficulty of choosing a default :)\r\nMaybe having non-encoded labels as a default can be useful?\r\n\r\nAnyway, I think the issue has been addressed. Thanks a lot for your super-quick answer!\r\n\r\n ",
"Thanks for the 👍 in https://github.com/huggingface/datasets/issues/3673#issuecomment-1029008349 @mariosasko @gary149 @pietrolesci, but as I proposed various solutions, it's not clear to me which you prefer. Could you write your preferences as a comment?\r\n\r\n_(note for myself: one idea per comment in the future)_",
"As I am working with seq2seq, I prefer having the label in string form rather than numeric. So the viewer is fine and the underlying dataset should be \"decoded\" (from int to str). In this way, the user does not have to search for a mapping `int -> original name` (even though is trivial to find, I reckon). Also, encoding labels is rather easy.\r\n\r\nI hope this is useful",
"I like the idea of \"0 (neutral)\". The label name can even be greyed to make it clear that it's not part of the actual item in the dataset, it's just the meaning.",
"I like @lhoestq's idea of having grayed-out labels.",
"Proposals by @gary149. Which one do you prefer? Please vote with the thumbs\r\n\r\n- 👍 \r\n\r\n ![image](https://user-images.githubusercontent.com/1676121/152387949-883c7d7e-a9f3-48aa-bff9-11a691555e6e.png)\r\n\r\n- 👎 \r\n\r\n ![image (1)](https://user-images.githubusercontent.com/1676121/152388061-32d95e42-cade-4ae4-9a77-7365e7b72b8f.png)\r\n\r\n",
"I like Option 1 better as it shows clearly what the user is downloading",
"Thanks! ",
"It's [live](https://huggingface.co/datasets/glue/viewer/cola/train):\r\n\r\n<img width=\"1126\" alt=\"Capture d’écran 2022-02-14 à 10 26 03\" src=\"https://user-images.githubusercontent.com/1676121/153836716-25f6205b-96af-42d8-880a-7c09cb24c420.png\">\r\n\r\nThanks all for the help to improve the UI!",
"Love it ! thanks :)"
] | "2022-02-03T12:10:43Z" | "2022-02-16T11:22:31Z" | "2022-02-11T17:01:21Z" | NONE | null | null | null | ## Describe the bug
The dataset that is downloaded from the Hub via `load_dataset("snli")` is different from what is available in the dataset viewer. In the viewer the labels are not encoded (i.e., "neutral", "entailment", "contradiction"), while the downloaded dataset shows the encoded labels (i.e., 0, 1, 2).
Is this expected?
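For anyone wondering how to recover the readable labels locally, the integer values are `ClassLabel` ids and the mapping travels with the dataset's features; a quick sketch (illustrative, not part of the original report):
```python
from datasets import load_dataset

snli = load_dataset("snli", split="validation")
label_feature = snli.features["label"]   # ClassLabel carrying the string names
print(label_feature.names)               # e.g. ['entailment', 'neutral', 'contradiction']
print(label_feature.int2str(0))          # maps an encoded label back to its name
```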
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 20.4
- Python version: 3.7
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3673/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3673/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1817/comments | https://api.github.com/repos/huggingface/datasets/issues/1817/events | https://github.com/huggingface/datasets/issues/1817 | 800,870,652 | MDU6SXNzdWU4MDA4NzA2NTI= | 1,817 | pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9610770?v=4",
"events_url": "https://api.github.com/users/LuCeHe/events{/privacy}",
"followers_url": "https://api.github.com/users/LuCeHe/followers",
"following_url": "https://api.github.com/users/LuCeHe/following{/other_user}",
"gists_url": "https://api.github.com/users/LuCeHe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LuCeHe",
"id": 9610770,
"login": "LuCeHe",
"node_id": "MDQ6VXNlcjk2MTA3NzA=",
"organizations_url": "https://api.github.com/users/LuCeHe/orgs",
"received_events_url": "https://api.github.com/users/LuCeHe/received_events",
"repos_url": "https://api.github.com/users/LuCeHe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LuCeHe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LuCeHe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LuCeHe"
} | [] | closed | false | null | [] | null | [
"Hi !\r\nThe error you have is due to the `input_ids` column not having the same number of examples as the other columns.\r\nIndeed you're concatenating the `input_ids` at this line:\r\n\r\nhttps://github.com/LuCeHe/GenericTools/blob/431835d8e13ec24dceb5ee4dc4ae58f0e873b091/KerasTools/lm_preprocessing.py#L134\r\n\r\nHowever the other columns are kept unchanged, and therefore you end up with an `input_ids` column with 599 elements while the others columns like `attention_mask` have 1500.\r\n\r\nTo fix that you can instead concatenate them all using\r\n```python\r\nconcatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\n```\r\n\r\nAlso you may need to drop the \"text\" column before applying `group_texts` since strings can't be concatenated with lists. You can drop it at the tokenization step:\r\n```python\r\ndset = dset.map(\r\n tokenize_function,\r\n batched=True,\r\n remove_columns=[\"text\"]\r\n)\r\n```",
"You saved my life."
] | "2021-02-04T02:30:23Z" | "2022-10-05T12:42:57Z" | "2022-10-05T12:42:57Z" | NONE | null | null | null | I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials and here you can find the script that is failing right at the end
https://github.com/LuCeHe/GenericTools/blob/master/KerasTools/lm_preprocessing.py
In the last iteration of the last dset.map, it gives the error that I copied in the title. Another issue I have is that if I leave the batch_size set as 1000 in the last .map, I'm afraid it's going to lose most of the text, so I'm considering setting both writer_batch_size and batch_size to 300K, but I'm not sure that's the best way to go.
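For anyone skimming, the resolution in the comments above boils down to concatenating every column (not just `input_ids`) and dropping the raw `"text"` column before grouping; a rough sketch (not the script's actual code):
```python
# Sketch of the fix suggested in the comments above (illustrative only):
def group_texts(examples, block_size=1024):
    # Concatenate *all* columns so they keep the same number of rows.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [v[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, v in concatenated.items()
    }

# dset = dset.map(tokenize_function, batched=True, remove_columns=["text"])
# dset = dset.map(group_texts, batched=True)
```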
Can you help me?
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1817/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1817/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3528/comments | https://api.github.com/repos/huggingface/datasets/issues/3528/events | https://github.com/huggingface/datasets/pull/3528 | 1,093,844,616 | PR_kwDODunzps4wiOqH | 3,528 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meg-huggingface",
"id": 90473723,
"login": "meg-huggingface",
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meg-huggingface"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meg-huggingface",
"id": 90473723,
"login": "meg-huggingface",
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meg-huggingface"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4",
"events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}",
"followers_url": "https://api.github.com/users/meg-huggingface/followers",
"following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}",
"gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/meg-huggingface",
"id": 90473723,
"login": "meg-huggingface",
"node_id": "MDQ6VXNlcjkwNDczNzIz",
"organizations_url": "https://api.github.com/users/meg-huggingface/orgs",
"received_events_url": "https://api.github.com/users/meg-huggingface/received_events",
"repos_url": "https://api.github.com/users/meg-huggingface/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions",
"type": "User",
"url": "https://api.github.com/users/meg-huggingface"
}
] | null | [] | "2022-01-04T23:48:11Z" | "2022-01-05T12:49:41Z" | "2022-01-05T12:49:40Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3528.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3528",
"merged_at": "2022-01-05T12:49:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3528.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3528"
} | Updating license with appropriate capitalization & a link.
Updating Personal and Sensitive Information to address PII concern. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3528/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3528/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5364/comments | https://api.github.com/repos/huggingface/datasets/issues/5364/events | https://github.com/huggingface/datasets/pull/5364 | 1,498,360,628 | PR_kwDODunzps5Fiss1 | 5,364 | Support for writing arrow files directly with BeamWriter | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5364). All of your documentation changes will be reflected on that endpoint.",
"Deleting `BeamPipeline` and `upload_local_to_remote` would break the existing Beam scripts, so I reverted this change.\r\n\r\nFrom what I understand, we need these components in our scripts for the pattern:\r\n```python\r\nif not pipeline.is_local():\r\n dl_manager.ship_files_with_pipeline()\r\n```\r\n\r\nI plan to address this in a subsequent PR by (implicitly) downloading the files directly to the remote storage of the non-local runners.",
"I got `AttributeError: 'Pipeline' object has no attribute 'is_local'` when running\r\n```python\r\nload_dataset(\"wikipedia\", language=\"af\", date=\"20230101\", beam_runner=\"DirectRunner\")\r\n```\r\n```python\r\n~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py in _split_generators(self, dl_manager, pipeline)\r\n 965 # Use dictionary since testing mock always returns the same result.\r\n 966 downloaded_files = dl_manager.download({\"xml\": xml_urls})\r\n--> 967 if not pipeline.is_local():\r\n 968 downloaded_files = dl_manager.ship_files_with_pipeline(downloaded_files, pipeline)\r\n 969 \r\n\r\nAttributeError: 'Pipeline' object has no attribute 'is_local'\r\n```",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010649 / 0.011353 (-0.000704) | 0.006116 / 0.011008 (-0.004892) | 0.115568 / 0.038508 (0.077060) | 0.041704 / 0.023109 (0.018595) | 0.360459 / 0.275898 (0.084561) | 0.425679 / 0.323480 (0.102200) | 0.008992 / 0.007986 (0.001006) | 0.006321 / 0.004328 (0.001993) | 0.090223 / 0.004250 (0.085973) | 0.049877 / 0.037052 (0.012824) | 0.382447 / 0.258489 (0.123958) | 0.406567 / 0.293841 (0.112726) | 0.045138 / 0.128546 (-0.083409) | 0.014203 / 0.075646 (-0.061444) | 0.388897 / 0.419271 (-0.030375) | 0.057176 / 0.043533 (0.013644) | 0.358729 / 0.255139 (0.103590) | 0.386086 / 0.283200 (0.102887) | 0.119221 / 0.141683 (-0.022462) | 1.731574 / 1.452155 (0.279419) | 1.744103 / 1.492716 (0.251386) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230380 / 0.018006 (0.212373) | 0.493690 / 0.000490 (0.493201) | 0.005150 / 0.000200 (0.004950) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030771 / 0.037411 (-0.006641) | 0.123196 / 0.014526 (0.108671) | 0.134097 / 0.176557 (-0.042459) | 0.190442 / 0.737135 (-0.546693) | 0.138416 / 0.296338 (-0.157923) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.469763 / 0.215209 (0.254554) | 4.682847 / 2.077655 (2.605192) | 2.076717 / 1.504120 (0.572597) | 1.843721 / 1.541195 (0.302527) | 1.923486 / 1.468490 
(0.454996) | 0.817680 / 4.584777 (-3.767097) | 4.482409 / 3.745712 (0.736697) | 3.898695 / 5.269862 (-1.371167) | 2.078291 / 4.565676 (-2.487386) | 0.100285 / 0.424275 (-0.323990) | 0.014761 / 0.007607 (0.007154) | 0.611261 / 0.226044 (0.385217) | 5.926919 / 2.268929 (3.657990) | 2.685080 / 55.444624 (-52.759544) | 2.232179 / 6.876477 (-4.644298) | 2.305576 / 2.142072 (0.163504) | 0.993729 / 4.805227 (-3.811498) | 0.194491 / 6.500664 (-6.306173) | 0.074176 / 0.075469 (-0.001293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.388592 / 1.841788 (-0.453196) | 17.146945 / 8.074308 (9.072636) | 15.989570 / 10.191392 (5.798178) | 0.200147 / 0.680424 (-0.480277) | 0.034009 / 0.534201 (-0.500192) | 0.517531 / 0.579283 (-0.061753) | 0.533966 / 0.434364 (0.099602) | 0.637024 / 0.540337 (0.096687) | 0.749166 / 1.386936 (-0.637770) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008240 / 0.011353 (-0.003113) | 0.006139 / 0.011008 (-0.004869) | 0.112258 / 0.038508 (0.073750) | 0.039001 / 0.023109 (0.015891) | 0.449467 / 0.275898 (0.173569) | 0.483422 / 0.323480 (0.159942) | 0.006176 / 0.007986 (-0.001810) | 0.006340 / 0.004328 (0.002012) | 0.083105 / 0.004250 (0.078855) | 0.047002 / 0.037052 (0.009950) | 0.458564 / 0.258489 (0.200075) | 0.513704 / 0.293841 (0.219863) | 0.041359 / 0.128546 (-0.087188) | 0.014515 / 0.075646 (-0.061131) | 0.392599 / 0.419271 (-0.026673) | 0.055222 / 0.043533 (0.011690) | 0.446956 / 0.255139 (0.191817) | 0.469194 / 0.283200 (0.185994) | 0.118212 / 0.141683 (-0.023471) | 1.682647 / 1.452155 (0.230492) | 1.780076 / 1.492716 (0.287360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259124 / 0.018006 (0.241117) | 0.507559 / 0.000490 (0.507069) | 0.001080 / 0.000200 (0.000880) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031969 / 0.037411 (-0.005442) | 0.126997 / 0.014526 (0.112471) | 0.139593 / 0.176557 (-0.036963) | 0.182735 / 0.737135 (-0.554400) | 0.145871 / 0.296338 (-0.150468) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.530894 / 0.215209 (0.315685) | 5.284979 / 2.077655 (3.207324) | 2.592886 / 1.504120 (1.088766) | 2.407202 / 1.541195 (0.866007) | 2.434079 / 1.468490 (0.965589) | 0.829382 / 4.584777 (-3.755395) | 4.481710 / 3.745712 (0.735998) | 3.912280 / 5.269862 (-1.357581) | 1.962291 / 4.565676 (-2.603386) | 0.101840 / 0.424275 (-0.322435) | 0.014528 / 0.007607 (0.006921) | 0.639956 / 0.226044 (0.413911) | 6.414685 / 2.268929 (4.145756) | 3.240290 / 55.444624 (-52.204334) | 2.795208 / 6.876477 (-4.081269) | 2.912122 / 2.142072 (0.770050) | 0.992188 / 4.805227 (-3.813039) | 0.200701 / 6.500664 (-6.299964) | 0.074235 / 0.075469 (-0.001234) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.455075 / 1.841788 (-0.386712) | 17.186669 / 8.074308 (9.112361) | 15.404357 / 10.191392 (5.212965) | 0.168267 / 0.680424 (-0.512157) | 0.020774 / 0.534201 (-0.513427) | 0.502603 / 0.579283 (-0.076680) | 0.506500 / 0.434364 (0.072136) | 0.624245 / 0.540337 (0.083907) | 0.735529 / 1.386936 (-0.651407) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n"
] | "2022-12-15T12:38:05Z" | "2023-01-25T15:49:25Z" | null | CONTRIBUTOR | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5364.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5364",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5364.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5364"
} | Make it possible to write Arrow files directly with `BeamWriter` rather than converting from Parquet to Arrow, which is sub-optimal, especially for big datasets for which Beam is primarily used. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5364/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5364/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3931/comments | https://api.github.com/repos/huggingface/datasets/issues/3931/events | https://github.com/huggingface/datasets/pull/3931 | 1,170,097,208 | PR_kwDODunzps40fBjx | 3,931 | Add align_labels_with_mapping docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-15T19:24:57Z" | "2022-03-18T16:28:31Z" | "2022-03-18T16:24:33Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3931.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3931",
"merged_at": "2022-03-18T16:24:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3931.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3931"
} | This PR documents the `align_labels_with_mapping` function to ensure predicted labels are aligned with the dataset, or to assign a different mapping of labels to ids (requested by @mariosasko 🎉 ).
For this specific code sample, the current dataset has a `mixed` label that the original [dataset](https://huggingface.co/datasets/poem_sentiment#data-fields) didn't. Is there a way to remove this label so it is completely aligned with the original dataset mappings? Otherwise, I'll just leave it as it is. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3931/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1650/comments | https://api.github.com/repos/huggingface/datasets/issues/1650/events | https://github.com/huggingface/datasets/pull/1650 | 775,545,912 | MDExOlB1bGxSZXF1ZXN0NTQ2MjA0MzYy | 1,650 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4",
"events_url": "https://api.github.com/users/MisbahKhan789/events{/privacy}",
"followers_url": "https://api.github.com/users/MisbahKhan789/followers",
"following_url": "https://api.github.com/users/MisbahKhan789/following{/other_user}",
"gists_url": "https://api.github.com/users/MisbahKhan789/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MisbahKhan789",
"id": 15351802,
"login": "MisbahKhan789",
"node_id": "MDQ6VXNlcjE1MzUxODAy",
"organizations_url": "https://api.github.com/users/MisbahKhan789/orgs",
"received_events_url": "https://api.github.com/users/MisbahKhan789/received_events",
"repos_url": "https://api.github.com/users/MisbahKhan789/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MisbahKhan789/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MisbahKhan789/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MisbahKhan789"
} | [] | closed | false | null | [] | null | [] | "2020-12-28T19:09:05Z" | "2020-12-29T10:43:14Z" | "2020-12-29T10:43:14Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1650.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1650",
"merged_at": "2020-12-29T10:43:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1650.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1650"
} | added dataset summary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1650/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1650/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5196/comments | https://api.github.com/repos/huggingface/datasets/issues/5196/events | https://github.com/huggingface/datasets/pull/5196 | 1,434,401,646 | PR_kwDODunzps5CH439 | 5,196 | Use hfh hf_hub_url function | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5196). All of your documentation changes will be reflected on that endpoint.",
"@lhoestq I think we should first agree if `datasets` can introduce the breaking change of ignoring `config.HUB_DATASETS_URL`: some users may have override this.\r\n\r\nIf so, I then would suggest to initiate a deprecation cycle.",
"After a discussion with the rest of the datasets team, we agreed we can introduce the breaking change of ignoring `config.HUB_DATASETS_URL`: this will have minimal impact, only for **private Hubs**. We will address eventual possible impacts in the future.\r\n\r\nAdditionally, we also ignore `config.HUB_DEFAULT_VERSION`.\r\n\r\nSee explanation in this PR description: https://github.com/huggingface/datasets/pull/5196#issue-1434401646",
"I'm trying to upgrade datasets to 2.7.0 in https://github.com/huggingface/datasets-server, and the tests fail due to this change. I think it's a breaking change (that was not listed in https://github.com/huggingface/datasets/releases/tag/2.7.0) since code that previously worked (by setting `datasets.config.HUB_DATASETS_URL = CI_HUB_DATASETS_URL` for example) does not work anymore.\r\n\r\nI'm not sure what is the correct way to set up the tests; besides setting the env var \"HF_ENDPOINT\" before launching the tests (which, I think, is not a good way to do: the tests should not depend on the environment).",
"OK, I re-read this thread, and https://github.com/huggingface/datasets/pull/5196#issuecomment-1307430175 explicitely states that `config.HUB_DATASETS_URL` (as well as `config.HUB_DEFAULT_VERSION`) is now ignored. I was expecting the breaking changes to be listed in the release notes: https://github.com/huggingface/datasets/releases/tag/2.7.0.",
"> I'm not sure what is the correct way to set up the tests; besides setting the env var \"HF_ENDPOINT\" before launching the tests (which, I think, is not a good way to do: the tests should not depend on the environment).\r\n\r\nI think the current workaround of settings an env variable before launching the tests is \"not so bad\" when considering the fact that env variables are evaluated at import time in `huggingface_hub` (and most probable `datasets` as well). I think that when refactoring this in huggingface_hub (https://github.com/huggingface/huggingface_hub/issues/1172) I'll opt for instantiating a `Settings` object (or `Constants`) that contains all the settings variables. This way it will not be possible to import attributes individually + tests would be easier. As I see it, it would be similar to [what `Pydantic` does](https://pydantic-docs.helpmanual.io/usage/settings/) even though we most probably don't want Pydantic as a root dependency just for that. ",
"You can use fixtures in your tests:\r\n```python\r\nCI_HUB_ENDPOINT = \"https://hub-ci.huggingface.co\"\r\nCI_HUB_DATASETS_URL = CI_HUB_ENDPOINT + \"/datasets/{repo_id}/resolve/{revision}/{path}\"\r\nCI_HFH_HUGGINGFACE_CO_URL_TEMPLATE = CI_HUB_ENDPOINT + \"/{repo_id}/resolve/{revision}/{filename}\"\r\n\r\n@pytest.fixture\r\ndef ci_hfh_hf_hub_url(monkeypatch):\r\n monkeypatch.setattr(\r\n \"huggingface_hub.file_download.HUGGINGFACE_CO_URL_TEMPLATE\", CI_HFH_HUGGINGFACE_CO_URL_TEMPLATE\r\n )\r\n\r\n@pytest.fixture\r\ndef ci_hub_config(monkeypatch):\r\n monkeypatch.setattr(\"datasets.config.HF_ENDPOINT\", CI_HUB_ENDPOINT)\r\n monkeypatch.setattr(\"datasets.config.HUB_DATASETS_URL\", CI_HUB_DATASETS_URL)\r\n```\r\n\r\nand use `@pytest.fixture(autouse=True)` if you want to always use the CI endpoints.\r\n\r\nAnd when `huggingface-hub` and `datasets` change the way we can set the endpoint, we'll just need to update the fixtures.\r\nI think ultimately you'll only have to change the `huggingface-hub` endpoint settings\r\n",
"OK.\r\n\r\nIn fact, in datasets-server we set `config.HUB_DATASETS_URL` (https://github.com/huggingface/datasets-server/blob/35a30dbcd687b26db1f02502ea8305f70c064473/workers/splits/src/splits/config.py#L26) at config time, before starting the workers. It's not an issue with how to launch the tests, but with the app in itself.\r\n\r\nI understand that for now, the only way to fix this is to setup `HF_ENDPOINT` in the env when launching the app (currently, we set the endpoint with `COMMON_HF_ENDPOINT`, a custom env var I set to be sure not to have side-effects)",
"> You can use fixtures in your tests:\r\n\r\nThanks, used in https://github.com/huggingface/datasets-server/pull/644."
] | "2022-11-03T10:08:09Z" | "2022-12-06T11:38:17Z" | "2022-11-09T07:15:12Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5196.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5196",
"merged_at": "2022-11-09T07:15:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5196.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5196"
} | Small refactoring to use `hf_hub_url` function from `huggingface_hub`.
This PR also creates the `hub` module that will contain all `huggingface_hub` functionalities relevant to `datasets`.
This is a necessary stage before implementing the use of the `hfh` caching system (which uses its `hf_hub_url` under the hood).
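For context, a minimal sketch of the `hfh` helper this switches to (the repo id and filename below are only illustrative):
```python
from huggingface_hub import hf_hub_url

# Resolve the download URL of a file inside a dataset repo on the Hub
url = hf_hub_url("squad", "README.md", repo_type="dataset", revision="main")
# e.g. "https://huggingface.co/datasets/squad/resolve/main/README.md"
```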
EDIT:
~~Finally, we use our `config.HUB_DATASETS_URL` when using `hfh.hf_hub_url`~~
There is a breaking change: the `hfh` `hf_hub_url` function uses
- `hfh` `HUGGINGFACE_CO_URL_TEMPLATE` URL template, different from the `datasets` `config.HUB_DATASETS_URL`
- also, `hfh` `DEFAULT_REVISION`, instead of `datasets` `config.HUB_DEFAULT_VERSION` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5196/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5196/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4057/comments | https://api.github.com/repos/huggingface/datasets/issues/4057/events | https://github.com/huggingface/datasets/issues/4057 | 1,185,442,001 | I_kwDODunzps5GqGjR | 4,057 | `load_dataset` consumes too much memory for audio + tar archives | {
"avatar_url": "https://avatars.githubusercontent.com/u/50839826?v=4",
"events_url": "https://api.github.com/users/JFCeron/events{/privacy}",
"followers_url": "https://api.github.com/users/JFCeron/followers",
"following_url": "https://api.github.com/users/JFCeron/following{/other_user}",
"gists_url": "https://api.github.com/users/JFCeron/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JFCeron",
"id": 50839826,
"login": "JFCeron",
"node_id": "MDQ6VXNlcjUwODM5ODI2",
"organizations_url": "https://api.github.com/users/JFCeron/orgs",
"received_events_url": "https://api.github.com/users/JFCeron/received_events",
"repos_url": "https://api.github.com/users/JFCeron/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JFCeron/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JFCeron/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JFCeron"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! Could it be because you need to free the memory used by `tarfile` by emptying the tar `members` by any chance ?\r\n```python\r\n yield key, {\"audio\": {\"path\": audio_name, \"bytes\": audio_file_obj.read()}}\r\n audio_tarfile.members = [] # free memory\r\n key += 1\r\n```\r\n\r\nand then you can set `DEFAULT_WRITER_BATCH_SIZE` to whatever value makes more sense for your dataset.\r\n\r\nLet me know if the issue persists (which could happen, given that you managed to run your generator without RAM issues and using os.walk didn't solve the issue)",
"Thanks for your reply! Tried it but the issue persists. ",
"I also run out of memory when loading `mozilla-foundation/common_voice_8_0` that also uses `tarfile` via `dl_manager.iter_archive`. There seems to be some data files that stay in memory somewhere\r\n\r\nI don't have the issue with other compression formats like gzipped files",
"I'm facing a similar memory leak issue when loading cv8. As you said @lhoestq \r\n\r\n`load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True, writer_batch_size=1)`\r\n\r\nThis issue is happening on a 32GB RAM machine. \r\n\r\nAny updates on how to fix this?",
"I've run a memory profiler to see where's the leak comes from:\r\n\r\n![image](https://user-images.githubusercontent.com/5097052/165101712-e7060ae5-77b2-4f6a-92bd-2996dbd60b36.png)\r\n\r\n... it seems that it's related to the tarfile lib buffer reader. But I don't know why it's only happening on the huggingface script",
"I have the same problem when loading video into numpy. \r\n```\r\nyield id,{ \r\n \"video\": imageio.v3.imread(video_path),\r\n \"label\": int(label)\r\n}\r\n```\r\nSince video files are heavy, it can only processes a dozen samples before OOM.",
"For video datasets I think you can just define the max number of video that can stay in memory by adding this class attribute to your dataset builer:\r\n```py\r\nDEFAULT_WRITER_BATCH_SIZE = 8 # only 8 videos at a time in memory before flushing the dataset writer\r\n```",
"same thing happens for me with `load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True, writer_batch_size=1)` on azure ml. seems to fill up `tmp` and not release that memory until OOM",
"I'll add that I'm encountering the same issue with\r\n`load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train')`.\r\nSame for `'es'` in place of `'ceb'`.",
"> I'll add that I'm encountering the same issue with\r\n> load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train').\r\n> Same for 'es' in place of 'ceb'.\r\n\r\nThis is because the Apache Beam `DirectRunner` runs with the full data in memory unfortunately. Optimizing the `DirectRunner` is not in the scope of the `datasets` library, but rather in the Apache Beam project I believe. If you have memory issues with the `DirectRunner`, please consider switching to a machine with more RAM, or to distributed processing runtimes like Spark, Flink or DataFlow. There is a bit of documentation here: https://huggingface.co/docs/datasets/beam",
"> > I'll add that I'm encountering the same issue with\r\n> > `load_dataset('wikipedia', 'ceb', runner='DirectRunner', split='train')`.\r\n> > Same for `'es'` in place of `'ceb'`.\r\n> \r\n> This is because the Apache Beam `DirectRunner` runs with the full data in memory unfortunately. Optimizing the `DirectRunner` is not in the scope of the `datasets` library, but rather in the Apache Beam project I believe. If you have memory issues with the `DirectRunner`, please consider switching to a machine with more RAM, or to distributed processing runtimes like Spark, Flink or DataFlow. There is a bit of documentation here: https://huggingface.co/docs/datasets/beam\r\n\r\nFair enough, but this line of code crashed an AWS instance with 1024GB of RAM! I have also tried with `Runner='Flink'` on an environment with 51GB of RAM, which also failed.\r\n\r\nApache Beam has tons of open tickets already - is it worth submitting one to them over this?",
"> Fair enough, but this line of code crashed an AWS instance with 1024GB of RAM!\r\n\r\nWhat, wikipedia is not even bigger than 20GB\r\n\r\ncc @albertvillanova",
"> > Fair enough, but this line of code crashed an AWS instance with 1024GB of RAM!\r\n> \r\n> What, wikipedia is not even bigger than 20GB\r\n> \r\n> cc @albertvillanova\r\n\r\nLuckily, on Colab you can watch the call stack at the bottom of the screen - much of the time and space complexity seems to come from `_parse_and_clean_wikicode()` rather than the actual download process. As far as I can tell, the script is loading the full dataset and then cleaning it all at once, which is consuming a lot of memory.",
"I think we are mixing many different bugs in this Issue page:\r\n- TAR archive with audio files\r\n- video file\r\n- distributed parsing of Wikipedia using Apache Beam\r\n\r\n@dan-the-meme-man may I ask you to open a separate Issue for your problem? Then I will address it. It is important to fix it because we are currently working on a Datasets enhancement to be able to provide all Wikipedias already preprocessed.\r\n\r\nOn the other hand, I think we could keep this Issue page for the original problem: TAR archive with audio files. That is not fixed yet either.",
"Is there an update on the TAR archive issue with audio files? Happy to lend a hand in fixing this :)",
"I found the issue with Common Voice 8 and opened a PR to fix it: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0/discussions/2\r\n\r\nBasically the `metadata` dict that contains the transcripts per audio file was continuously getting filled with bytes from `f.read()` because of this code:\r\n```python\r\nresult = metadata[path]\r\nresult[\"audio\"] = {\"path\": path, \"bytes\": f.read()}\r\n```\r\ncopying the result with `result = dict(metadata[path])` fixes it: the bytes are no longer added to `metadata`\r\n\r\nI also opened PRs to the other CV datasets",
"Amazing, that's a great find! Thanks @lhoestq!",
"I'm closing this one for now, but feel free to reopen if you encounter other memory issues with audio datasets"
] | "2022-03-29T21:38:55Z" | "2022-08-16T10:22:55Z" | "2022-08-16T10:22:55Z" | NONE | null | null | null |
## Description
`load_dataset` consumes more and more memory until it is killed, even though the examples are produced by a generator. I'm adding a loading script for a new dataset, made up of ~15s audio clips coming from a tar file. I tried setting `DEFAULT_WRITER_BATCH_SIZE = 1` as per the discussion in #741, but the problem persists.
## Steps to reproduce the bug
Here's my implementation of `_generate_examples`:
```python
class MyDatasetBuilder(datasets.GeneratorBasedBuilder):
DEFAULT_WRITER_BATCH_SIZE = 1
...
def _split_generators(self, dl_manager):
archive_path = dl_manager.download(_DL_URLS[self.config.name])
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"audio_tarfile_path": archive_path["audio_tarfile"]
},
),
]
def _generate_examples(self, audio_tarfile_path):
key = 0
with tarfile.open(audio_tarfile_path, mode="r|") as audio_tarfile:
for audio_tarinfo in audio_tarfile:
audio_name = audio_tarinfo.name
audio_file_obj = audio_tarfile.extractfile(audio_tarinfo)
yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}}
key += 1
```
I then try to load via `ds = load_dataset('./datasets/my_new_dataset', writer_batch_size=1)`, and memory usage grows until all 8 GB of my machine are taken and the process is killed (`Killed`). I also tried an untarred version of this using `os.walk`, but the same thing happened.
I created a script to confirm that one can safely go through such a generator, which runs just fine with memory <500MB at all times.
```python
import tarfile
def generate_examples():
audio_tarfile = tarfile.open("audios.tar", mode="r|")
key = 0
for audio_tarinfo in audio_tarfile:
audio_name = audio_tarinfo.name
audio_file_obj = audio_tarfile.extractfile(audio_tarinfo)
yield key, {"audio": {"path": audio_name, "bytes": audio_file_obj.read()}}
key += 1
if __name__ == "__main__":
examples = generate_examples()
for example in examples:
pass
```
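A small variant of the check above that also reports peak resident memory at the end, to make the "<500MB" observation reproducible (standard library only; on Linux `ru_maxrss` is in kilobytes):
```python
import resource
import tarfile

def generate_examples():
    # Same streaming pattern as above: read one member at a time
    with tarfile.open("audios.tar", mode="r|") as audio_tarfile:
        for key, audio_tarinfo in enumerate(audio_tarfile):
            audio_file_obj = audio_tarfile.extractfile(audio_tarinfo)
            yield key, {"audio": {"path": audio_tarinfo.name, "bytes": audio_file_obj.read()}}

if __name__ == "__main__":
    for _ in generate_examples():
        pass
    # Peak resident set size of this process (kilobytes on Linux)
    print("peak RSS:", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
```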
## Expected results
Memory consumption should be similar to the non-huggingface script.
## Actual results
Process is killed after consuming too much memory.
## Environment info
- `datasets` version: 2.0.1.dev0
- Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- PyArrow version: 6.0.1
- Pandas version: 1.3.5 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4057/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4057/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2927/comments | https://api.github.com/repos/huggingface/datasets/issues/2927/events | https://github.com/huggingface/datasets/issues/2927 | 997,654,680 | I_kwDODunzps47dwCY | 2,927 | Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timothyjlaurent",
"id": 2000204,
"login": "timothyjlaurent",
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timothyjlaurent"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"Thanks for reporting, I'm looking into it :)",
"Fixed by #2950."
] | "2021-09-16T01:14:02Z" | "2021-09-20T16:23:22Z" | "2021-09-20T16:23:21Z" | NONE | null | null | null | ## Describe the bug
Upgrading to 1.12 caused the `dataset.filter` call to fail with
> get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels
## Steps to reproduce the bug
```python
def filter_good_rows(
ex: Dict,
valid_rel_labels: Set[str],
valid_ner_labels: Set[str],
tokenizer: PreTrainedTokenizerFast,
) -> bool:
"""Get the good rows"""
encoding = get_encoding_for_text(text=ex["text"], tokenizer=tokenizer)
ex["encoding"] = encoding
for relation in ex["relations"]:
if not is_valid_relation(relation, valid_rel_labels):
return False
for span in ex["spans"]:
if not is_valid_span(span, valid_ner_labels, encoding):
return False
return True
def get_dataset():
loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py")
ds = load_dataset(
loader_path,
name="prodigy-dataset",
data_files=sorted(file_paths),
cache_dir=cache_dir,
)["train"]
valid_ner_labels = set(vocab.ner_category)
valid_relations = set(vocab.relation_types.keys())
ds = ds.filter(
filter_good_rows,
fn_kwargs=dict(
valid_rel_labels=valid_relations,
valid_ner_labels=valid_ner_labels,
tokenizer=vocab.tokenizer,
),
keep_in_memory=True,
num_proc=num_proc,
)
```
`ds` is a `DatasetDict` produced by a jsonl dataset.
This runs fine on 1.11 but fails on 1.12
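A smaller, self-contained call with the same shape (toy data and a hypothetical `min_len` keyword, just to illustrate the `fn_kwargs` pattern that breaks):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb", "ccc"]})

def long_enough(ex, min_len):
    return len(ex["text"]) >= min_len

# Works on 1.11; on 1.12 a call like this reportedly raises:
# TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'min_len'
ds = ds.filter(long_enough, fn_kwargs={"min_len": 2})
```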
**Stack Trace**
## Expected results
I expect `filter` in datasets 1.12 to filter the dataset without raising, as it does in 1.11.
## Actual results
```
tf_ner_rel_lib/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl
ds = ds.filter(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper
out = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2169: in filter
indices = self.map(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1686: in map
return self._map_single(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper
out = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2048: in _map_single
batch = apply_function_on_filtered_inputs(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
inputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...}
indices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0
def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0):
"""Utility to apply the function on a selection of columns."""
nonlocal update_data
fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
if offset == 0:
effective_indices = indices
else:
effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
processed_inputs = (
> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
)
E TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels'
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1939: TypeError
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Mac
- Python version: 3.8.9
- PyArrow version: pyarrow==5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2927/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2927/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5815/comments | https://api.github.com/repos/huggingface/datasets/issues/5815/events | https://github.com/huggingface/datasets/issues/5815 | 1,693,701,743 | I_kwDODunzps5k89Zv | 5,815 | Easy way to create a Kaggle dataset from a Huggingface dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/5355286?v=4",
"events_url": "https://api.github.com/users/hrbigelow/events{/privacy}",
"followers_url": "https://api.github.com/users/hrbigelow/followers",
"following_url": "https://api.github.com/users/hrbigelow/following{/other_user}",
"gists_url": "https://api.github.com/users/hrbigelow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hrbigelow",
"id": 5355286,
"login": "hrbigelow",
"node_id": "MDQ6VXNlcjUzNTUyODY=",
"organizations_url": "https://api.github.com/users/hrbigelow/orgs",
"received_events_url": "https://api.github.com/users/hrbigelow/received_events",
"repos_url": "https://api.github.com/users/hrbigelow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hrbigelow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hrbigelow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hrbigelow"
} | [] | open | false | null | [] | null | [
"Hi @hrbigelow , I'm no expert for such a question so I'll ping @lhoestq from the `datasets` library (also this issue could be moved there if someone with permission can do it :) )",
"Hi ! Many datasets are made of several files, and how they are parsed often requires a python script. Because of that, datasets like wmt14 are not available as a single file on HF. Though you can create this file using `datasets`:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"wmt14\", \"de-en\", split=\"train\")\r\n\r\nds.to_json(\"wmt14-train.json\")\r\n# OR to parquet, which is compressed:\r\n# ds.to_parquet(\"wmt14-train.parquet\")\r\n```\r\n\r\nWe are also working on providing parquet exports for all datasets, but wmt14 is not supported yet (we're rolling it out for datasets <1GB first). They're usually available in the `refs/convert/parquet` branch (empty for wmt14):\r\n\r\n<img width=\"267\" alt=\"image\" src=\"https://user-images.githubusercontent.com/42851186/235878909-7339f5a4-be19-4ada-85d8-8a50d23acf35.png\">\r\n",
"also cc @nateraw for visibility on this (and cc @osanseviero too)",
"I've requested support for creating a Kaggle dataset from an imported HF dataset repo on their \"forum\" here: https://www.kaggle.com/discussions/product-feedback/427142 (upvotes appreciated 🙂)"
] | "2023-05-02T21:43:33Z" | "2023-07-26T16:13:31Z" | null | NONE | null | null | null | I'm not sure whether this is more appropriately addressed with HuggingFace or Kaggle. I would like to somehow directly create a Kaggle dataset from a HuggingFace Dataset.
While Kaggle does provide the option to create a dataset from a URI, that URI must point to a single file. For example:
![image](https://user-images.githubusercontent.com/5355286/235792394-7c559d07-4aff-45b7-ad2b-9c5280c88415.png)
Is there some mechanism from huggingface to represent a dataset (such as the one from `load_dataset('wmt14', 'de-en', split='train')`) as a single file? Or some other way to get it into a Kaggle dataset so that I can use the huggingface `datasets` module to process and consume it inside a Kaggle notebook?
Thanks in advance!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5815/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5815/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4670/comments | https://api.github.com/repos/huggingface/datasets/issues/4670/events | https://github.com/huggingface/datasets/issues/4670 | 1,299,984,246 | I_kwDODunzps5NfC92 | 4,670 | Can't extract files from `.7z` zipfile using `download_and_extract` | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhavitvyamalik",
"id": 19718818,
"login": "bhavitvyamalik",
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhavitvyamalik"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi @bhavitvyamalik, thanks for reporting.\r\n\r\nYes, currently we do not support 7zip archive compression: I think we should.\r\n\r\nAs a workaround, you could uncompress it explicitly, like done in e.g. `samsum` dataset: \r\n\r\nhttps://github.com/huggingface/datasets/blob/fedf891a08bfc77041d575fad6c26091bc0fce52/datasets/samsum/samsum.py#L106-L110\r\n",
"Related to this issue: https://github.com/huggingface/datasets/issues/3541",
"Sure, let me look into and check what can be done. Will keep you guys updated here!",
"Initially, I thought of solving this without any external dependency. Almost everywhere I saw `lzma` can be used for this but there is a caveat that lzma doesn’t work with 7z archives but only single files. In my case the 7z archive has multiple files so it didn't work. Is it fine to use external library here?",
"Hi @bhavitvyamalik, thanks for your investigation.\r\n\r\nOn Monday, I started a PR that will eventually close this issue as well: I'm linking it to this.\r\n- #4672\r\n\r\nLet me know what you think. "
] | "2022-07-10T18:16:49Z" | "2022-07-15T13:02:07Z" | "2022-07-15T13:02:07Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
I'm adding a new dataset that is a `.7z` archive on Google Drive and contains 3 JSON files. I'm able to download the data files using `download_and_extract`, but after downloading it throws this error:
```
>>> dataset = load_dataset("./datasets/mantis/")
Using custom data configuration default
Downloading and preparing dataset mantis/default to /Users/bhavitvyamalik/.cache/huggingface/datasets/mantis/default/1.1.0/611affa804ec53e2055a335cc1b8b213bb5a0b5142d919967729d5ee23c6bab4...
Downloading data: 100%|█████████████████████████████████████████████████████████| 77.2M/77.2M [00:23<00:00, 3.28MB/s]
/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/load.py", line 1745, in load_dataset
use_auth_token=use_auth_token,
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 595, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 690, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: '/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6/merged_train.json'
```
just before generating the splits. I checked the `fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6` file and it's a `7z` archive (same as the downloaded Google Drive file), which means it didn't get extracted. Do I need to extract it separately and then pass the paths for the train, dev and test files in `SplitGenerator`?
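A rough sketch of the explicit-extraction workaround mentioned in the comments (it relies on the optional `py7zr` dependency, like the `samsum` script; `_DL_URL` and the extraction directory are placeholders):
```python
import os

import py7zr  # optional dependency, needed only for .7z archives

import datasets


class MyDatasetBuilder(datasets.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager):
        # .7z archives are downloaded but not auto-extracted, so extract them explicitly
        archive_path = dl_manager.download(_DL_URL)
        extracted_dir = os.path.join(os.path.dirname(archive_path), "extracted")
        if not os.path.isdir(extracted_dir):
            with py7zr.SevenZipFile(archive_path, mode="r") as archive:
                archive.extractall(path=extracted_dir)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": os.path.join(extracted_dir, "merged_train.json")},
            ),
        ]
```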
## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.8
- PyArrow version: 5.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4670/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4670/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4151/comments | https://api.github.com/repos/huggingface/datasets/issues/4151/events | https://github.com/huggingface/datasets/pull/4151 | 1,201,837,999 | PR_kwDODunzps42GgLu | 4,151 | Add missing label for emotion description | {
"avatar_url": "https://avatars.githubusercontent.com/u/44396506?v=4",
"events_url": "https://api.github.com/users/lijiazheng99/events{/privacy}",
"followers_url": "https://api.github.com/users/lijiazheng99/followers",
"following_url": "https://api.github.com/users/lijiazheng99/following{/other_user}",
"gists_url": "https://api.github.com/users/lijiazheng99/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lijiazheng99",
"id": 44396506,
"login": "lijiazheng99",
"node_id": "MDQ6VXNlcjQ0Mzk2NTA2",
"organizations_url": "https://api.github.com/users/lijiazheng99/orgs",
"received_events_url": "https://api.github.com/users/lijiazheng99/received_events",
"repos_url": "https://api.github.com/users/lijiazheng99/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lijiazheng99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lijiazheng99/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lijiazheng99"
} | [] | closed | false | null | [] | null | [] | "2022-04-12T13:17:37Z" | "2022-04-12T13:58:50Z" | "2022-04-12T13:58:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4151",
"merged_at": "2022-04-12T13:58:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4151"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4151/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4194/comments | https://api.github.com/repos/huggingface/datasets/issues/4194/events | https://github.com/huggingface/datasets/pull/4194 | 1,210,958,602 | PR_kwDODunzps42jjD3 | 4,194 | Support lists of multi-dimensional numpy arrays | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-04-21T12:22:26Z" | "2022-05-12T15:16:34Z" | "2022-05-12T15:08:40Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4194.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4194",
"merged_at": "2022-05-12T15:08:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4194.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4194"
} | Fix #4191.
CC: @SaulLu | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4194/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6191/comments | https://api.github.com/repos/huggingface/datasets/issues/6191/events | https://github.com/huggingface/datasets/pull/6191 | 1,871,634,840 | PR_kwDODunzps5ZCKmv | 6,191 | Add missing `revision` argument | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I have found the same issue. Good fix. Should be merged as soon as possible.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006258 / 0.011353 (-0.005095) | 0.003717 / 0.011008 (-0.007291) | 0.079444 / 0.038508 (0.040936) | 0.066318 / 0.023109 (0.043209) | 0.310129 / 0.275898 (0.034231) | 0.346948 / 0.323480 (0.023469) | 0.003505 / 0.007986 (-0.004480) | 0.002855 / 0.004328 (-0.001474) | 0.062447 / 0.004250 (0.058197) | 0.050191 / 0.037052 (0.013139) | 0.314550 / 0.258489 (0.056061) | 0.357883 / 0.293841 (0.064042) | 0.027754 / 0.128546 (-0.100792) | 0.008068 / 0.075646 (-0.067578) | 0.262170 / 0.419271 (-0.157102) | 0.045834 / 0.043533 (0.002301) | 0.306938 / 0.255139 (0.051799) | 0.339229 / 0.283200 (0.056030) | 0.021188 / 0.141683 (-0.120495) | 1.430904 / 1.452155 (-0.021251) | 1.542038 / 1.492716 (0.049321) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201232 / 0.018006 (0.183226) | 0.432848 / 0.000490 (0.432358) | 0.002403 / 0.000200 (0.002203) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024068 / 0.037411 (-0.013344) | 0.074077 / 0.014526 (0.059551) | 0.083578 / 0.176557 (-0.092978) | 0.144497 / 0.737135 (-0.592638) | 0.085386 / 0.296338 (-0.210952) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397912 / 0.215209 (0.182702) | 3.940953 / 2.077655 (1.863299) | 1.935914 / 1.504120 (0.431794) | 1.753688 / 1.541195 (0.212493) | 1.832916 / 1.468490 
(0.364426) | 0.503320 / 4.584777 (-4.081457) | 3.068693 / 3.745712 (-0.677019) | 2.867543 / 5.269862 (-2.402318) | 1.876265 / 4.565676 (-2.689412) | 0.057234 / 0.424275 (-0.367041) | 0.006753 / 0.007607 (-0.000854) | 0.468456 / 0.226044 (0.242411) | 4.681671 / 2.268929 (2.412742) | 2.445141 / 55.444624 (-52.999483) | 2.182366 / 6.876477 (-4.694110) | 2.399365 / 2.142072 (0.257293) | 0.591880 / 4.805227 (-4.213347) | 0.126176 / 6.500664 (-6.374488) | 0.061488 / 0.075469 (-0.013982) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244013 / 1.841788 (-0.597775) | 18.534720 / 8.074308 (10.460412) | 13.853267 / 10.191392 (3.661875) | 0.154167 / 0.680424 (-0.526257) | 0.016685 / 0.534201 (-0.517515) | 0.331044 / 0.579283 (-0.248239) | 0.341399 / 0.434364 (-0.092965) | 0.378878 / 0.540337 (-0.161459) | 0.535707 / 1.386936 (-0.851230) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006284 / 0.011353 (-0.005069) | 0.003707 / 0.011008 (-0.007301) | 0.062481 / 0.038508 (0.023973) | 0.063342 / 0.023109 (0.040233) | 0.445465 / 0.275898 (0.169567) | 0.482021 / 0.323480 (0.158541) | 0.004909 / 0.007986 (-0.003076) | 0.002908 / 0.004328 (-0.001420) | 0.063111 / 0.004250 (0.058860) | 0.050197 / 0.037052 (0.013145) | 0.453367 / 0.258489 (0.194878) | 0.485249 / 0.293841 (0.191408) | 0.028532 / 0.128546 (-0.100014) | 0.008157 / 0.075646 (-0.067490) | 0.068033 / 0.419271 (-0.351238) | 0.041093 / 0.043533 (-0.002440) | 0.446555 / 0.255139 (0.191416) | 0.469103 / 0.283200 (0.185904) | 0.019529 / 0.141683 (-0.122154) | 1.503135 / 1.452155 (0.050980) | 1.545819 / 1.492716 (0.053103) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257274 / 0.018006 (0.239268) | 0.418643 / 0.000490 (0.418153) | 0.011604 / 0.000200 (0.011405) | 0.000094 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026286 / 0.037411 (-0.011125) | 0.082459 / 0.014526 (0.067933) | 0.090007 / 0.176557 (-0.086550) | 0.144963 / 0.737135 (-0.592173) | 0.093236 / 0.296338 (-0.203102) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456331 / 0.215209 (0.241122) | 4.559469 / 2.077655 (2.481814) | 2.503452 / 1.504120 (0.999333) | 2.326579 / 1.541195 (0.785384) | 2.387551 / 1.468490 (0.919061) | 0.508683 / 4.584777 (-4.076094) | 3.071293 / 3.745712 (-0.674419) | 2.872820 / 5.269862 (-2.397041) | 1.891674 / 4.565676 (-2.674003) | 0.058951 / 0.424275 (-0.365324) | 0.006493 / 0.007607 (-0.001114) | 0.526747 / 0.226044 (0.300703) | 5.279985 / 2.268929 (3.011057) | 2.986146 / 55.444624 (-52.458478) | 2.603462 / 6.876477 (-4.273015) | 2.766776 / 2.142072 (0.624704) | 0.594685 / 4.805227 (-4.210542) | 0.125174 / 6.500664 (-6.375490) | 0.061430 / 0.075469 (-0.014039) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350012 / 1.841788 (-0.491776) | 18.991941 / 8.074308 (10.917633) | 14.903483 / 10.191392 (4.712091) | 0.145918 / 0.680424 (-0.534506) | 0.017766 / 0.534201 (-0.516435) | 0.335350 / 0.579283 (-0.243933) | 0.357936 / 0.434364 (-0.076428) | 0.392355 / 0.540337 (-0.147983) | 0.545787 / 1.386936 (-0.841149) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#439e115d34a2d8737af719660c1b586ac32279dc \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005927 / 0.011353 (-0.005426) | 0.003497 / 0.011008 (-0.007512) | 0.079802 / 0.038508 (0.041294) | 0.058994 / 0.023109 (0.035885) | 0.309349 / 0.275898 (0.033451) | 0.344876 / 0.323480 (0.021396) | 0.004631 / 0.007986 (-0.003354) | 0.002814 / 0.004328 (-0.001515) | 0.062228 / 0.004250 (0.057978) | 0.046001 / 0.037052 (0.008949) | 0.312196 / 0.258489 (0.053707) | 0.356283 / 0.293841 (0.062442) | 0.027264 / 0.128546 (-0.101282) | 0.007992 / 0.075646 (-0.067654) | 0.260746 / 0.419271 (-0.158526) | 0.045112 / 0.043533 (0.001579) | 0.310463 / 0.255139 (0.055324) | 0.336456 / 0.283200 (0.053256) | 0.020364 / 0.141683 (-0.121319) | 1.482159 / 1.452155 (0.030005) | 1.541586 / 1.492716 (0.048870) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185035 / 0.018006 (0.167028) | 0.432104 / 0.000490 (0.431615) | 0.002911 / 0.000200 (0.002711) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023674 / 0.037411 (-0.013737) | 0.072462 / 0.014526 (0.057936) | 0.080154 / 0.176557 (-0.096402) | 0.143022 / 0.737135 (-0.594114) | 0.082909 / 0.296338 (-0.213430) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436977 / 0.215209 (0.221768) | 4.359633 / 2.077655 (2.281979) | 2.321479 / 1.504120 (0.817359) | 2.115277 / 1.541195 (0.574082) | 2.172303 / 1.468490 
(0.703813) | 0.495735 / 4.584777 (-4.089042) | 3.006773 / 3.745712 (-0.738939) | 2.866560 / 5.269862 (-2.403302) | 1.839339 / 4.565676 (-2.726337) | 0.056925 / 0.424275 (-0.367350) | 0.006777 / 0.007607 (-0.000830) | 0.507217 / 0.226044 (0.281172) | 5.064933 / 2.268929 (2.796004) | 2.737542 / 55.444624 (-52.707082) | 2.386227 / 6.876477 (-4.490250) | 2.566375 / 2.142072 (0.424302) | 0.582965 / 4.805227 (-4.222262) | 0.124715 / 6.500664 (-6.375949) | 0.061560 / 0.075469 (-0.013909) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295684 / 1.841788 (-0.546103) | 18.178345 / 8.074308 (10.104037) | 13.795886 / 10.191392 (3.604494) | 0.131464 / 0.680424 (-0.548960) | 0.016808 / 0.534201 (-0.517393) | 0.334190 / 0.579283 (-0.245093) | 0.347358 / 0.434364 (-0.087006) | 0.386198 / 0.540337 (-0.154139) | 0.527807 / 1.386936 (-0.859129) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006118 / 0.011353 (-0.005235) | 0.003634 / 0.011008 (-0.007374) | 0.062117 / 0.038508 (0.023609) | 0.061407 / 0.023109 (0.038298) | 0.448047 / 0.275898 (0.172149) | 0.483382 / 0.323480 (0.159902) | 0.004849 / 0.007986 (-0.003137) | 0.002859 / 0.004328 (-0.001469) | 0.061714 / 0.004250 (0.057463) | 0.047959 / 0.037052 (0.010907) | 0.452038 / 0.258489 (0.193549) | 0.485206 / 0.293841 (0.191365) | 0.028254 / 0.128546 (-0.100292) | 0.008055 / 0.075646 (-0.067591) | 0.067752 / 0.419271 (-0.351519) | 0.040355 / 0.043533 (-0.003178) | 0.446986 / 0.255139 (0.191847) | 0.472554 / 0.283200 (0.189354) | 0.019461 / 0.141683 (-0.122222) | 1.459048 / 1.452155 (0.006893) | 1.497283 / 1.492716 (0.004566) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241788 / 0.018006 (0.223782) | 0.457352 / 0.000490 (0.456862) | 0.003841 / 0.000200 (0.003641) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026429 / 0.037411 (-0.010982) | 0.081604 / 0.014526 (0.067078) | 0.092881 / 0.176557 (-0.083675) | 0.146057 / 0.737135 (-0.591078) | 0.092987 / 0.296338 (-0.203352) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456641 / 0.215209 (0.241432) | 4.567853 / 2.077655 (2.490198) | 2.491684 / 1.504120 (0.987564) | 2.323647 / 1.541195 (0.782452) | 2.387689 / 1.468490 (0.919198) | 0.505114 / 4.584777 (-4.079663) | 3.071615 / 3.745712 (-0.674098) | 2.912391 / 5.269862 (-2.357471) | 1.922350 / 4.565676 (-2.643326) | 0.057785 / 0.424275 (-0.366490) | 0.006642 / 0.007607 (-0.000965) | 0.532463 / 0.226044 (0.306418) | 5.344084 / 2.268929 (3.075155) | 2.970182 / 55.444624 (-52.474442) | 2.601733 / 6.876477 (-4.274744) | 2.763803 / 2.142072 (0.621731) | 0.596333 / 4.805227 (-4.208894) | 0.127047 / 6.500664 (-6.373617) | 0.062516 / 0.075469 (-0.012953) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.343206 / 1.841788 (-0.498581) | 19.405215 / 8.074308 (11.330907) | 15.406568 / 10.191392 (5.215176) | 0.132328 / 0.680424 (-0.548096) | 0.017882 / 0.534201 (-0.516318) | 0.336393 / 0.579283 (-0.242890) | 0.361989 / 0.434364 (-0.072375) | 0.394336 / 0.540337 (-0.146001) | 0.545166 / 1.386936 (-0.841770) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#439e115d34a2d8737af719660c1b586ac32279dc \"CML watermark\")\n"
] | "2023-08-29T13:05:04Z" | "2023-09-04T06:38:17Z" | "2023-08-31T13:50:00Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6191.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6191",
"merged_at": "2023-08-31T13:50:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6191.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6191"
} | I've noticed that when you're not working on the main branch, there are sometimes errors in the files returned. After some investigation, I realized that the revision was not properly passed everywhere. This PR proposes a fix. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6191/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1801/comments | https://api.github.com/repos/huggingface/datasets/issues/1801/events | https://github.com/huggingface/datasets/pull/1801 | 797,814,275 | MDExOlB1bGxSZXF1ZXN0NTY0NzMwODYw | 1,801 | [GEM] Updated the source link of the data to update correct tokenized version. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"events_url": "https://api.github.com/users/mounicam/events{/privacy}",
"followers_url": "https://api.github.com/users/mounicam/followers",
"following_url": "https://api.github.com/users/mounicam/following{/other_user}",
"gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mounicam",
"id": 11708999,
"login": "mounicam",
"node_id": "MDQ6VXNlcjExNzA4OTk5",
"organizations_url": "https://api.github.com/users/mounicam/orgs",
"received_events_url": "https://api.github.com/users/mounicam/received_events",
"repos_url": "https://api.github.com/users/mounicam/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mounicam/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mounicam"
} | [] | closed | false | null | [] | null | [
"@mounicam we'll keep the original version in the Turk dataset proper, and use the updated file in the GEM aggregated dataset which I'll add later today\r\n\r\n@lhoestq do not merge, I'll close when I've submitted the GEM dataset PR :) ",
"Closed by https://github.com/huggingface/datasets/pull/1807"
] | "2021-01-31T21:17:19Z" | "2021-02-02T13:17:38Z" | "2021-02-02T13:17:28Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1801.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1801",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1801.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1801"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1801/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1801/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2459/comments | https://api.github.com/repos/huggingface/datasets/issues/2459/events | https://github.com/huggingface/datasets/issues/2459 | 915,222,015 | MDU6SXNzdWU5MTUyMjIwMTU= | 2,459 | `Proto_qa` hosting seems to be broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VictorSanh",
"id": 16107619,
"login": "VictorSanh",
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VictorSanh"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"@VictorSanh , I think @mariosasko is already working on it. "
] | "2021-06-08T16:16:32Z" | "2021-06-10T08:31:09Z" | "2021-06-10T08:31:09Z" | MEMBER | null | null | null | ## Describe the bug
The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now.
@zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py`
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("proto_qa")
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset
use_auth_token=use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 630, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/hf/.cache/huggingface/modules/datasets_modules/datasets/proto_qa/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e/proto_qa.py", line 131, in _split_generators
train_fpath = dl_manager.download(_URLs[self.config.name]["train"])
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 199, in download
num_proc=download_config.num_proc,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 218, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2459/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4505/comments | https://api.github.com/repos/huggingface/datasets/issues/4505/events | https://github.com/huggingface/datasets/pull/4505 | 1,272,477,226 | PR_kwDODunzps45uH-o | 4,505 | Fix double dots in data files | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI fails are unrelated to this PR (apparently something related to `seqeval` on windows) - merging :)"
] | "2022-06-15T16:31:04Z" | "2022-06-15T17:15:58Z" | "2022-06-15T17:05:53Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4505.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4505",
"merged_at": "2022-06-15T17:05:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4505.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4505"
} | As mentioned in https://github.com/huggingface/transformers/pull/17715, `data_files` can't find a file if the path contains double dots `/../`. This was introduced in https://github.com/huggingface/datasets/pull/4412, which tries to ignore hidden files and directories (i.e. those that start with a dot).
I fixed this and added a test
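For context, a minimal illustration of the kind of call that was affected (the relative path below is just a hypothetical example):

```python
from datasets import load_dataset

# Hypothetical layout: the CSV lives one directory above the working directory.
# Before this fix, the leading ".." made the resolved path look like a hidden
# file/directory (the filter added in #4412), so no data files were found.
ds = load_dataset("csv", data_files="../data/train.csv")
```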
cc @sgugger @ydshieh | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4505/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6182/comments | https://api.github.com/repos/huggingface/datasets/issues/6182/events | https://github.com/huggingface/datasets/issues/6182 | 1,867,203,131 | I_kwDODunzps5vS0I7 | 6,182 | Loading Meteor metric in HF evaluate module crashes due to datasets import issue | {
"avatar_url": "https://avatars.githubusercontent.com/u/42322648?v=4",
"events_url": "https://api.github.com/users/dsashulya/events{/privacy}",
"followers_url": "https://api.github.com/users/dsashulya/followers",
"following_url": "https://api.github.com/users/dsashulya/following{/other_user}",
"gists_url": "https://api.github.com/users/dsashulya/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dsashulya",
"id": 42322648,
"login": "dsashulya",
"node_id": "MDQ6VXNlcjQyMzIyNjQ4",
"organizations_url": "https://api.github.com/users/dsashulya/orgs",
"received_events_url": "https://api.github.com/users/dsashulya/received_events",
"repos_url": "https://api.github.com/users/dsashulya/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dsashulya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsashulya/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dsashulya"
} | [] | closed | false | null | [] | null | [
"Our minimal Python version requirement is 3.8, so we dropped `importlib_metadata`. \r\n\r\nFeel free to open a PR in the `evaluate` repo to replace the problematic import with\r\n```python\r\nif PY_VERSION < version.parse(\"3.8\"):\r\n import importlib_metadata\r\nelse:\r\n import importlib.metadata as importlib_metadata\r\n```",
"Any idea when you guys will release the next version which deals with this problem?\r\nI'm still having the same issue with py 3.10 when I install the lib with pip.\r\nI'm assuming that it has not yet been updated since the merge was 3 days ago.",
"Yes, this requires a new `evaluate` release (cc @lvwerra for this). \r\n\r\nIn the meantime, you can get the fixed version by installing `evaluate` from `main`: `pip install git+https://github.com/huggingface/evaluate.git`",
"I'll aim for a release this week!"
] | "2023-08-25T14:54:06Z" | "2023-09-04T16:41:11Z" | "2023-08-31T14:38:23Z" | NONE | null | null | null | ### Describe the bug
When using Python 3.9 and the ```evaluate``` module, loading the Meteor metric crashes due to a non-existent import from ```datasets.config``` in ```datasets v2.14```
### Steps to reproduce the bug
```
from evaluate import load
meteor = load("meteor")
```
produces the following error:
```
from datasets.config import importlib_metadata, version
ImportError: cannot import name 'importlib_metadata' from 'datasets.config' (<path_to_project>/venv/lib/python3.9/site-packages/datasets/config.py)
```
### Expected behavior
```datasets``` v2.10 has the following workaround in ```config.py```:
```
if PY_VERSION < version.parse("3.8"):
import importlib_metadata
else:
import importlib.metadata as importlib_metadata
```
However, it's absent in v2.14 which might be the cause of the issue.
### Environment info
- `datasets` version: 2.14.4
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.9.6
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- Evaluate version: 0.4.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6182/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6182/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3957/comments | https://api.github.com/repos/huggingface/datasets/issues/3957/events | https://github.com/huggingface/datasets/pull/3957 | 1,172,401,455 | PR_kwDODunzps40magW | 3,957 | Fix xtreme s metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [
"Sorry for the commit history mess, but will be squashed anyways so should be fine",
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-17T13:39:04Z" | "2022-03-18T13:46:19Z" | "2022-03-18T13:42:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3957.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3957",
"merged_at": "2022-03-18T13:42:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3957.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3957"
} | We in fact do need BABEL in xtreme-s | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3957/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3957/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3677/comments | https://api.github.com/repos/huggingface/datasets/issues/3677/events | https://github.com/huggingface/datasets/issues/3677 | 1,123,192,866 | I_kwDODunzps5C8pAi | 3,677 | Discovery cannot be streamed anymore | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Seems like a regression from https://github.com/huggingface/datasets/pull/2843\r\n\r\nOr maybe it's an issue with the hosting. I don't think so, though, because https://www.dropbox.com/s/aox84z90nyyuikz/discovery.zip seems to work as expected\r\n\r\n",
"Hi @severo, thanks for reporting.\r\n\r\nSome servers do not support HTTP range requests, and those are required to stream some file formats (like ZIP in this case).\r\n\r\nLet me try to propose a workaround. "
] | "2022-02-03T15:02:03Z" | "2022-02-10T16:51:24Z" | "2022-02-10T16:51:24Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
iterable_dataset = load_dataset("discovery", name="discovery", split="train", streaming=True)
list(iterable_dataset.take(1))
```
## Expected results
The first row of the train split.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 365, in __iter__
for key, example in self._iter():
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 362, in _iter
yield from ex_iterable
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 272, in __iter__
yield from islice(self.ex_iterable, self.n)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 79, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/discovery/542fab7a9ddc1d9726160355f7baa06a1ccc44c40bc8e12c09e9bc743aca43a2/discovery.py", line 333, in _generate_examples
with open(data_file, encoding="utf8") as f:
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/streaming.py", line 64, in wrapper
return function(*args, use_auth_token=use_auth_token, **kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 369, in xopen
file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 456, in open
return open_files(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 288, in open_files
fs, fs_token, paths = get_fs_token_paths(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/core.py", line 611, in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 253, in filesystem
return cls(**storage_options)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 68, in __call__
obj = super().__call__(*args, **kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__
self.zip = zipfile.ZipFile(self.fo)
File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 1257, in __init__
self._RealGetContents()
File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 1320, in _RealGetContents
endrec = _EndRecData(fp)
File "/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/zipfile.py", line 263, in _EndRecData
fpin.seek(0, 2)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py", line 676, in seek
raise ValueError("Cannot seek streaming HTTP file")
ValueError: Cannot seek streaming HTTP file
```
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-5.11.0-1027-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3677/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3677/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4904/comments | https://api.github.com/repos/huggingface/datasets/issues/4904/events | https://github.com/huggingface/datasets/pull/4904 | 1,353,002,837 | PR_kwDODunzps4959Ad | 4,904 | [LibriSpeech] Fix dev split local_extracted_archive for 'all' config | {
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sanchit-gandhi",
"id": 93869735,
"login": "sanchit-gandhi",
"node_id": "U_kgDOBZhWpw",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sanchit-gandhi"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR fixes a bug introduced in:\r\n- #4184"
] | "2022-08-27T10:04:57Z" | "2022-08-30T10:06:21Z" | "2022-08-30T10:03:25Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4904.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4904",
"merged_at": "2022-08-30T10:03:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4904.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4904"
} | We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`:
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L60-L61
These keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`.
However, when calling `SplitGenerator` for the dev sets, we query the `local_extracted_archive` keys `validation.clean` and `validation.other`:
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L212
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L219
The consequence of this is that the `local_extracted_archive` arg passed to `_generate_examples` is always `None`, as the keys `validation.clean` and `validation.other` do not exist in the `local_extracted_archive`.
When defining the `audio_file` in `_generate_examples`, since `local_extracted_archive` is always `None`, we always omit the `local_extracted_archive` path from the `audio_file` path, **even** in non-streaming mode:
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L259-L263
Thus, `audio_file` will only ever be the streaming path (`audio_file`, not `os.path.join(local_extracted_archive, audio_file)`).
This PR fixes the `.get()` keys for the `local_extracted_archive` for the dev splits.
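A minimal sketch of the change (the surrounding split-generator code is paraphrased; only the `.get()` keys are the point here):

```python
datasets.SplitGenerator(
    name="validation.clean",
    gen_kwargs={
        # before: local_extracted_archive.get("validation.clean") -> always None
        "local_extracted_archive": local_extracted_archive.get("dev.clean"),
        "files": dl_manager.iter_archive(archive_path["dev.clean"]),
    },
)
```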
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4904/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4904/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3938/comments | https://api.github.com/repos/huggingface/datasets/issues/3938/events | https://github.com/huggingface/datasets/pull/3938 | 1,170,875,417 | PR_kwDODunzps40hnjM | 3,938 | Avoid info log messages from transformers in FrugalScore metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3938). All of your documentation changes will be reflected on that endpoint."
] | "2022-03-16T11:11:29Z" | "2022-03-17T08:37:25Z" | "2022-03-17T08:37:24Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3938",
"merged_at": "2022-03-17T08:37:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3938"
} | Fix #3928. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3938/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3938/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1840/comments | https://api.github.com/repos/huggingface/datasets/issues/1840/events | https://github.com/huggingface/datasets/issues/1840 | 803,560,039 | MDU6SXNzdWU4MDM1NjAwMzk= | 1,840 | Add common voice | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | [] | null | [
"I have started working on adding this dataset.",
"Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the download link somehow from the XLM tree of the website \r\n2) If this doesn't work we force the user to download the data himself and add a `\"data_dir\"` as an input parameter. E.g. you can take a look at how it is done for [this](https://github.com/huggingface/datasets/blob/66f2a7eece98d2778bd22bb5034cb7c2376032d4/datasets/arxiv_dataset/arxiv_dataset.py#L66) \r\n\r\nAlso the documentation here: https://huggingface.co/docs/datasets/add_dataset.html?highlight=data_dir#downloading-data-files-and-organizing-splits (especially the \"note\") might be helpful.",
"Let me know if you have any other questions",
"I added a Work in Progress pull request (hope that is ok). I've made a card for the dataset and filled out the common_voice.py file with information about the datset (not completely).\r\n\r\nI didn't manage to get the tagging tool working locally on my machine but will look into that later.\r\n\r\nLeft to do.\r\n\r\n- Tag the dataset\r\n- Add missing information and update common_voice.py\r\n\r\nhttps://github.com/huggingface/datasets/pull/1886",
"Awesome! I left a longer comment on the PR :-)",
"I saw that this current datasets package holds common voice version 6.1, how to add the new version 7.0 that is already available?",
"Will me merged next week - we're working on it :-)",
"Common voice still appears to be a 6.1. Is the plan still to upgrade to 7.0?",
"We actually already have the code and everything ready to add Common Voice 7.0 to `datasets` but are still waiting for the common voice authors to give us the green light :-) \r\n\r\nAlso gently pinging @phirework and @milupo here",
"Common Voice 7.0 is available here now: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0",
"For anyone else stumbling upon this thread, the 8.0 version is also available now: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0"
] | "2021-02-08T13:21:05Z" | "2022-03-20T15:23:40Z" | "2021-03-15T05:56:21Z" | MEMBER | null | null | null | ## Adding a Dataset
- **Name:** *common voice*
- **Description:** *Mozilla Common Voice Dataset*
- **Paper:** Homepage: https://voice.mozilla.org/en/datasets
- **Data:** https://voice.mozilla.org/en/datasets
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/common_voice
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1840/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5405/comments | https://api.github.com/repos/huggingface/datasets/issues/5405/events | https://github.com/huggingface/datasets/issues/5405 | 1,517,879,386 | I_kwDODunzps5aeQBa | 5,405 | size_in_bytes the same for all splits | {
"avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4",
"events_url": "https://api.github.com/users/Breakend/events{/privacy}",
"followers_url": "https://api.github.com/users/Breakend/followers",
"following_url": "https://api.github.com/users/Breakend/following{/other_user}",
"gists_url": "https://api.github.com/users/Breakend/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Breakend",
"id": 1609857,
"login": "Breakend",
"node_id": "MDQ6VXNlcjE2MDk4NTc=",
"organizations_url": "https://api.github.com/users/Breakend/orgs",
"received_events_url": "https://api.github.com/users/Breakend/received_events",
"repos_url": "https://api.github.com/users/Breakend/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Breakend/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Breakend/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Breakend"
} | [] | open | false | null | [] | null | [
"Hi @Breakend,\r\n\r\nIndeed, the attribute `size_in_bytes` refers to the size of the entire dataset configuration, for all splits (size of downloaded files + Arrow files), not the specific split.\r\nThis is also the case for `download_size` (downloaded files) and `dataset_size` (Arrow files).\r\n\r\nThe size of the Arrow files for a specific split can be accessed: e.g. size of the \"test\" split only\r\n```python\r\nds[\"train\"].info.splits[\"test\"].num_bytes\r\n```\r\n\r\nI agree this is confusing and maybe we should improve it."
] | "2023-01-03T20:25:48Z" | "2023-01-04T09:22:59Z" | null | NONE | null | null | null | ### Describe the bug
Hi, it looks like whenever you pull a dataset and get size_in_bytes, it returns the same size for all splits (and that size is the combined size of all splits). It seems like this shouldn't be the intended behavior since it is misleading. Here's an example:
```
>>> from datasets import load_dataset
>>> x = load_dataset("glue", "wnli")
Found cached dataset glue (/Users/breakend/.cache/huggingface/datasets/glue/wnli/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1097.70it/s]
>>> x["train"].size_in_bytes
186159
>>> x["validation"].size_in_bytes
186159
>>> x["test"].size_in_bytes
186159
>>>
```
### Steps to reproduce the bug
```
>>> from datasets import load_dataset
>>> x = load_dataset("glue", "wnli")
>>> x["train"].size_in_bytes
186159
>>> x["validation"].size_in_bytes
186159
>>> x["test"].size_in_bytes
186159
```
### Expected behavior
The expected behavior is that it should return the separate sizes for all splits.
### Environment info
- `datasets` version: 2.7.1
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5405/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5405/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2815/comments | https://api.github.com/repos/huggingface/datasets/issues/2815/events | https://github.com/huggingface/datasets/pull/2815 | 973,862,024 | MDExOlB1bGxSZXF1ZXN0NzE1MjUxNDQ5 | 2,815 | Tiny typo fixes of "fo" -> "of" | {
"avatar_url": "https://avatars.githubusercontent.com/u/9934829?v=4",
"events_url": "https://api.github.com/users/aronszanto/events{/privacy}",
"followers_url": "https://api.github.com/users/aronszanto/followers",
"following_url": "https://api.github.com/users/aronszanto/following{/other_user}",
"gists_url": "https://api.github.com/users/aronszanto/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aronszanto",
"id": 9934829,
"login": "aronszanto",
"node_id": "MDQ6VXNlcjk5MzQ4Mjk=",
"organizations_url": "https://api.github.com/users/aronszanto/orgs",
"received_events_url": "https://api.github.com/users/aronszanto/received_events",
"repos_url": "https://api.github.com/users/aronszanto/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aronszanto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aronszanto/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aronszanto"
} | [] | closed | false | null | [] | null | [] | "2021-08-18T16:36:11Z" | "2021-08-19T08:03:02Z" | "2021-08-19T08:03:02Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2815",
"merged_at": "2021-08-19T08:03:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2815"
} | Noticed a few of these when reading docs- feel free to ignore the PR and just fix on some main contributor branch if more helpful. Thanks for the great library! :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2815/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2815/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2746/comments | https://api.github.com/repos/huggingface/datasets/issues/2746/events | https://github.com/huggingface/datasets/issues/2746 | 958,551,619 | MDU6SXNzdWU5NTg1NTE2MTk= | 2,746 | Cannot load `few-nerd` dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4",
"events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}",
"followers_url": "https://api.github.com/users/Mehrad0711/followers",
"following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mehrad0711",
"id": 28717374,
"login": "Mehrad0711",
"node_id": "MDQ6VXNlcjI4NzE3Mzc0",
"organizations_url": "https://api.github.com/users/Mehrad0711/orgs",
"received_events_url": "https://api.github.com/users/Mehrad0711/received_events",
"repos_url": "https://api.github.com/users/Mehrad0711/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mehrad0711"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi @Mehrad0711,\r\n\r\nI'm afraid there is no \"canonical\" Hugging Face dataset named \"few-nerd\".\r\n\r\nThere are 2 kinds of datasets hosted at the Hugging Face Hub:\r\n- canonical datasets (their identifier contains no slash \"/\"): we, the Hugging Face team, supervise their implementation and we make sure they work correctly by means of our test suite\r\n- community datasets (their identifier contains a slash \"/\", where before the slash it is the username or the organization name): those datasets are uploaded to the Hub by the community, and we, the Hugging Face team, do not supervise them; it is the responsibility of the user/organization implementing them properly if they want them to be used by other users.\r\n\r\nIn this specific case, there is no \"canonical\" dataset named \"few-nerd\". On the other hand, there are two \"community\" datasets named \"few-nerd\":\r\n- [\"nbroad/few-nerd\"](https://huggingface.co/datasets/nbroad/few-nerd)\r\n- [\"dfki-nlp/few-nerd\"](https://huggingface.co/datasets/dfki-nlp/few-nerd)\r\n\r\nIf they were properly implemented, you should be able to load them this way:\r\n```python\r\n# \"nbroad/few-nerd\" community dataset\r\nds = load_dataset(\"nbroad/few-nerd\", \"supervised\")\r\n\r\n# \"dfki-nlp/few-nerd\" community dataset\r\nds = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\n```\r\n\r\nHowever, they are not correctly implemented and both of them give errors:\r\n- \"nbroad/few-nerd\":\r\n ```\r\n TypeError: expected str, bytes or os.PathLike object, not dict\r\n ```\r\n- \"dfki-nlp/few-nerd\":\r\n ```\r\n ConnectionError: Couldn't reach https://cloud.tsinghua.edu.cn/f/09265750ae6340429827/?dl=1\r\n ```\r\n\r\nYou could try to contact their users/organizations to inform them about their bugs and ask them if they are planning to fix them. Alternatively you could try to implement your own script for this dataset.",
"Thanks @albertvillanova for your detailed explanation! I will resort to my own scripts for now. ",
"Hello, @Mehrad0711; Hi, @albertvillanova !\r\nI am the maintainer of the `dfki/few-nerd\" dataset script, sorry for the very late reply and hope this message finds you well!\r\nWe should use\r\n```\r\ndataset = load_dataset(\"dfki-nlp/few-nerd\", name=\"supervised\")\r\n```\r\ninstead of not specifying the \"name\" argument, where name is from `[\"supervised\", \"inter\", \"intra\"]`. Otherwise the method just treats \"supervised\" as `split`, which we reserve after specifying the name, since for each name, there are three splits: train, dev and test.\r\n\r\nAlso we use Tsinghua server source to download data files since it is the official source referred in the paper where the dataset is released (even though it is cc-by-sa-4.0 licensed, means we can copy the data anywhere after mentioning the license\r\n). Sometimes the server just runs down due to high pressure, kinda weird (we encountered the same server problem serveral times a month when we conducted experiments on Few-NERD XD). I tried the script just now and it works perfectly!\r\n```\r\n>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 131767\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 18824\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 37648\r\n })\r\n})\r\n>>> dataset[\"train\"]\r\nDataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 131767\r\n})\r\n>>> dataset[\"train\"][0]\r\n{'id': '0', 'tokens': ['Paul', 'International', 'airport', '.'], 'ner_tags': [0, 0, 0, 0], 'fine_ner_tags': [0, 0, 0, 0]}\r\n```\r\nAnyways if you cannot stand the pain with the server and its slow download speed, you can also download the `dfki/few-nerd.py` script from HF and change the `_URLs` to your personal drive (after you once successfully download the data and upload to your cloud drive), and then load the .py script locally.\r\n\r\nHope this reply can still be any help. If you still have problems with it, feel free to ask here and I am glad to help!\r\nBest wishes.",
"Hi @chen-yuxuan, thanks for your answer.\r\n\r\nJust a few comments:\r\n\r\n- Please, note that as we use `datasets.load_dataset` implementation, we can pass the configuration name as the second positional argument (no need to pass explicitly `name=`) and it downloads the 3 splits:\r\n```python\r\n In [4]: ds = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.5k/11.5k [00:00<00:00, 2.85MB/s]\r\nDownloading and preparing dataset few_nerd/supervised to .cache\\huggingface\\datasets\\dfki-nlp___few_nerd\\supervised\\0.0.0\\e40882b71f037a4a1f232025899170fbe8113cd2f4a26dddd2add7222a077255...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.6M/14.6M [01:16<00:00, 190kB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.9M/11.9M [01:14<00:00, 160kB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.0M/12.0M [01:04<00:00, 186kB/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [03:58<00:00, 79.45s/it]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 3.11it/s]\r\n```\r\n\r\n- On the other hand, please note that your script does not work on Windows machines, because you call `open()` without passing the encoding parameter:\r\n```\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\dfki-nlp___few-nerd\\e40882b71f037a4a1f232025899170fbe8113cd2f4a26dddd2add7222a077255\\few-nerd.py in <genexpr>(.0)\r\n 276 assert filepath[-4:] == \".txt\"\r\n 277\r\n--> 278 num_lines = sum(1 for _ in open(filepath))\r\n 279 id = 0\r\n 280\r\n\r\n.venv\\lib\\encodings\\cp1252.py in decode(self, input, final)\r\n 21 class IncrementalDecoder(codecs.IncrementalDecoder):\r\n 22 def decode(self, input, final=False):\r\n---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]\r\n 24\r\n 25 class StreamWriter(Codec,codecs.StreamWriter):\r\n\r\nUnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 5238: character maps to <undefined>\r\n```\r\n\r\nIf you would like your script to be usable on Windows machines, you should pass `encoding=\"utf-8\"` to every `open()` function:\r\n- line 278: `num_lines = sum(1 for _ in open(filepath, encoding=\"utf-8\"))`\r\n- line 281: `with open(filepath, \"r\", encoding=\"utf-8\")`",
"Thank you @albertvillanova for your detailed feedback!\r\n\r\n> no need to pass explicitly `name=`\r\n\r\nGood catch! I thought `split` stands before `name` in the argument list... but now it is all clear to me, sounds cool! Thanks for the explanation.\r\n\r\nAnyways in our old code it still looks bit confusing if we only want one split but the function downloads all, so to allow efficient downloading, I optimized the code a bit so that only the specified split data is downloaded. now we get\r\n```\r\n>>> x = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\nDownloading and preparing dataset few_nerd/supervised to /home/user/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/8e7ab598946cd5b395dcec6ea239123c8dff5b58b8e1c03b0c595b540248a885...\r\nDownloading: 100%|███████████████████████████████████████████████████████████████████| 14.6M/14.6M [01:01<00:00, 238kB/s]\r\n100%|██████████████████████████████████████████████████████████████████████| 3359329/3359329 [00:12<00:00, 275462.84it/s]\r\n100%|████████████████████████████████████████████████████████████████████████| 482037/482037 [00:01<00:00, 278633.64it/s]\r\n100%|████████████████████████████████████████████████████████████████████████| 958765/958765 [00:03<00:00, 267472.83it/s]\r\nDataset few_nerd downloaded and prepared to /home/user/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/8e7ab598946cd5b395dcec6ea239123c8dff5b58b8e1c03b0c595b540248a885. Subsequent calls will reuse this data.\r\n```\r\nwhere only one progress bar indicates downloading, and the three others just indicate pre-processing for the train, dev, test set.\r\n\r\nFor the encoding issue, I have made corresponding changes for the two lines you pointed out. However, I have no windows machine at hand, I would really appreciate it if you could help test on your end.\r\n\r\nAll the updates are uploaded to HF under `dfki-nlp` account where I am working for. \r\nThank you again for your kind help!\r\n",
"Hi @chen-yuxuan,\r\n\r\nI have tested on Windows and now it works perfectly, after the fixing of the encoding issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"dfki-nlp/few-nerd\", \"supervised\")\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.5k/11.5k [00:00<?, ?B/s]\r\nDownloading and preparing dataset few_nerd/supervised to C:\\Users\\username\\.cache\\huggingface\\datasets\\dfki-nlp___few_nerd\\supervised\\0.0.0\\e1ceeaee82073fea12206e4461c7cfcd67e68c8f3ebeca179bddcacee00c4511...\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3359329/3359329 [00:25<00:00, 129427.23it/s]\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 482037/482037 [00:03<00:00, 134513.66it/s]\r\n100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 958765/958765 [00:06<00:00, 143152.35it/s]\r\nDataset few_nerd downloaded and prepared to C:\\Users\\username\\.cache\\huggingface\\datasets\\dfki-nlp___few_nerd\\supervised\\0.0.0\\e1ceeaee82073fea12206e4461c7cfcd67e68c8f3ebeca179bddcacee00c4511. Subsequent calls will reuse this data.765 [00:06<00:00, 139045.03it/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 174.71it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 131767\r\n })\r\n validation: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 18824\r\n })\r\n test: Dataset({\r\n features: ['id', 'tokens', 'ner_tags', 'fine_ner_tags'],\r\n num_rows: 37648\r\n })\r\n})\r\n```"
] | "2021-08-02T22:18:57Z" | "2021-11-16T08:51:34Z" | "2021-08-03T19:45:43Z" | NONE | null | null | null | ## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached version of the module from /Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53 (last modified on Wed Jun 2 11:34:25 2021) since it couldn't be found locally at /Users/Mehrad/Documents/GitHub/genienlp/few-nerd/few-nerd.py, or remotely (FileNotFoundError).
Downloading and preparing dataset few_nerd/supervised (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/Mehrad/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53...
Traceback (most recent call last):
File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split
disable=bool(logging.get_verbosity() == logging.NOTSET),
File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53/few-nerd.py", line 196, in _generate_examples
with open(filepath, encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/Mehrad/.cache/huggingface/datasets/downloads/supervised/train.json'
```
The bug is probably in identifying and downloading the dataset. If I download the json splits directly from [link](https://github.com/nbroad1881/few-nerd/tree/main/uncompressed) and put them under the downloads directory, they will be processed into arrow format correctly.
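For reference, the discussion in the comments points at the community copies of this dataset rather than a canonical `few-nerd`; a minimal sketch of the call reported to work there (script and configuration name taken from those comments) would be:
```python
from datasets import load_dataset

# community dataset ("dfki-nlp" namespace), "supervised" configuration;
# the three splits are train / validation / test
ds = load_dataset("dfki-nlp/few-nerd", "supervised")
print(ds["train"][0])
```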
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Python version: 3.8
- PyArrow version: 1.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2746/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2746/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5794/comments | https://api.github.com/repos/huggingface/datasets/issues/5794/events | https://github.com/huggingface/datasets/issues/5794 | 1,685,196,061 | I_kwDODunzps5kcg0d | 5,794 | CI ZeroDivisionError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [] | "2023-04-26T14:55:23Z" | "2023-04-26T14:55:23Z" | null | MEMBER | null | null | null | Sometimes when running our CI on Windows, we get a ZeroDivisionError:
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - ZeroDivisionError: float division by zero
```
See for example:
- https://github.com/huggingface/datasets/actions/runs/4809358266/jobs/8560513110
- https://github.com/huggingface/datasets/actions/runs/4798359836/jobs/8536573688
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
split = 'test', start_time = 1682516718.8236516, num_samples = 2, num_steps = 1
def speed_metrics(split, start_time, num_samples=None, num_steps=None):
"""
Measure and return speed performance metrics.
This function requires a time snapshot `start_time` before the operation to be measured starts and this function
should be run immediately after the operation to be measured has completed.
Args:
- split: name to prefix metric (like train, eval, test...)
- start_time: operation start time
- num_samples: number of samples processed
"""
runtime = time.time() - start_time
result = {f"{split}_runtime": round(runtime, 4)}
if num_samples is not None:
> samples_per_second = num_samples / runtime
E ZeroDivisionError: float division by zero
C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\transformers\trainer_utils.py:354: ZeroDivisionError
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5794/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5794/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5365/comments | https://api.github.com/repos/huggingface/datasets/issues/5365/events | https://github.com/huggingface/datasets/pull/5365 | 1,498,422,466 | PR_kwDODunzps5Fi6ZD | 5,365 | fix: image array should support other formats than uint8 | {
"avatar_url": "https://avatars.githubusercontent.com/u/30353?v=4",
"events_url": "https://api.github.com/users/vigsterkr/events{/privacy}",
"followers_url": "https://api.github.com/users/vigsterkr/followers",
"following_url": "https://api.github.com/users/vigsterkr/following{/other_user}",
"gists_url": "https://api.github.com/users/vigsterkr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vigsterkr",
"id": 30353,
"login": "vigsterkr",
"node_id": "MDQ6VXNlcjMwMzUz",
"organizations_url": "https://api.github.com/users/vigsterkr/orgs",
"received_events_url": "https://api.github.com/users/vigsterkr/received_events",
"repos_url": "https://api.github.com/users/vigsterkr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vigsterkr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vigsterkr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vigsterkr"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, thanks for working on this! \r\n\r\nI agree that the current type-casting (always cast to `np.uint8` as Tensorflow Datasets does) is a bit too harsh. However, not all dtypes are supported in `Image.fromarray` (e.g. np.int64), so we need to treat these with special care (e.g. downcast to the closest supported dtype, maybe with warnings to let the user know what's happening).\r\n\r\nPS: To avoid the CI failures, we need to handle two more instances of the cast to `np.uint8` (both are in the `image.py` file).",
"I've made some changes to the PR.\r\n\r\nNow the encoding procedure behaves as follows:\r\n* for multi-channel arrays: if their dtype is `int`/`uint`, cast to np.uint8 (the only supported dtype for multi-channel arrays), throw an error otherwise\r\n* if the array dtype is of valid kind (\"u\", \"i\", \"f\", ...):\r\n * don't do anything if Pillow natively supports it\r\n * otherwise, downcast until it becomes compatible with Pillow\r\n* raise an error if nothing from above is true",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009537 / 0.011353 (-0.001816) | 0.004946 / 0.011008 (-0.006062) | 0.100552 / 0.038508 (0.062043) | 0.035119 / 0.023109 (0.012009) | 0.295989 / 0.275898 (0.020091) | 0.361326 / 0.323480 (0.037846) | 0.007608 / 0.007986 (-0.000378) | 0.004151 / 0.004328 (-0.000177) | 0.077301 / 0.004250 (0.073050) | 0.042921 / 0.037052 (0.005869) | 0.304804 / 0.258489 (0.046315) | 0.345934 / 0.293841 (0.052093) | 0.038987 / 0.128546 (-0.089559) | 0.012055 / 0.075646 (-0.063591) | 0.334035 / 0.419271 (-0.085236) | 0.052679 / 0.043533 (0.009146) | 0.291700 / 0.255139 (0.036561) | 0.335423 / 0.283200 (0.052223) | 0.107002 / 0.141683 (-0.034680) | 1.516780 / 1.452155 (0.064625) | 1.514137 / 1.492716 (0.021420) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014719 / 0.018006 (-0.003287) | 0.545251 / 0.000490 (0.544761) | 0.004719 / 0.000200 (0.004519) | 0.000275 / 0.000054 (0.000220) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026633 / 0.037411 (-0.010779) | 0.106911 / 0.014526 (0.092385) | 0.120258 / 0.176557 (-0.056299) | 0.156196 / 0.737135 (-0.580940) | 0.123132 / 0.296338 (-0.173207) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398018 / 0.215209 (0.182809) | 3.973992 / 2.077655 (1.896337) | 1.776436 / 1.504120 (0.272316) | 1.579036 / 1.541195 (0.037841) | 1.643345 / 1.468490 
(0.174855) | 0.692408 / 4.584777 (-3.892369) | 3.757243 / 3.745712 (0.011531) | 3.226212 / 5.269862 (-2.043649) | 1.797845 / 4.565676 (-2.767831) | 0.085878 / 0.424275 (-0.338398) | 0.012451 / 0.007607 (0.004844) | 0.509755 / 0.226044 (0.283711) | 5.029035 / 2.268929 (2.760107) | 2.255507 / 55.444624 (-53.189117) | 1.892868 / 6.876477 (-4.983609) | 1.900017 / 2.142072 (-0.242055) | 0.853965 / 4.805227 (-3.951263) | 0.167268 / 6.500664 (-6.333396) | 0.062796 / 0.075469 (-0.012673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.183361 / 1.841788 (-0.658427) | 15.103797 / 8.074308 (7.029489) | 14.112931 / 10.191392 (3.921539) | 0.167234 / 0.680424 (-0.513190) | 0.029487 / 0.534201 (-0.504713) | 0.444121 / 0.579283 (-0.135162) | 0.437821 / 0.434364 (0.003457) | 0.544900 / 0.540337 (0.004562) | 0.642142 / 1.386936 (-0.744794) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007078 / 0.011353 (-0.004275) | 0.004983 / 0.011008 (-0.006026) | 0.097106 / 0.038508 (0.058598) | 0.033747 / 0.023109 (0.010637) | 0.382030 / 0.275898 (0.106132) | 0.410193 / 0.323480 (0.086713) | 0.006658 / 0.007986 (-0.001327) | 0.005358 / 0.004328 (0.001029) | 0.073878 / 0.004250 (0.069628) | 0.049292 / 0.037052 (0.012240) | 0.384053 / 0.258489 (0.125564) | 0.427826 / 0.293841 (0.133985) | 0.036780 / 0.128546 (-0.091766) | 0.012469 / 0.075646 (-0.063178) | 0.332989 / 0.419271 (-0.086283) | 0.059531 / 0.043533 (0.015998) | 0.378431 / 0.255139 (0.123292) | 0.402672 / 0.283200 (0.119473) | 0.110782 / 0.141683 (-0.030901) | 1.484570 / 1.452155 (0.032416) | 1.608081 / 1.492716 (0.115365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232356 / 0.018006 (0.214350) | 0.545648 / 0.000490 (0.545158) | 0.003113 / 0.000200 (0.002913) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028138 / 0.037411 (-0.009273) | 0.110786 / 0.014526 (0.096260) | 0.123615 / 0.176557 (-0.052941) | 0.165773 / 0.737135 (-0.571362) | 0.126401 / 0.296338 (-0.169937) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440518 / 0.215209 (0.225309) | 4.393821 / 2.077655 (2.316166) | 2.295479 / 1.504120 (0.791359) | 2.116679 / 1.541195 (0.575485) | 2.215561 / 1.468490 (0.747071) | 0.722343 / 4.584777 (-3.862434) | 3.783360 / 3.745712 (0.037647) | 3.302242 / 5.269862 (-1.967620) | 1.681535 / 4.565676 (-2.884142) | 0.085738 / 0.424275 (-0.338537) | 0.012373 / 0.007607 (0.004766) | 0.540499 / 0.226044 (0.314455) | 5.384915 / 2.268929 (3.115986) | 2.766346 / 55.444624 (-52.678279) | 2.451994 / 6.876477 (-4.424483) | 2.505720 / 2.142072 (0.363647) | 0.833006 / 4.805227 (-3.972221) | 0.168206 / 6.500664 (-6.332458) | 0.064971 / 0.075469 (-0.010498) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253499 / 1.841788 (-0.588289) | 15.381840 / 8.074308 (7.307532) | 13.519493 / 10.191392 (3.328101) | 0.165559 / 0.680424 (-0.514865) | 0.017682 / 0.534201 (-0.516519) | 0.422248 / 0.579283 (-0.157035) | 0.422750 / 0.434364 (-0.011614) | 0.524546 / 0.540337 (-0.015792) | 0.626956 / 1.386936 (-0.759980) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d9a8d8af0961c473103516dd018e2d34d23cea02 \"CML watermark\")\n"
] | "2022-12-15T13:17:50Z" | "2023-01-26T18:46:45Z" | "2023-01-26T18:39:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5365.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5365",
"merged_at": "2023-01-26T18:39:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5365.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5365"
} | Currently, images that are provided as ndarrays but are not in `uint8` format are going to lose data. For example, for a depth image whose data is in float32 format, the type-casting to uint8 will basically make the whole image blank.
`PIL.Image.fromarray` [does support mode `F`](https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes).
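A minimal illustration of the difference (the array values here are made up and are not part of the PR):
```python
import numpy as np
from PIL import Image

depth = np.random.rand(64, 64).astype(np.float32)  # e.g. a depth map with values in [0, 1)

blank = Image.fromarray(depth.astype(np.uint8))  # casting to uint8 first: nearly all values become 0, i.e. a blank image
kept = Image.fromarray(depth, mode="F")          # 32-bit floating point mode: the data is preserved
```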
Maybe some further metadata could also be supplied via the [Image](https://huggingface.co/docs/datasets/v2.7.1/en/package_reference/main_classes#datasets.Image) object. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5365/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5365/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3818/comments | https://api.github.com/repos/huggingface/datasets/issues/3818/events | https://github.com/huggingface/datasets/issues/3818 | 1,158,788,545 | I_kwDODunzps5FEbXB | 3,818 | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI | {
"avatar_url": "https://avatars.githubusercontent.com/u/6901031?v=4",
"events_url": "https://api.github.com/users/lmvasque/events{/privacy}",
"followers_url": "https://api.github.com/users/lmvasque/followers",
"following_url": "https://api.github.com/users/lmvasque/following{/other_user}",
"gists_url": "https://api.github.com/users/lmvasque/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lmvasque",
"id": 6901031,
"login": "lmvasque",
"node_id": "MDQ6VXNlcjY5MDEwMzE=",
"organizations_url": "https://api.github.com/users/lmvasque/orgs",
"received_events_url": "https://api.github.com/users/lmvasque/received_events",
"repos_url": "https://api.github.com/users/lmvasque/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lmvasque/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmvasque/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lmvasque"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi, thanks for reporting! We can add a `sources: datasets.Value(\"string\")` feature to the `Features` dict in the `SARI` script to fix this. Would you be interested in submitting a PR?",
"Hi Mario,\r\n\r\nThanks for your message. I did try to add `sources` into the `Features` dict using a script for the metric:\r\n```\r\n features=datasets.Features(\r\n {\r\n \"sources\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"predictions\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"references\": datasets.Sequence(datasets.Value(\"string\", id=\"sequence\"), id=\"references\"),\r\n }\r\n ),\r\n```\r\n\r\nBut that only avoids a failure in `encode_batch` in the `add_batch` method:\r\n```\r\n batch = {\"predictions\": predictions, \"references\": references}\r\n batch = self.info.features.encode_batch(batch)\r\n```\r\n\r\nThe real problem is that `add_batch()`, `add()` and `compute()` does not receive a `sources` param:\r\n```\r\ndef add_batch(self, *, predictions=None, references=None):\r\ndef add(self, *, prediction=None, reference=None):\r\ndef compute(self, *, predictions=None, references=None, **kwargs)\r\n```\r\n\r\nAnd then, it fails:\r\n`TypeError: add_batch() got an unexpected keyword argument sources`\r\n\r\nI need this for adding any metric based on SARI or alike, not only for sari.py :)\r\n\r\nLet me know if I understood correctly the proposed solution.\r\n",
"The `Metric` class has been modified recently to support this use-case, but the `add_batch` + `compute` pattern still doesn't work correctly. I'll open a PR."
] | "2022-03-03T18:57:54Z" | "2022-03-04T18:04:21Z" | "2022-03-04T18:04:21Z" | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) do not work with the [SARI](https://github.com/huggingface/datasets/blob/master/metrics/sari/sari.py) metric. This metric relies not only on the predictions and references, but also on the input (source) sentences.
For example, when the `add_batch` method is used, the `compute()` method fails:
```
metric = load_metric("sari")
metric.add_batch(
predictions=["About 95 you now get in ."],
references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]])
metric.compute()
> TypeError: _compute() missing 1 required positional argument: 'sources'
```
Therefore, the `compute()` method can only be used standalone:
```
metric = load_metric("sari")
result = metric.compute(
sources=["About 95 species are currently accepted ."],
predictions=["About 95 you now get in ."],
references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]])
> {'sari': 26.953601953601954}
```
**Describe the solution you'd like**
Support for an additional parameter `sources` in the `add_batch` and `add` methods of the `Metric` class.
```
add_batch(*, sources=None, predictions=None, references=None, **kwargs)
add(*, sources=None, predictions=None, references=None, **kwargs)
compute()
```
**Describe alternatives you've considered**
I've tried to override `add_batch` and `add`; however, these are highly dependent on the `Metric` class. We could also write a simple wrapper that computes the scores for a list of sentences (see the sketch below), but then we lose the functionality of the original [add](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add) and [add_batch](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add_batch) methods.
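A minimal sketch of such a wrapper (the class name is illustrative; it just buffers the inputs in plain Python lists and calls the standalone `compute()` with `sources` at the end, which already works as shown above):
```python
from datasets import load_metric


class BufferedSari:
    """Illustrative workaround, not the requested change to the `Metric` API."""

    def __init__(self):
        self._metric = load_metric("sari")
        self._sources, self._predictions, self._references = [], [], []

    def add_batch(self, *, sources, predictions, references):
        self._sources.extend(sources)
        self._predictions.extend(predictions)
        self._references.extend(references)

    def compute(self):
        return self._metric.compute(
            sources=self._sources,
            predictions=self._predictions,
            references=self._references,
        )
```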
**Additional context**
These methods are used in the transformers [pytorch examples](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3818/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3818/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2884/comments | https://api.github.com/repos/huggingface/datasets/issues/2884/events | https://github.com/huggingface/datasets/pull/2884 | 992,135,698 | MDExOlB1bGxSZXF1ZXN0NzMwNTA4MTE1 | 2,884 | Add IC, SI, ER tasks to SUPERB | {
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anton-l",
"id": 26864830,
"login": "anton-l",
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"repos_url": "https://api.github.com/users/anton-l/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anton-l"
} | [] | closed | false | null | [] | null | [
"Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: ",
"Thank you so much for adding these subsets @anton-l! \r\n\r\n> These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main\r\nAre we allowed to make these datasets public or would that violate the terms of their use?",
"@lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us. \nFor example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(",
"> @lewtun These ones all have non-permissive licences, so the mirrored data I linked is open only to the HF org for now. But we could try contacting the authors to ask if they'd like to host these with us.\r\n> For example VoxCeleb1 now has direct links (the ones in the script) that don't require form submission and passwords, but they ban IPs after each download for some reason :(\r\n\r\nI think there would be a lot of value added if the authors would be willing to host their data on the HF Hub! As an end-user of `datasets`, I've found I'm more likely to explore a dataset if I'm able to quickly pull the subsets without needing a manual download. Perhaps we can tell them that the Hub offers several advantages like versioning and interactive exploration (with `datasets-viewer`)?"
] | "2021-09-09T11:56:03Z" | "2021-09-20T09:17:58Z" | "2021-09-20T09:00:49Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2884.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2884",
"merged_at": "2021-09-20T09:00:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2884.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2884"
} | This PR adds 3 additional classification tasks to SUPERB
#### Intent Classification
Dataset URL seems to be down at the moment :( See the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands
#### Speaker Identification
Manual download script:
```
mkdir VoxCeleb1
cd VoxCeleb1
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad
cat vox1_dev* > vox1_dev_wav.zip
unzip vox1_dev_wav.zip
wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip
unzip vox1_test_wav.zip
# download the official SUPERB train-dev-test split
wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt
```
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification
#### Emotion Recognition
Manual download requires going through a slow application process; see the note below.
S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py
Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition
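For reference, a rough usage sketch once these configurations are merged (config names follow the PR title; the `data_dir` paths are placeholders and only apply where a manual download is needed, as described above):
```python
from datasets import load_dataset

ic = load_dataset("superb", "ic")                                # Intent Classification (source URL may require the mirrored data)
si = load_dataset("superb", "si", data_dir="path/to/VoxCeleb1")  # Speaker Identification (manual download)
er = load_dataset("superb", "er", data_dir="path/to/IEMOCAP")    # Emotion Recognition (manual download)
```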
#### :warning: Note
These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2884/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2884/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3798/comments | https://api.github.com/repos/huggingface/datasets/issues/3798/events | https://github.com/huggingface/datasets/pull/3798 | 1,154,411,066 | PR_kwDODunzps4zrl5Y | 3,798 | Fix error message in CSV loader for newer Pandas versions | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | "2022-02-28T18:24:10Z" | "2022-02-28T18:51:39Z" | "2022-02-28T18:51:38Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3798",
"merged_at": "2022-02-28T18:51:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3798"
} | Fix the error message in the CSV loader for `Pandas >= 1.4` by printing the current file name directly in the for-loop. An alternative would be to use a check similar to this:
```python
csv_file_reader.handle.handle if datasets.config.PANDAS_VERSION >= version.parse("1.4") else csv_file_reader.f
```
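Wrapped into a small helper with its imports (attribute names follow the snippet above; treat this as an illustrative sketch):
```python
from packaging import version
import datasets


def current_csv_file(csv_file_reader):
    # Pandas >= 1.4 exposes the underlying file via .handle.handle, older versions via .f
    if datasets.config.PANDAS_VERSION >= version.parse("1.4"):
        return csv_file_reader.handle.handle
    return csv_file_reader.f
```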
CC: @SBrandeis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3798/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3798/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5838/comments | https://api.github.com/repos/huggingface/datasets/issues/5838/events | https://github.com/huggingface/datasets/issues/5838 | 1,703,210,848 | I_kwDODunzps5lhO9g | 5,838 | Streaming support for `load_from_disk` | {
"avatar_url": "https://avatars.githubusercontent.com/u/5437792?v=4",
"events_url": "https://api.github.com/users/Nilabhra/events{/privacy}",
"followers_url": "https://api.github.com/users/Nilabhra/followers",
"following_url": "https://api.github.com/users/Nilabhra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilabhra/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Nilabhra",
"id": 5437792,
"login": "Nilabhra",
"node_id": "MDQ6VXNlcjU0Mzc3OTI=",
"organizations_url": "https://api.github.com/users/Nilabhra/orgs",
"received_events_url": "https://api.github.com/users/Nilabhra/received_events",
"repos_url": "https://api.github.com/users/Nilabhra/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Nilabhra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilabhra/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Nilabhra"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"As the name says, `load_from_disk` load the data from your disk. If the data is hosted on S3, it is first downloaded locally and then loaded from your disk.\r\n\r\nThere is a discussion on streaming data from S3 here though: #5281 ",
"@lhoestq \r\nThanks for your comment. I have checked out the discussion before and attempted at replicating the mentioned changes in the main branch (#5580). What I found was that if a dataset is saved using `save_to_disk`, it cannot be read by `load_dataset`. The error message asks me to to use `load_from_disk` instead. What would be the correct way of saving the data in this scenario?",
"Using `push_to_hub` you can save the dataset on the HF Hub as parquet files, and reload it / stream it using `load_dataset` :)\r\n\r\nIf you want to save your dataset somewhere else you can use `.to_parquet` to get a parquet file. If your dataset is big it's usually recommended to shard it into multi parquet files (around 1GB each).",
"@lhoestq \r\nThanks for the explanation. Appreciate it. I'll try this out.",
"@lhoestq\r\nI tried the method you mentioned. This the current scenario I'm facing:\r\n\r\n- The parquet file can be read from disk and streaming can be enabled.\r\n- The parquet file can be read from `s3` (local MinIO).\r\n- When `streaming=True` is enabled for `s3`, I get the error mentioned below:\r\n\r\n```\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```\r\n\r\nDoes this mean there is a bug in the main branch?",
"Streaming from S3 is still experimental, there might be a few bugs unfortunately.\r\n\r\nCan you share the full stack trace ?",
"@lhoestq \r\nSure, here you go:\r\n\r\n```python\r\nTypeError Traceback (most recent call last)\r\nCell In[8], line 1\r\n----> 1 dataset = load_dataset(\"parquet\", data_files=[\"s3://<bucket name>/<data folder>/data-parquet\"], storage_options=fs.storage_options, streaming=True)\r\n\r\nFile ~/.../datasets/src/datasets/load.py:1790, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile ~/.../datasets/src/datasets/builder.py:1264, in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1257 dl_manager = StreamingDownloadManager(\r\n 1258 base_path=base_path or self.base_path,\r\n 1259 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1260 dataset_name=self.name,\r\n 1261 data_dir=self.config.data_dir,\r\n 1262 )\r\n 1263 self._check_manual_download(dl_manager)\r\n-> 1264 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1265 # By default, return all splits\r\n 1266 if split is None:\r\n\r\nFile ~/.../datasets/src/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager)\r\n 32 if not self.config.data_files:\r\n 33 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 34 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 35 if isinstance(data_files, (str, list, tuple)):\r\n 36 files = data_files\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1087, in StreamingDownloadManager.download_and_extract(self, url_or_urls)\r\n 1069 def download_and_extract(self, url_or_urls):\r\n 1070 \"\"\"Prepare given `url_or_urls` for streaming (add extraction protocol).\r\n 1071 \r\n 1072 This is the lazy version of `DownloadManager.download_and_extract` for streaming.\r\n (...)\r\n 1085 url(s): (`str` or `list` or `dict`), URL(s) to stream data from matching the given input `url_or_urls`.\r\n 1086 \"\"\"\r\n-> 1087 return self.extract(self.download(url_or_urls))\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1039, in StreamingDownloadManager.extract(self, url_or_urls)\r\n 1020 def extract(self, url_or_urls):\r\n 1021 \"\"\"Add extraction protocol for given url(s) for streaming.\r\n 1022 \r\n 1023 This is the lazy version of `DownloadManager.extract` for streaming.\r\n (...)\r\n 1037 ```\r\n 1038 \"\"\"\r\n-> 1039 urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)\r\n 1040 return urlpaths\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:443, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n--> 443 mapped = [\r\n 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, 
disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:444, in <listcomp>(.0)\r\n 441 num_proc = 1\r\n 442 if num_proc <= 1 or len(iterable) < parallel_min_length:\r\n 443 mapped = [\r\n--> 444 _single_map_nested((function, obj, types, None, True, None))\r\n 445 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)\r\n 446 ]\r\n 447 else:\r\n 448 num_proc = num_proc if num_proc <= len(iterable) else len(iterable)\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in _single_map_nested(args)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:363, in <listcomp>(.0)\r\n 361 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}\r\n 362 else:\r\n--> 363 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]\r\n 364 if isinstance(data_struct, list):\r\n 365 return mapped\r\n\r\nFile ~/.../datasets/src/datasets/utils/py_utils.py:346, in _single_map_nested(args)\r\n 344 # Singleton first to spare some computation\r\n 345 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 346 return function(data_struct)\r\n 348 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n 349 if rank is not None and logging.get_verbosity() < logging.WARNING:\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:1044, in StreamingDownloadManager._extract(self, urlpath)\r\n 1042 def _extract(self, urlpath: str) -> str:\r\n 1043 urlpath = str(urlpath)\r\n-> 1044 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n 1045 # get inner file: zip://train-00000.json.gz::https://foo.bar/data.zip -> zip://train-00000.json.gz\r\n 1046 path = urlpath.split(\"::\")[0]\r\n\r\nFile ~/.../datasets/src/datasets/download/streaming_download_manager.py:433, in _get_extraction_protocol(urlpath, use_auth_token)\r\n 431 else:\r\n 432 urlpath, kwargs = urlpath, {}\r\n--> 433 with fsspec.open(urlpath, **kwargs) as f:\r\n 434 return _get_extraction_protocol_with_magic_number(f)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/core.py:102, in OpenFile.__enter__(self)\r\n 99 def __enter__(self):\r\n 100 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 102 f = self.fs.open(self.path, mode=mode)\r\n 104 self.fobjects = [f]\r\n 106 if self.compression is not None:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1199, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1197 else:\r\n 1198 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1199 f = self._open(\r\n 1200 path,\r\n 1201 mode=mode,\r\n 1202 block_size=block_size,\r\n 1203 autocommit=ac,\r\n 1204 cache_options=cache_options,\r\n 1205 **kwargs,\r\n 1206 )\r\n 1207 if compression is not None:\r\n 1208 from fsspec.compression import compr\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:659, in S3FileSystem._open(self, path, mode, block_size, acl, version_id, fill_cache, cache_type, autocommit, requester_pays, cache_options, **kwargs)\r\n 656 if cache_type is None:\r\n 657 
cache_type = self.default_cache_type\r\n--> 659 return S3File(\r\n 660 self,\r\n 661 path,\r\n 662 mode,\r\n 663 block_size=block_size,\r\n 664 acl=acl,\r\n 665 version_id=version_id,\r\n 666 fill_cache=fill_cache,\r\n 667 s3_additional_kwargs=kw,\r\n 668 cache_type=cache_type,\r\n 669 autocommit=autocommit,\r\n 670 requester_pays=requester_pays,\r\n 671 cache_options=cache_options,\r\n 672 )\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:2043, in S3File.__init__(self, s3, path, mode, block_size, acl, version_id, fill_cache, s3_additional_kwargs, autocommit, cache_type, requester_pays, cache_options)\r\n 2041 self.details = s3.info(path)\r\n 2042 self.version_id = self.details.get(\"VersionId\")\r\n-> 2043 super().__init__(\r\n 2044 s3,\r\n 2045 path,\r\n 2046 mode,\r\n 2047 block_size,\r\n 2048 autocommit=autocommit,\r\n 2049 cache_type=cache_type,\r\n 2050 cache_options=cache_options,\r\n 2051 )\r\n 2052 self.s3 = self.fs # compatibility\r\n 2054 # when not using autocommit we want to have transactional state to manage\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1555, in AbstractBufferedFile.__init__(self, fs, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 1553 self.size = size\r\n 1554 else:\r\n-> 1555 self.size = self.details[\"size\"]\r\n 1556 self.cache = caches[cache_type](\r\n 1557 self.blocksize, self._fetch_range, self.size, **cache_options\r\n 1558 )\r\n 1559 else:\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/spec.py:1568, in AbstractBufferedFile.details(self)\r\n 1565 @property\r\n 1566 def details(self):\r\n 1567 if self._details is None:\r\n-> 1568 self._details = self.fs.info(self.path)\r\n 1569 return self._details\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:115, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def wrapper(*args, **kwargs):\r\n 114 self = obj or args[0]\r\n--> 115 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:100, in sync(loop, func, timeout, *args, **kwargs)\r\n 98 raise FSTimeoutError from return_result\r\n 99 elif isinstance(return_result, BaseException):\r\n--> 100 raise return_result\r\n 101 else:\r\n 102 return return_result\r\n\r\nFile ~/.../lib/python3.8/site-packages/fsspec/asyn.py:55, in _runner(event, coro, result, timeout)\r\n 53 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 54 try:\r\n---> 55 result[0] = await coro\r\n 56 except Exception as ex:\r\n 57 result[0] = ex\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:1248, in S3FileSystem._info(self, path, bucket, key, refresh, version_id)\r\n 1246 if key:\r\n 1247 try:\r\n-> 1248 out = await self._call_s3(\r\n 1249 \"head_object\",\r\n 1250 self.kwargs,\r\n 1251 Bucket=bucket,\r\n 1252 Key=key,\r\n 1253 **version_id_kw(version_id),\r\n 1254 **self.req_kw,\r\n 1255 )\r\n 1256 return {\r\n 1257 \"ETag\": out.get(\"ETag\", \"\"),\r\n 1258 \"LastModified\": out[\"LastModified\"],\r\n (...)\r\n 1264 \"ContentType\": out.get(\"ContentType\"),\r\n 1265 }\r\n 1266 except FileNotFoundError:\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:341, in S3FileSystem._call_s3(self, method, *akwarglist, **kwargs)\r\n 340 async def _call_s3(self, method, *akwarglist, **kwargs):\r\n--> 341 await self.set_session()\r\n 342 s3 = await self.get_s3(kwargs.get(\"Bucket\"))\r\n 343 method = getattr(s3, method)\r\n\r\nFile ~/.../lib/python3.8/site-packages/s3fs/core.py:502, in S3FileSystem.set_session(self, 
refresh, kwargs)\r\n 500 conf = AioConfig(**config_kwargs)\r\n 501 if self.session is None:\r\n--> 502 self.session = aiobotocore.session.AioSession(**self.kwargs)\r\n 504 for parameters in (config_kwargs, self.kwargs, init_kwargs, client_kwargs):\r\n 505 for option in (\"region_name\", \"endpoint_url\"):\r\n\r\nTypeError: __init__() got an unexpected keyword argument 'headers'\r\n```",
"Is `\"data-parquet\"` a file ? In `data_files` you should pass the paths to the parquet files (not to a directory). Glob patterns are not supported yet for S3 URLs.\r\n\r\nThe bug seems to happen because your provided data file has no extension. Because of that it tries to infer it from the file content, but fails because `_get_extraction_protocol` doesn't support S3 URLs yet.\r\n\r\n",
"@lhoestq \r\nThank you for your answer. Saving the file with `.parquet` extension solved the issue! This is really great! Really appreciate all the help! \r\n\r\nLet me know if I should close the issue or feel free to close it if you want.",
"Cool ! I'm glad it worked out :)\r\n\r\nSure feel free to close the issue, since the original question about streaming with load_from_disk has been answered anyway"
] | "2023-05-10T06:25:22Z" | "2023-05-12T09:37:45Z" | "2023-05-12T09:37:45Z" | NONE | null | null | null | ### Feature request
Support for streaming datasets stored in object stores in `load_from_disk`.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, the datasets that are stored in object stores are very large and being able to stream the data from the buckets becomes essential.
### Your contribution
I'd be happy to contribute this feature if I could get some guidance on how to do so. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5838/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5838/timeline | null | completed | false |
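The fix that closed the thread above — giving the remote shards an explicit `.parquet` extension and streaming them with `load_dataset` instead of `load_from_disk` — can be sketched roughly as follows. The bucket name, object key, and credentials are placeholders (not values from the issue), and exact behavior depends on the installed `datasets`/`s3fs` versions.

```python
from datasets import load_dataset

# Placeholder credentials — assumptions for illustration only.
storage_options = {"key": "<AWS_ACCESS_KEY_ID>", "secret": "<AWS_SECRET_ACCESS_KEY>"}

# Streaming works once the remote files carry an explicit .parquet extension,
# so the download manager does not have to sniff the format from file content.
ds = load_dataset(
    "parquet",
    data_files="s3://my-bucket/my-dataset/train-00000.parquet",
    split="train",
    streaming=True,
    storage_options=storage_options,
)

for example in ds.take(2):
    print(example)
```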
https://api.github.com/repos/huggingface/datasets/issues/2193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2193/comments | https://api.github.com/repos/huggingface/datasets/issues/2193/events | https://github.com/huggingface/datasets/issues/2193 | 853,725,707 | MDU6SXNzdWU4NTM3MjU3MDc= | 2,193 | Filtering/mapping on one column is very slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"events_url": "https://api.github.com/users/norabelrose/events{/privacy}",
"followers_url": "https://api.github.com/users/norabelrose/followers",
"following_url": "https://api.github.com/users/norabelrose/following{/other_user}",
"gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/norabelrose",
"id": 39116809,
"login": "norabelrose",
"node_id": "MDQ6VXNlcjM5MTE2ODA5",
"organizations_url": "https://api.github.com/users/norabelrose/orgs",
"received_events_url": "https://api.github.com/users/norabelrose/received_events",
"repos_url": "https://api.github.com/users/norabelrose/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions",
"type": "User",
"url": "https://api.github.com/users/norabelrose"
} | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | [
"Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoiding many arrow<->python conversions especially during writing.\r\n\r\nI'll let you know how it goes !",
"@lhoestq Thanks for the response— it's great to hear that we'll be getting a much faster `filter` method soon. However, my use case does also involve using `map` over a single column in order to pre-compute roughly uniformly sized batches, and right now that is also very slow. Is there any plan to make `map` faster for single column operations?\r\n\r\nIf that's not a priority for the maintainers right now, I could try my hand at adding the feature, but I can't guarantee I would do a good job given my lack of familiarity with pyarrow.",
"Currently the optimal setup for single-column computations is probably to do something like\r\n```python\r\nresult = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n```\r\nThis has two advantages:\r\n- input_columns=\"my_col\" allows to only read the column \"my_col\"\r\n- remove_columns=dataset.column_names makes `map` only keep the output of your function `f`, and it drops the other columns of the dataset instead of keeping them.\r\n\r\nLet me know if it improves speed on your side.\r\n\r\nYou can also get more speed by using `batched=True` and setting `num_proc=` for multiprocessing",
"Hi @lhoestq ,\r\n\r\nI'm hijacking this issue, because I'm currently trying to do the approach you recommend:\r\n\r\n> Currently the optimal setup for single-column computations is probably to do something like\r\n> \r\n> ```python\r\n> result = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n> ```\r\n\r\nHere is my code: (see edit, in which I added a simplified version\r\n\r\n```\r\nThis is the error:\r\n```bash\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000\r\n```\r\nI wonder why this error occurs, when I delete every column? Can you give me a hint?\r\n\r\n### Edit:\r\nI preprocessed my dataset before (using map with the features argument) and saved it to disk. May this be part of the error? I can iterate over the\r\ncomplete dataset and print every sample before calling map. There seems to be no other problem with the dataset.\r\n\r\nI tried to simplify the code that crashes:\r\n\r\n```python\r\n# works\r\nlog.debug(dataset.column_names)\r\nlog.debug(dataset)\r\nfor i, sample in enumerate(dataset):\r\n log.debug(i, sample)\r\n\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n)\r\n```\r\n\r\n```\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000\r\n```\r\n\r\nEdit2: \r\n\r\nMay this be a problem with a schema I set when preprocessing the dataset before? I tried to add the `features` argument to the function and then I get a new error:\r\n\r\n```python\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n features=datasets.Features(\r\n {\r\n \"a\": datasets.Sequence(datasets.Value(\"int32\"))\r\n }\r\n )\r\n)\r\n```\r\n\r\n```\r\n File \"env/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1704, in _map_single\r\n writer.write_batch(batch)\r\n File \"env/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 312, in write_batch\r\n col_type = schema.field(col).type if schema is not None else None\r\n File \"pyarrow/types.pxi\", line 1341, in pyarrow.lib.Schema.field\r\nKeyError: 'Column tokens does not exist in schema'\r\n```",
"Hi ! Can you open a separate issue for that ?\r\nAlso if you could provide a google colab or a sample code to reproduce this issue that would be helpful.\r\nOn my side I was not able to reproduce this error.",
"@lhoestq Sorry I'm just responding now. I'm currently using your recommendation for the map on a single column, and I've gotten it to be fast enough to sort of work for my use case by just setting `num_proc=10`, although it's still quite slow. It's clear that it is still loading the entirety of each row into memory and then discarding everything except the selected column, instead of exploiting the columnar data format to only load the selected column.\r\n\r\nMy code is like this:\r\n```\r\n self.dataset = self.dataset.sort('num_tokens')\r\n batch_dataset = self.dataset.map(\r\n\tcompute_uniform_sized_batches,\r\n\tbatched=True, batch_size=10_000, num_proc=10, input_columns=['num_tokens'],\r\n\tremove_columns=get_columns_all_equal(self.dataset),\r\n\twith_indices=True,\r\n\tfn_kwargs=dict(max_size=tokens_per_batch)\r\n)\r\nself.batches = {\r\n\tname: list(zip(split['start'], split['length']))\r\n\tfor name, split in batch_dataset.items()\r\n}\r\n```\r\nI find that the processes with higher IDs take significantly longer to complete, presumably because the dataset is sorted by article length and they're loading the entire article text into memory, instead of just the 'num_tokens' column.\r\n\r\nI should note that my batching procedure would work best if I just used `batch_size=None` and loaded the whole column into memory at once, but I found that this was intolerably slow and gave me no progress information, so I'm using the less than ideal `batch_size=10_000`.",
"Hi @norabelrose ! I'm glad you managed to make this work on your side.\r\nRegarding memory usage, you can try to drop the columns that you don't want to use for your `map` for now.\r\n\r\nIn the future we'll try to find a way to not load unnecessary columns in memory in `map`. Currently the way it works is that it gets the batch as a python dict, then it updates it using the output of your mapping function, and finally it removes columns from `remove_columns`. Therefore for a moment some columns are loaded in memory even if you remove them or don't use them for your mapping function.\r\n\r\nIt would be nice to have a way to optimize memory for cases such as yours !",
"@lhoestq After looking through the source code, it looks like the following solution has at least some chance of working:\r\n- refactor `Dataset.map()` so that the `input_columns` parameter is implemented by using the `self.formatted_as()` context manager with `columns=input_columns`\r\n- change `Dataset._getitem()` so that it passes `self._data.drop(drop_columns)` to the `query_table()` function whenever `format_columns` is non-None and `output_all_columns` is False, instead of `self._data` itself",
"Looks like a great direction :)\r\nNote that `query_table` doesn't bring data into memory. Only `format_table` does.\r\nAlso the dataset may already have a format with `columns=` already defined so we would need to define the formatted `input_dataset` like:\r\n```python\r\n# before the `map` main for loop\r\ninput_columns = input_columns if input_columns is not None else self.column_names\r\nif not self._output_all_columns:\r\n columns = [col for col in input_columns if self._format_columns is None or col in self._format_columns]\r\n input_dataset = self.with_format(\r\n type=self._format_type,\r\n columns=columns\r\n )\r\nelse:\r\n # in this case we could find a way to filter both format_columns and unformatted columns eventually\r\n input_dataset = self\r\n# then input_dataset can be used in the main for loop of `map`\r\n```\r\n\r\nEDIT: oh and regarding streaming format versus file format for arrow, we plan to start using the file format #1933 at one point (though I'm not sure if it would improve performance)",
"Good to know about `query_table` not bringing anything into memory. I was under the impression that it did because a while back I looked at my `map` operation in pdb and it looked like it was spending forever in line 93 of formatting.py, `return pa.concat_tables(....)`, although that was before the `fast_slice` interpolation search was implemented, so it may have had more to do with the slow ChunkedArray slice implementation than anything else.\r\n\r\nIf `query_table` is I/O free then the fix may be as simple as just adding this to line 1779 of arrow_dataset.py:\r\n```python\r\n# Only load the columns we actually need\r\nif input_columns:\r\n stack.enter_context(self.formatted_as(\r\n self._format_type,\r\n columns=input_columns,\r\n output_all_columns=False,\r\n **self._format_kwargs\r\n ))\r\n```\r\nIt's not clear to me why the `[col for col in input_columns if self._format_columns is None or col in self._format_columns]` check would be necessary— it seems like either `input_columns` should simply temporarily override the `_format_columns` within the `map` operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within `map`, but maybe I'm just missing it.",
"`query_table` simply slices/concatenates parts of the table. The actual data inside the table is not brought in memory.\r\nAlso I'm more in favor of declaring `input_dataset = self.with_format(...)` since `formatted_as` may update the dataset fingerprint of `self`, which is not expected when someone runs `map`.\r\n\r\n> It's not clear to me why the [col for col in input_columns if self._format_columns is None or col in self._format_columns] check would be necessary— it seems like either input_columns should simply temporarily override the _format_columns within the map operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within map, but maybe I'm just missing it.\r\n\r\nActually yes we can just use input_columns. And we do need to add a check to make sure there are not conflicts or this could lead to confusing errors.",
"That sounds good to me! I just submitted a PR (#2246) implementing your approach. I also changed how `_query_table` handles Iterable keys since it still seemed like `pa.concat_tables` was taking a long time to create the table for each batch. Now my whole `map()` operation takes 1 min 46 seconds where it used to take somewhere on the order of 10 minutes."
] | "2021-04-08T18:16:14Z" | "2021-04-26T16:13:59Z" | "2021-04-26T16:13:59Z" | CONTRIBUTOR | null | null | null | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that— I'm not very familiar with the pyarrow API.
I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset.
PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2193/timeline | null | completed | false |
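A minimal, self-contained sketch of the single-column pattern recommended in the thread above; the toy data and the token threshold are invented for illustration.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a short article", "a much longer article " * 50]})

# Precompute the column we later want to filter and batch on.
ds = ds.map(
    lambda batch: {"num_tokens": [len(text.split()) for text in batch["text"]]},
    batched=True,
)

# `input_columns` hands the predicate only the "num_tokens" values, so the
# article text does not have to be materialized for every row being checked.
filtered = ds.filter(lambda num_tokens: num_tokens <= 100, input_columns="num_tokens")
print(filtered)
```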
https://api.github.com/repos/huggingface/datasets/issues/4413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4413/comments | https://api.github.com/repos/huggingface/datasets/issues/4413/events | https://github.com/huggingface/datasets/issues/4413 | 1,250,259,822 | I_kwDODunzps5KhXNu | 4,413 | Dataset Viewer issue for ett | {
"avatar_url": "https://avatars.githubusercontent.com/u/24966039?v=4",
"events_url": "https://api.github.com/users/dgcnz/events{/privacy}",
"followers_url": "https://api.github.com/users/dgcnz/followers",
"following_url": "https://api.github.com/users/dgcnz/following{/other_user}",
"gists_url": "https://api.github.com/users/dgcnz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dgcnz",
"id": 24966039,
"login": "dgcnz",
"node_id": "MDQ6VXNlcjI0OTY2MDM5",
"organizations_url": "https://api.github.com/users/dgcnz/orgs",
"received_events_url": "https://api.github.com/users/dgcnz/received_events",
"repos_url": "https://api.github.com/users/dgcnz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dgcnz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dgcnz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dgcnz"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
}
] | null | [
"Thanks for reporting @dgcnz.\r\n\r\nI have checked that the dataset works fine in streaming mode.\r\n\r\nAdditionally, other datasets containing timestamps are properly rendered by the viewer: https://huggingface.co/datasets/blbooks\r\n\r\nI have tried to force the refresh of the preview, but the endpoint is not responsive: Connection timed out\r\n\r\nCC: @severo ",
"I've just resent the refresh of the preview to the new endpoint, without success.\r\n\r\nCC: @severo ",
"Fixed!\r\n\r\nhttps://huggingface.co/datasets/ett/viewer/h1/test\r\n\r\n<img width=\"982\" alt=\"Capture d’écran 2022-06-15 à 09 30 22\" src=\"https://user-images.githubusercontent.com/1676121/173769035-a075d753-ecfc-4a43-b54b-973105d464d3.png\">\r\n"
] | "2022-05-27T02:12:35Z" | "2022-06-15T07:30:46Z" | "2022-06-15T07:30:46Z" | NONE | null | null | null | ### Link
https://huggingface.co/datasets/ett
### Description
Timestamp is not JSON serializable.
```
Status code: 500
Exception: Status500Error
Message: Type is not JSON serializable: Timestamp
```
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4413/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4413/timeline | null | completed | false |
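For context, the error above is the standard Python behavior when a pandas `Timestamp` reaches `json.dumps` unconverted; the snippet below is a toy illustration of the failure and the usual fix (it is not the viewer's actual code).

```python
import json

import pandas as pd

ts = pd.Timestamp("2016-07-01 00:00:00")

try:
    json.dumps({"start": ts})
except TypeError as err:
    print(err)  # Object of type Timestamp is not JSON serializable

# Converting to an ISO-8601 string (or epoch seconds) makes the value serializable.
print(json.dumps({"start": ts.isoformat()}))
```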
https://api.github.com/repos/huggingface/datasets/issues/5214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5214/comments | https://api.github.com/repos/huggingface/datasets/issues/5214/events | https://github.com/huggingface/datasets/pull/5214 | 1,440,334,978 | PR_kwDODunzps5CbmWE | 5,214 | Update github pr docs actions | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5214). All of your documentation changes will be reflected on that endpoint."
] | "2022-11-08T14:43:37Z" | "2022-11-08T15:39:58Z" | "2022-11-08T15:39:57Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5214.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5214",
"merged_at": "2022-11-08T15:39:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5214.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5214"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5214/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5310/comments | https://api.github.com/repos/huggingface/datasets/issues/5310/events | https://github.com/huggingface/datasets/pull/5310 | 1,467,719,635 | PR_kwDODunzps5D3rGw | 5,310 | Support xPath for Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-11-29T09:20:47Z" | "2022-11-30T12:00:09Z" | "2022-11-30T11:57:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5310.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5310",
"merged_at": "2022-11-30T11:57:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5310.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5310"
} | This PR implements a string representation of `xPath`, which is valid for local paths (also windows) and remote URLs.
Additionally, some `os.path` methods are fixed for remote URLs on Windows machines.
Now, on Windows machines:
```python
In [2]: str(xPath("C:\\dir\\file.txt"))
Out[2]: 'C:\\dir\\file.txt'
In [3]: str(xPath("http://domain.com/file.txt"))
Out[3]: 'http://domain.com/file.txt'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5310/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5310/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6230/comments | https://api.github.com/repos/huggingface/datasets/issues/6230/events | https://github.com/huggingface/datasets/pull/6230 | 1,890,521,006 | PR_kwDODunzps5aBh6L | 6,230 | Don't skip hidden files in `dl_manager.iter_files` when they are given as input | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005894 / 0.011353 (-0.005459) | 0.003621 / 0.011008 (-0.007387) | 0.080446 / 0.038508 (0.041938) | 0.056800 / 0.023109 (0.033691) | 0.326485 / 0.275898 (0.050587) | 0.376207 / 0.323480 (0.052727) | 0.004640 / 0.007986 (-0.003346) | 0.002795 / 0.004328 (-0.001533) | 0.062815 / 0.004250 (0.058565) | 0.045761 / 0.037052 (0.008709) | 0.341417 / 0.258489 (0.082928) | 0.373129 / 0.293841 (0.079288) | 0.027226 / 0.128546 (-0.101321) | 0.007873 / 0.075646 (-0.067774) | 0.261737 / 0.419271 (-0.157535) | 0.044648 / 0.043533 (0.001115) | 0.320195 / 0.255139 (0.065056) | 0.381892 / 0.283200 (0.098692) | 0.020431 / 0.141683 (-0.121252) | 1.405332 / 1.452155 (-0.046823) | 1.455592 / 1.492716 (-0.037125) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191539 / 0.018006 (0.173533) | 0.423655 / 0.000490 (0.423165) | 0.002741 / 0.000200 (0.002541) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023952 / 0.037411 (-0.013459) | 0.073387 / 0.014526 (0.058861) | 0.083746 / 0.176557 (-0.092810) | 0.144977 / 0.737135 (-0.592159) | 0.083808 / 0.296338 (-0.212530) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436228 / 0.215209 (0.221019) | 4.370510 / 2.077655 (2.292855) | 2.340426 / 1.504120 (0.836306) | 2.202215 / 1.541195 (0.661021) | 2.258528 / 1.468490 
(0.790037) | 0.503455 / 4.584777 (-4.081322) | 3.043695 / 3.745712 (-0.702017) | 2.784033 / 5.269862 (-2.485829) | 1.847956 / 4.565676 (-2.717721) | 0.057702 / 0.424275 (-0.366573) | 0.006703 / 0.007607 (-0.000904) | 0.510628 / 0.226044 (0.284583) | 5.101890 / 2.268929 (2.832961) | 2.816469 / 55.444624 (-52.628155) | 2.474220 / 6.876477 (-4.402257) | 2.617851 / 2.142072 (0.475779) | 0.593585 / 4.805227 (-4.211642) | 0.125895 / 6.500664 (-6.374769) | 0.062170 / 0.075469 (-0.013299) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238792 / 1.841788 (-0.602996) | 18.096417 / 8.074308 (10.022108) | 13.548778 / 10.191392 (3.357386) | 0.144878 / 0.680424 (-0.535546) | 0.016644 / 0.534201 (-0.517557) | 0.334556 / 0.579283 (-0.244728) | 0.343680 / 0.434364 (-0.090684) | 0.383093 / 0.540337 (-0.157244) | 0.525075 / 1.386936 (-0.861861) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006125 / 0.011353 (-0.005228) | 0.003668 / 0.011008 (-0.007340) | 0.062650 / 0.038508 (0.024142) | 0.058882 / 0.023109 (0.035772) | 0.454643 / 0.275898 (0.178745) | 0.486659 / 0.323480 (0.163179) | 0.005558 / 0.007986 (-0.002427) | 0.002858 / 0.004328 (-0.001471) | 0.062603 / 0.004250 (0.058353) | 0.049701 / 0.037052 (0.012649) | 0.455903 / 0.258489 (0.197413) | 0.491544 / 0.293841 (0.197703) | 0.028581 / 0.128546 (-0.099965) | 0.008040 / 0.075646 (-0.067607) | 0.068314 / 0.419271 (-0.350957) | 0.040637 / 0.043533 (-0.002896) | 0.450288 / 0.255139 (0.195149) | 0.476330 / 0.283200 (0.193131) | 0.018989 / 0.141683 (-0.122693) | 1.455122 / 1.452155 (0.002967) | 1.496941 / 1.492716 (0.004225) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227382 / 0.018006 (0.209376) | 0.432637 / 0.000490 (0.432147) | 0.002727 / 0.000200 (0.002527) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026125 / 0.037411 (-0.011286) | 0.081342 / 0.014526 (0.066817) | 0.091227 / 0.176557 (-0.085329) | 0.145175 / 0.737135 (-0.591960) | 0.091988 / 0.296338 (-0.204351) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454293 / 0.215209 (0.239083) | 4.537912 / 2.077655 (2.460257) | 2.489146 / 1.504120 (0.985026) | 2.307166 / 1.541195 (0.765971) | 2.380866 / 1.468490 (0.912376) | 0.509015 / 4.584777 (-4.075762) | 3.111069 / 3.745712 (-0.634644) | 2.839181 / 5.269862 (-2.430681) | 1.874630 / 4.565676 (-2.691047) | 0.058540 / 0.424275 (-0.365735) | 0.006693 / 0.007607 (-0.000914) | 0.528408 / 0.226044 (0.302363) | 5.285802 / 2.268929 (3.016874) | 2.952090 / 55.444624 (-52.492534) | 2.591496 / 6.876477 (-4.284980) | 2.741080 / 2.142072 (0.599007) | 0.595610 / 4.805227 (-4.209617) | 0.124387 / 6.500664 (-6.376277) | 0.061032 / 0.075469 (-0.014437) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.365816 / 1.841788 (-0.475972) | 18.684534 / 8.074308 (10.610226) | 14.540438 / 10.191392 (4.349046) | 0.146793 / 0.680424 (-0.533631) | 0.018165 / 0.534201 (-0.516036) | 0.333794 / 0.579283 (-0.245489) | 0.345533 / 0.434364 (-0.088830) | 0.384453 / 0.540337 (-0.155885) | 0.529104 / 1.386936 (-0.857832) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#6c884967dd5f4e8aa3d1f3c2e3a414ae53afe261 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006121 / 0.011353 (-0.005232) | 0.003683 / 0.011008 (-0.007325) | 0.083329 / 0.038508 (0.044821) | 0.063350 / 0.023109 (0.040241) | 0.329959 / 0.275898 (0.054061) | 0.396111 / 0.323480 (0.072631) | 0.003554 / 0.007986 (-0.004432) | 0.002907 / 0.004328 (-0.001421) | 0.064152 / 0.004250 (0.059902) | 0.049182 / 0.037052 (0.012130) | 0.343862 / 0.258489 (0.085373) | 0.414568 / 0.293841 (0.120727) | 0.027157 / 0.128546 (-0.101389) | 0.007957 / 0.075646 (-0.067689) | 0.261868 / 0.419271 (-0.157404) | 0.044938 / 0.043533 (0.001405) | 0.318470 / 0.255139 (0.063331) | 0.393319 / 0.283200 (0.110119) | 0.022848 / 0.141683 (-0.118835) | 1.419916 / 1.452155 (-0.032238) | 1.508783 / 1.492716 (0.016067) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200530 / 0.018006 (0.182523) | 0.433586 / 0.000490 (0.433097) | 0.002063 / 0.000200 (0.001863) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024803 / 0.037411 (-0.012609) | 0.075894 / 0.014526 (0.061368) | 0.086488 / 0.176557 (-0.090069) | 0.149058 / 0.737135 (-0.588077) | 0.087046 / 0.296338 (-0.209292) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390771 / 0.215209 (0.175562) | 3.886178 / 2.077655 (1.808523) | 1.868626 / 1.504120 (0.364506) | 1.708532 / 1.541195 (0.167338) | 1.788491 / 1.468490 
(0.320001) | 0.505706 / 4.584777 (-4.079071) | 3.062094 / 3.745712 (-0.683618) | 2.898559 / 5.269862 (-2.371302) | 1.901225 / 4.565676 (-2.664452) | 0.058366 / 0.424275 (-0.365909) | 0.006851 / 0.007607 (-0.000756) | 0.465382 / 0.226044 (0.239337) | 4.650187 / 2.268929 (2.381258) | 2.316152 / 55.444624 (-53.128472) | 1.989597 / 6.876477 (-4.886879) | 2.169266 / 2.142072 (0.027194) | 0.593257 / 4.805227 (-4.211970) | 0.126440 / 6.500664 (-6.374224) | 0.062227 / 0.075469 (-0.013242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283591 / 1.841788 (-0.558197) | 18.384667 / 8.074308 (10.310358) | 14.079611 / 10.191392 (3.888219) | 0.150453 / 0.680424 (-0.529971) | 0.017100 / 0.534201 (-0.517101) | 0.330503 / 0.579283 (-0.248780) | 0.348134 / 0.434364 (-0.086230) | 0.385726 / 0.540337 (-0.154612) | 0.529147 / 1.386936 (-0.857789) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006168 / 0.011353 (-0.005185) | 0.003801 / 0.011008 (-0.007208) | 0.063168 / 0.038508 (0.024660) | 0.062331 / 0.023109 (0.039221) | 0.448321 / 0.275898 (0.172423) | 0.484416 / 0.323480 (0.160937) | 0.004827 / 0.007986 (-0.003159) | 0.002848 / 0.004328 (-0.001480) | 0.062736 / 0.004250 (0.058486) | 0.049128 / 0.037052 (0.012075) | 0.449276 / 0.258489 (0.190787) | 0.499035 / 0.293841 (0.205194) | 0.028577 / 0.128546 (-0.099969) | 0.008114 / 0.075646 (-0.067532) | 0.068297 / 0.419271 (-0.350974) | 0.040835 / 0.043533 (-0.002698) | 0.453556 / 0.255139 (0.198417) | 0.475420 / 0.283200 (0.192220) | 0.020292 / 0.141683 (-0.121390) | 1.472226 / 1.452155 (0.020071) | 1.523809 / 1.492716 (0.031093) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230662 / 0.018006 (0.212655) | 0.439697 / 0.000490 (0.439207) | 0.009899 / 0.000200 (0.009699) | 0.000087 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026418 / 0.037411 (-0.010993) | 0.082188 / 0.014526 (0.067662) | 0.091039 / 0.176557 (-0.085518) | 0.146646 / 0.737135 (-0.590489) | 0.091693 / 0.296338 (-0.204645) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.462086 / 0.215209 (0.246877) | 4.620925 / 2.077655 (2.543271) | 2.539234 / 1.504120 (1.035114) | 2.371178 / 1.541195 (0.829983) | 2.440538 / 1.468490 (0.972048) | 0.511047 / 4.584777 (-4.073730) | 3.082088 / 3.745712 (-0.663624) | 2.918162 / 5.269862 (-2.351700) | 1.899651 / 4.565676 (-2.666025) | 0.059003 / 0.424275 (-0.365272) | 0.006746 / 0.007607 (-0.000861) | 0.537863 / 0.226044 (0.311819) | 5.382355 / 2.268929 (3.113426) | 3.060091 / 55.444624 (-52.384534) | 2.754969 / 6.876477 (-4.121507) | 2.863156 / 2.142072 (0.721084) | 0.606888 / 4.805227 (-4.198339) | 0.127448 / 6.500664 (-6.373216) | 0.062975 / 0.075469 (-0.012494) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336065 / 1.841788 (-0.505722) | 19.019902 / 8.074308 (10.945594) | 15.057979 / 10.191392 (4.866587) | 0.160646 / 0.680424 (-0.519778) | 0.018340 / 0.534201 (-0.515861) | 0.341664 / 0.579283 (-0.237619) | 0.356536 / 0.434364 (-0.077828) | 0.393974 / 0.540337 (-0.146363) | 0.546036 / 1.386936 (-0.840900) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fd04e445bd36d7eb4af4d5a6b8519ab8e306ecf5 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007220 / 0.011353 (-0.004132) | 0.004537 / 0.011008 (-0.006471) | 0.087333 / 0.038508 (0.048825) | 0.095637 / 0.023109 (0.072528) | 0.323819 / 0.275898 (0.047921) | 0.358838 / 0.323480 (0.035358) | 0.005910 / 0.007986 (-0.002076) | 0.003781 / 0.004328 (-0.000548) | 0.064565 / 0.004250 (0.060315) | 0.062818 / 0.037052 (0.025766) | 0.322595 / 0.258489 (0.064106) | 0.371865 / 0.293841 (0.078024) | 0.031667 / 0.128546 (-0.096880) | 0.009068 / 0.075646 (-0.066579) | 0.290574 / 0.419271 (-0.128697) | 0.054618 / 0.043533 (0.011085) | 0.314708 / 0.255139 (0.059569) | 0.336647 / 0.283200 (0.053447) | 0.027070 / 0.141683 (-0.114613) | 1.500640 / 1.452155 (0.048485) | 1.586775 / 1.492716 (0.094059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.294461 / 0.018006 (0.276455) | 0.580125 / 0.000490 (0.579635) | 0.008165 / 0.000200 (0.007965) | 0.000320 / 0.000054 (0.000266) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032352 / 0.037411 (-0.005059) | 0.092187 / 0.014526 (0.077661) | 0.104993 / 0.176557 (-0.071564) | 0.162738 / 0.737135 (-0.574397) | 0.103242 / 0.296338 (-0.193096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396732 / 0.215209 (0.181523) | 3.955049 / 2.077655 (1.877394) | 1.876762 / 1.504120 (0.372642) | 1.698477 / 1.541195 (0.157282) | 1.847086 / 1.468490 
(0.378596) | 0.488306 / 4.584777 (-4.096471) | 3.658922 / 3.745712 (-0.086790) | 3.559050 / 5.269862 (-1.710812) | 2.187363 / 4.565676 (-2.378313) | 0.059795 / 0.424275 (-0.364480) | 0.008966 / 0.007607 (0.001359) | 0.474212 / 0.226044 (0.248168) | 4.732540 / 2.268929 (2.463611) | 2.466370 / 55.444624 (-52.978254) | 2.112105 / 6.876477 (-4.764372) | 2.414624 / 2.142072 (0.272552) | 0.595447 / 4.805227 (-4.209780) | 0.136705 / 6.500664 (-6.363959) | 0.062267 / 0.075469 (-0.013202) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266518 / 1.841788 (-0.575270) | 21.009975 / 8.074308 (12.935666) | 14.823960 / 10.191392 (4.632568) | 0.165630 / 0.680424 (-0.514793) | 0.018499 / 0.534201 (-0.515702) | 0.396720 / 0.579283 (-0.182563) | 0.424807 / 0.434364 (-0.009557) | 0.463326 / 0.540337 (-0.077011) | 0.653132 / 1.386936 (-0.733804) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007789 / 0.011353 (-0.003564) | 0.004720 / 0.011008 (-0.006288) | 0.066656 / 0.038508 (0.028148) | 0.094219 / 0.023109 (0.071109) | 0.414965 / 0.275898 (0.139067) | 0.454808 / 0.323480 (0.131328) | 0.006088 / 0.007986 (-0.001898) | 0.003980 / 0.004328 (-0.000349) | 0.066048 / 0.004250 (0.061797) | 0.065875 / 0.037052 (0.028823) | 0.419994 / 0.258489 (0.161505) | 0.462001 / 0.293841 (0.168160) | 0.033534 / 0.128546 (-0.095013) | 0.009010 / 0.075646 (-0.066636) | 0.072778 / 0.419271 (-0.346493) | 0.049834 / 0.043533 (0.006301) | 0.411003 / 0.255139 (0.155864) | 0.430918 / 0.283200 (0.147718) | 0.025664 / 0.141683 (-0.116019) | 1.526771 / 1.452155 (0.074616) | 1.634767 / 1.492716 (0.142051) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271180 / 0.018006 (0.253174) | 0.576704 / 0.000490 (0.576214) | 0.004362 / 0.000200 (0.004162) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035648 / 0.037411 (-0.001763) | 0.102407 / 0.014526 (0.087881) | 0.111613 / 0.176557 (-0.064944) | 0.166173 / 0.737135 (-0.570962) | 0.113371 / 0.296338 (-0.182967) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436031 / 0.215209 (0.220822) | 4.347071 / 2.077655 (2.269416) | 2.366937 / 1.504120 (0.862817) | 2.216356 / 1.541195 (0.675161) | 2.335933 / 1.468490 (0.867443) | 0.490484 / 4.584777 (-4.094293) | 3.730656 / 3.745712 (-0.015056) | 3.497248 / 5.269862 (-1.772613) | 2.215729 / 4.565676 (-2.349947) | 0.057905 / 0.424275 (-0.366370) | 0.007983 / 0.007607 (0.000376) | 0.510413 / 0.226044 (0.284369) | 5.114502 / 2.268929 (2.845574) | 2.871599 / 55.444624 (-52.573026) | 2.537514 / 6.876477 (-4.338962) | 2.819135 / 2.142072 (0.677063) | 0.588397 / 4.805227 (-4.216830) | 0.134665 / 6.500664 (-6.365999) | 0.063349 / 0.075469 (-0.012120) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352962 / 1.841788 (-0.488826) | 21.628664 / 8.074308 (13.554356) | 15.962105 / 10.191392 (5.770713) | 0.167781 / 0.680424 (-0.512643) | 0.020965 / 0.534201 (-0.513236) | 0.402809 / 0.579283 (-0.176474) | 0.435153 / 0.434364 (0.000789) | 0.481394 / 0.540337 (-0.058944) | 0.658068 / 1.386936 (-0.728868) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#12adf38b90fde8e2a4e46fcbb023ee23b5c4e98c \"CML watermark\")\n"
] | "2023-09-11T13:29:19Z" | "2023-09-13T18:21:28Z" | "2023-09-13T18:12:09Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6230.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6230",
"merged_at": "2023-09-13T18:12:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6230.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6230"
} | Required for `load_dataset(<format>, data_files=["path/to/.hidden_file"])` to work as expected | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6230/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6230/timeline | null | null | true |
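A hedged sketch of the usage this PR enables; the file name and CSV contents are invented for illustration.

```python
from pathlib import Path

from datasets import load_dataset

# A "hidden" file (leading dot) that is passed to load_dataset explicitly.
Path(".hidden_file.csv").write_text("text,label\nhello,0\nworld,1\n")

# With this change, an explicitly listed hidden file is loaded rather than
# being silently dropped by the hidden-path filter in dl_manager.iter_files.
ds = load_dataset("csv", data_files=[".hidden_file.csv"], split="train")
print(ds[0])
```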