Schema of the preview (column name, type, and value statistics as reported by the viewer); the records below list each row's fields in this order:

| Column | Type | Stats |
|---|---|---|
| id | int64 | 599M – 2.47B |
| url | string | lengths 58 – 61 |
| repository_url | string | 1 value |
| events_url | string | lengths 65 – 68 |
| labels | list | lengths 0 – 4 |
| active_lock_reason | null | – |
| updated_at | string | lengths 20 – 20 |
| assignees | list | lengths 0 – 4 |
| html_url | string | lengths 46 – 51 |
| author_association | string | 4 values |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| milestone | dict | – |
| comments | list | lengths 0 – 30 |
| title | string | lengths 1 – 290 |
| reactions | dict | – |
| node_id | string | lengths 18 – 32 |
| pull_request | dict | – |
| created_at | string | lengths 20 – 20 |
| comments_url | string | lengths 67 – 70 |
| body | string | lengths 0 – 228k |
| user | dict | – |
| labels_url | string | lengths 72 – 75 |
| timeline_url | string | lengths 67 – 70 |
| state | string | 2 values |
| locked | bool | 1 class |
| number | int64 | 1 – 7.11k |
| performed_via_github_app | null | – |
| closed_at | string | lengths 20 – 20 |
| assignee | dict | – |
| is_pull_request | bool | 2 classes |
1,294,475,931
https://api.github.com/repos/huggingface/datasets/issues/4635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4635/events
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
null
2022-07-06T07:13:33Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/4635
NONE
completed
null
null
[ "Thanks for reporting, @e-tornike \r\n\r\nSome context:\r\n- #4527 \r\n\r\nThe dataset loads locally in streaming mode:\r\n```python\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"vadis/sv-ident\", split=\"validation\", streaming=True); item = next(iter(ds)); item\r\nUsing custom data configuration default\r\nOut[2]: \r\n{'sentence': 'Im Falle von Umweltbelastungen kann selten eindeutig entschieden werden, ob Unbedenklichkeitswerte bereits erreicht oder überschritten sind, die die menschliche Gesundheit oder andere Wohlfahrts»güter« beeinträchtigen.',\r\n 'is_variable': 0,\r\n 'variable': [],\r\n 'research_data': [],\r\n 'doc_id': '51971',\r\n 'uuid': 'ee3d7f88-1a3e-4a59-997f-e986b544a604',\r\n 'lang': 'de'}\r\n```", "~~I have forced the refresh of the split in the preview without success.~~\r\n\r\nI have forced the refresh of the split in the preview, and now it works.", "Preview seems to work now. \r\n\r\nhttps://huggingface.co/datasets/vadis/sv-ident/viewer/default/validation", "OK, thank you @e-tornike.\r\n\r\nApparently, after forcing the refresh, we just had to wait a little until it is effectively refreshed. ", "I'm closing this issue as it was solved after forcing the refresh of the split in the preview.", "Thanks a lot! :)" ]
Dataset Viewer issue for vadis/sv-ident
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4635/reactions" }
I_kwDODunzps5NKCKb
null
2022-07-05T15:48:13Z
https://api.github.com/repos/huggingface/datasets/issues/4635/comments
### Link

https://huggingface.co/datasets/vadis/sv-ident/viewer/default/validation

### Description

Error message when loading validation split in the viewer:

```
Status code: 400
Exception: Status400Error
Message: The split cache is empty.
```

### Owner

_No response_
{ "avatar_url": "https://avatars.githubusercontent.com/u/20404466?v=4", "events_url": "https://api.github.com/users/e-tornike/events{/privacy}", "followers_url": "https://api.github.com/users/e-tornike/followers", "following_url": "https://api.github.com/users/e-tornike/following{/other_user}", "gists_url": "https://api.github.com/users/e-tornike/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/e-tornike", "id": 20404466, "login": "e-tornike", "node_id": "MDQ6VXNlcjIwNDA0NDY2", "organizations_url": "https://api.github.com/users/e-tornike/orgs", "received_events_url": "https://api.github.com/users/e-tornike/received_events", "repos_url": "https://api.github.com/users/e-tornike/repos", "site_admin": false, "starred_url": "https://api.github.com/users/e-tornike/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-tornike/subscriptions", "type": "User", "url": "https://api.github.com/users/e-tornike" }
https://api.github.com/repos/huggingface/datasets/issues/4635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4635/timeline
closed
false
4,635
null
2022-07-06T07:12:14Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,294,405,251
https://api.github.com/repos/huggingface/datasets/issues/4634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4634/events
[]
null
2022-09-13T14:07:32Z
[]
https://github.com/huggingface/datasets/issues/4634
NONE
completed
null
null
[ "Could you provide the error details. It is difficult to debug otherwise. Also try other config. `ha` is not a valid." ]
Can't load the Hausa audio dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4634/reactions" }
I_kwDODunzps5NJw6D
null
2022-07-05T14:47:36Z
https://api.github.com/repos/huggingface/datasets/issues/4634/comments
```python
from datasets import load_dataset

common_voice_train = load_dataset("common_voice", "ha", split="train+validation")
```
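To act on the suggestion in the comment above, the available configs can be listed before loading; a minimal sketch using the public `datasets` API (the dataset and config names are taken from the report):

```python
from datasets import get_dataset_config_names, load_dataset

# List the language configs that the common_voice script actually exposes.
configs = get_dataset_config_names("common_voice")
print("ha" in configs)

# Only attempt the Hausa splits if the config exists.
if "ha" in configs:
    common_voice_train = load_dataset("common_voice", "ha", split="train+validation")
```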
{ "avatar_url": "https://avatars.githubusercontent.com/u/19976800?v=4", "events_url": "https://api.github.com/users/moro23/events{/privacy}", "followers_url": "https://api.github.com/users/moro23/followers", "following_url": "https://api.github.com/users/moro23/following{/other_user}", "gists_url": "https://api.github.com/users/moro23/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/moro23", "id": 19976800, "login": "moro23", "node_id": "MDQ6VXNlcjE5OTc2ODAw", "organizations_url": "https://api.github.com/users/moro23/orgs", "received_events_url": "https://api.github.com/users/moro23/received_events", "repos_url": "https://api.github.com/users/moro23/repos", "site_admin": false, "starred_url": "https://api.github.com/users/moro23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moro23/subscriptions", "type": "User", "url": "https://api.github.com/users/moro23" }
https://api.github.com/repos/huggingface/datasets/issues/4634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4634/timeline
closed
false
4,634
null
2022-09-13T14:07:32Z
null
false
1,294,367,783
https://api.github.com/repos/huggingface/datasets/issues/4633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4633/events
[]
null
2022-07-18T13:20:29Z
[]
https://github.com/huggingface/datasets/pull/4633
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I ran a script to find affected datasets (just did it on non-private non-gated). Adding \"testing\" and \"evaluation\" fixes all of of them except one:\r\n- projecte-aina/cat_manynames:\thuman_annotated_testset.tsv\r\n\r\nLet me open a PR on their repository to fix it\r\nEDIT: pr [here](https://huggingface.co/datasets/projecte-aina/cat_manynames/discussions/2)", "Feel free to merge @albertvillanova if it's all good to you :)", "Thanks for the feedback @albertvillanova I took your comments into account :)\r\n- added numbers as supported delimiters\r\n- used list comprehension to create the patterns list\r\n- updated the docs and the tests according to your comments\r\n\r\nLet me know what you think !", "I ended up removing the patching and the context manager :) merging" ]
[data_files] Only match separated split names
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4633/reactions" }
PR_kwDODunzps462_qX
{ "diff_url": "https://github.com/huggingface/datasets/pull/4633.diff", "html_url": "https://github.com/huggingface/datasets/pull/4633", "merged_at": "2022-07-18T13:07:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/4633.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4633" }
2022-07-05T14:18:11Z
https://api.github.com/repos/huggingface/datasets/issues/4633/comments
As reported in https://github.com/huggingface/datasets/issues/4477, the current pattern matching to infer which file goes into which split is too permissive. For example a file "contest.py" would be considered part of a test split (it contains "test") and "seqeval.py" as well (it contains "eval"). In this PR I made the pattern matching more robust by only matching split names **between separators**. The supported separators are dots, dashes, spaces and underscores. I updated the docs accordingly. One detail about the tests: I had to update one test because it was using `PurePath.match` as a reference for globbing, but it doesn't support the `[..]` glob pattern. Therefore I added a `mock_fs` context manager that can be used to easily define a dummy filesystem with certain files in it and run pattern matching tests. Its code comes mostly from test_streaming_download_manager.py Close https://github.com/huggingface/datasets/issues/4477
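To make the separator rule concrete, here is a minimal sketch of the matching idea (illustrative only, not the PR's actual implementation, which also supports digits as boundaries):

```python
import re

SEPARATOR = r"[._ \-]"  # dots, underscores, spaces, dashes

def mentions_split(filename: str, split: str) -> bool:
    # The split name must be bounded by a separator or the string edge,
    # so "contest.py" no longer counts as a "test" file.
    pattern = rf"(?:^|{SEPARATOR}){split}(?:{SEPARATOR}|$)"
    return re.search(pattern, filename) is not None

assert mentions_split("my_test_data.csv", "test")
assert not mentions_split("contest.py", "test")
assert not mentions_split("seqeval.py", "eval")
```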
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4633/timeline
closed
false
4,633
null
2022-07-18T13:07:33Z
null
true
1,294,166,880
https://api.github.com/repos/huggingface/datasets/issues/4632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4632/events
[]
null
2023-07-25T15:04:27Z
[]
https://github.com/huggingface/datasets/issues/4632
NONE
completed
null
null
[ "Hi ! `ds.sort()` does sort the full dataset, not just one column:\r\n```python\r\nfrom datasets import *\r\n\r\nds = Dataset.from_dict({\"foo\": [3, 2, 1], \"bar\": [\"c\", \"b\", \"a\"]})\r\nprint(d.sort(\"foo\").to_pandas()\r\n# foo bar\r\n# 0 1 a\r\n# 1 2 b\r\n# 2 3 c\r\n```\r\n\r\nWhat made you think it was not the case ? Did you experience a situation where it was only sorting one column ?", "Hi! thank you for your quick reply!\r\nI wanted to sort the `cnn_dailymail` dataset by the length of the labels (num of characters). I added a new column to the dataset (`ds.add_column`) with the lengths and then sorted by this new column. Only the new length column was sorted, the reset left in their original order. ", "That's unexpected, can you share the code you used to get this ?" ]
'sort' method sorts one column only
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4632/reactions" }
I_kwDODunzps5NI2tg
null
2022-07-05T11:25:26Z
https://api.github.com/repos/huggingface/datasets/issues/4632/comments
The 'sort' method changes the order of one column only (the one defined by the argument 'column'), thus creating a mismatch between a sample's fields. I would expect it to change the order of the samples as a whole, based on the 'column' order.
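The workflow described in the comments (adding a length column and then sorting on it) can be checked end to end; a small sketch with made-up data:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["aaa", "a", "aa"]})

# Add a length column and sort on it; all columns should be reordered together.
ds = ds.add_column("length", [len(t) for t in ds["text"]])
ds = ds.sort("length")

print(ds["length"])  # [1, 2, 3]
print(ds["text"])    # ['a', 'aa', 'aaa'] -- rows moved as a whole
```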
{ "avatar_url": "https://avatars.githubusercontent.com/u/42108562?v=4", "events_url": "https://api.github.com/users/shachardon/events{/privacy}", "followers_url": "https://api.github.com/users/shachardon/followers", "following_url": "https://api.github.com/users/shachardon/following{/other_user}", "gists_url": "https://api.github.com/users/shachardon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shachardon", "id": 42108562, "login": "shachardon", "node_id": "MDQ6VXNlcjQyMTA4NTYy", "organizations_url": "https://api.github.com/users/shachardon/orgs", "received_events_url": "https://api.github.com/users/shachardon/received_events", "repos_url": "https://api.github.com/users/shachardon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shachardon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shachardon/subscriptions", "type": "User", "url": "https://api.github.com/users/shachardon" }
https://api.github.com/repos/huggingface/datasets/issues/4632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4632/timeline
closed
false
4,632
null
2023-07-25T15:04:27Z
null
false
1,293,545,900
https://api.github.com/repos/huggingface/datasets/issues/4631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4631/events
[]
null
2022-07-07T13:23:32Z
[]
https://github.com/huggingface/datasets/pull/4631
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Update WinoBias README
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4631/reactions" }
PR_kwDODunzps460Vy0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4631.diff", "html_url": "https://github.com/huggingface/datasets/pull/4631", "merged_at": "2022-07-07T13:11:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/4631.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4631" }
2022-07-04T20:24:40Z
https://api.github.com/repos/huggingface/datasets/issues/4631/comments
I'm adding some information about Winobias that I got from the paper :smile: I think this makes it a bit clearer!
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
https://api.github.com/repos/huggingface/datasets/issues/4631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4631/timeline
closed
false
4,631
null
2022-07-07T13:11:47Z
null
true
1,293,470,728
https://api.github.com/repos/huggingface/datasets/issues/4630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4630/events
[]
null
2022-07-05T15:19:52Z
[]
https://github.com/huggingface/datasets/pull/4630
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
fix(dataset_wrappers): Fixes access to fsspec.asyn in torch_iterable_dataset.py.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4630/reactions" }
PR_kwDODunzps460HFM
{ "diff_url": "https://github.com/huggingface/datasets/pull/4630.diff", "html_url": "https://github.com/huggingface/datasets/pull/4630", "merged_at": "2022-07-05T15:08:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/4630.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4630" }
2022-07-04T18:26:55Z
https://api.github.com/repos/huggingface/datasets/issues/4630/comments
Fix #4612. Apparently, the newest `fsspec` versions do not allow attribute-based access to submodules that have not been explicitly imported, such as `fsspec.asyn`. Thus, @mariosasko suggested adding the missing submodule to the imports to allow access to it.
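The failure mode is easy to reproduce in isolation; a sketch of the difference the added import makes (behavior depends on the installed `fsspec` version):

```python
import fsspec
# On recent fsspec versions the submodule is not imported implicitly,
# so attribute access can fail here:
#   fsspec.asyn  ->  AttributeError

import fsspec.asyn  # the explicit import registers the submodule

print(fsspec.asyn)  # now reachable as an attribute of the package
```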
{ "avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4", "events_url": "https://api.github.com/users/gugarosa/events{/privacy}", "followers_url": "https://api.github.com/users/gugarosa/followers", "following_url": "https://api.github.com/users/gugarosa/following{/other_user}", "gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gugarosa", "id": 4120639, "login": "gugarosa", "node_id": "MDQ6VXNlcjQxMjA2Mzk=", "organizations_url": "https://api.github.com/users/gugarosa/orgs", "received_events_url": "https://api.github.com/users/gugarosa/received_events", "repos_url": "https://api.github.com/users/gugarosa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions", "type": "User", "url": "https://api.github.com/users/gugarosa" }
https://api.github.com/repos/huggingface/datasets/issues/4630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4630/timeline
closed
false
4,630
null
2022-07-05T15:08:21Z
null
true
1,293,418,800
https://api.github.com/repos/huggingface/datasets/issues/4629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4629/events
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
null
2022-07-06T15:49:57Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/4629
MEMBER
completed
null
null
[]
Rename repo default branch to main
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4629/reactions" }
I_kwDODunzps5NGAEw
null
2022-07-04T17:16:10Z
https://api.github.com/repos/huggingface/datasets/issues/4629/comments
Rename repository default branch to `main` (instead of current `master`).

Once renamed, users will have to manually update their local repos:

- [ ] Upstream:
  ```
  git branch -m master main
  git fetch upstream main
  git branch -u upstream/main main
  git remote set-head upstream -a
  ```
- [ ] Origin: Rename fork default branch as well at: https://github.com/USERNAME/lam/settings/branches

  Then:
  ```
  git fetch origin main
  git remote set-head origin -a
  ```

CC: @sgugger
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4629/timeline
closed
false
4,629
null
2022-07-06T15:49:57Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
1,293,361,308
https://api.github.com/repos/huggingface/datasets/issues/4628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4628/events
[]
null
2022-07-07T14:08:38Z
[]
https://github.com/huggingface/datasets/pull/4628
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Fix time type `_arrow_to_datasets_dtype` conversion
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4628/reactions" }
PR_kwDODunzps46zvFJ
{ "diff_url": "https://github.com/huggingface/datasets/pull/4628.diff", "html_url": "https://github.com/huggingface/datasets/pull/4628", "merged_at": "2022-07-07T13:57:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/4628.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4628" }
2022-07-04T16:20:15Z
https://api.github.com/repos/huggingface/datasets/issues/4628/comments
Fix #4620 The issue stems from the fact that `pa.array([time_data]).type` returns `DataType(time64[unit])`, which doesn't expose the `unit` attribute, instead of `Time64Type(time64[unit])`. I believe this is a bug in PyArrow. Luckily, both types have the same `str()`, so in this PR I call `pa.type_for_alias(str(type))` to convert them both to the `Time64Type(time64[unit])` format. cc @severo
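The mismatch can be reproduced directly with PyArrow; a minimal sketch of the conversion trick described above (observed with the PyArrow version from the linked issue; newer releases may already return `Time64Type`):

```python
from datetime import time

import pyarrow as pa

inferred = pa.array([time(1, 1, 1)]).type
print(type(inferred).__name__)  # may be the bare DataType, which lacks .unit

# Round-tripping through the string alias yields the richer type:
fixed = pa.type_for_alias(str(inferred))
print(type(fixed).__name__, fixed.unit)  # Time64Type us
```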
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/4628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4628/timeline
closed
false
4,628
null
2022-07-07T13:57:12Z
null
true
1,293,287,798
https://api.github.com/repos/huggingface/datasets/issues/4627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4627/events
[]
null
2022-07-07T12:41:09Z
[]
https://github.com/huggingface/datasets/pull/4627
CONTRIBUTOR
null
false
null
[ "Great, can open a PR in `evaluate` as well to optimize this.\r\n\r\nRelatedly, I wanted to add a new metric, Kendall Tau (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.kendalltau.html). If I were to open a PR with the wrapper, description, citation, docstrings, readme, etc. would it make more sense to do that in the `datasets` or `evaluate` repo (or both)?\r\n\r\nThanks!", "PR opened in`evaluate` library with same minor adjustment: https://github.com/huggingface/evaluate/pull/176 ", "> If I were to open a PR with the wrapper, description, citation, docstrings, readme, etc. would it make more sense to do that in the datasets or evaluate repo (or both)?\r\n\r\nI think you could just add it to `evaluate`, we're not adding new metrics in this repo anymore" ]
fixed duplicate calculation of spearmanr function in metrics wrapper.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4627/reactions" }
PR_kwDODunzps46zfNa
{ "diff_url": "https://github.com/huggingface/datasets/pull/4627.diff", "html_url": "https://github.com/huggingface/datasets/pull/4627", "merged_at": "2022-07-07T12:41:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/4627.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4627" }
2022-07-04T15:02:01Z
https://api.github.com/repos/huggingface/datasets/issues/4627/comments
During _compute, the scipy.stats spearmanr function was called twice, redundantly, once for calculating the score and once for calculating the p-value, under the conditional branch where return_pvalue=True. I adjusted the _compute function to execute the spearmanr function once, store the results tuple in a temporary variable, and then pass the indexed contents to the expected keys of the returned dictionary.
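In scipy terms, the change amounts to something like the following sketch (variable names are illustrative, not the wrapper's actual ones):

```python
from scipy.stats import spearmanr

references = [1, 2, 3, 4, 5]
predictions = [10, 9, 2.5, 6, 4]

# One call instead of two: spearmanr returns a (correlation, pvalue) tuple.
results = spearmanr(references, predictions)
output = {"spearmanr": results[0], "spearmanr_pvalue": results[1]}
print(output)
```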
{ "avatar_url": "https://avatars.githubusercontent.com/u/38060297?v=4", "events_url": "https://api.github.com/users/benlipkin/events{/privacy}", "followers_url": "https://api.github.com/users/benlipkin/followers", "following_url": "https://api.github.com/users/benlipkin/following{/other_user}", "gists_url": "https://api.github.com/users/benlipkin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/benlipkin", "id": 38060297, "login": "benlipkin", "node_id": "MDQ6VXNlcjM4MDYwMjk3", "organizations_url": "https://api.github.com/users/benlipkin/orgs", "received_events_url": "https://api.github.com/users/benlipkin/received_events", "repos_url": "https://api.github.com/users/benlipkin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/benlipkin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benlipkin/subscriptions", "type": "User", "url": "https://api.github.com/users/benlipkin" }
https://api.github.com/repos/huggingface/datasets/issues/4627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4627/timeline
closed
false
4,627
null
2022-07-07T12:41:09Z
null
true
1,293,256,269
https://api.github.com/repos/huggingface/datasets/issues/4626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4626/events
[]
null
2022-07-08T14:27:29Z
[]
https://github.com/huggingface/datasets/issues/4626
MEMBER
null
null
null
[ "yep plus `license_details` also makes sense for this IMO" ]
Add non-commercial licensing info for datasets for which we removed tags
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4626/reactions" }
I_kwDODunzps5NFYZN
null
2022-07-04T14:32:43Z
https://api.github.com/repos/huggingface/datasets/issues/4626/comments
We removed several YAML tags saying that certain datasets can't be used for commercial purposes: https://github.com/huggingface/datasets/pull/4613#discussion_r911919753 Reason for this is that we only allow tags that are part of our [supported list of licenses](https://github.com/huggingface/datasets/blob/84fc3ad73c85de4eda5d152dfede7671491449cb/src/datasets/utils/resources/standard_licenses.tsv) We should update the Licensing Information section of the concerned dataset cards, now that the non-commercial tag doesn't exist anymore for certain datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4626/timeline
open
false
4,626
null
null
null
false
1,293,163,744
https://api.github.com/repos/huggingface/datasets/issues/4625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4625/events
[]
null
2022-07-05T11:11:54Z
[]
https://github.com/huggingface/datasets/pull/4625
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Cool thanks ! Yup it sounds like the right solution.\r\n\r\nIt looks like `_generate_tables` needs to be updated as well to fix the CI" ]
Unpack `dl_manager.iter_files` to allow parallization
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4625/reactions" }
PR_kwDODunzps46zELz
{ "diff_url": "https://github.com/huggingface/datasets/pull/4625.diff", "html_url": "https://github.com/huggingface/datasets/pull/4625", "merged_at": "2022-07-05T11:00:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/4625.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4625" }
2022-07-04T13:16:58Z
https://api.github.com/repos/huggingface/datasets/issues/4625/comments
Iterate over data files outside `dl_manager.iter_files` to allow parallelization in streaming mode. (The issue reported [here](https://discuss.huggingface.co/t/dataset-only-have-n-shard-1-when-has-multiple-shards-in-repo/19887)) PS: Another option would be to override `FilesIterable.__getitem__` to make it indexable and check for that type in `_shard_kwargs` and `n_shards`, but IMO this solution adds too much unnecessary complexity.
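The underlying rule is that streaming can only shard `gen_kwargs` values that are plain lists; a simplified, self-contained sketch of the pattern (`iter_files` here is a stand-in for the download manager's method, not the real API):

```python
files = ["part-0.jsonl", "part-1.jsonl", "part-2.jsonl"]

def iter_files(paths):
    # Stand-in for dl_manager.iter_files: yields file paths one by one.
    yield from paths

# A single opaque iterable cannot be split across workers -> 1 shard:
single_shard_kwargs = {"files": iter_files(files)}

# A list with one entry per file can be partitioned -> up to 3 shards:
shardable_kwargs = {"files": [iter_files([f]) for f in files]}
print(len(shardable_kwargs["files"]))  # 3
```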
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/4625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4625/timeline
closed
false
4,625
null
2022-07-05T11:00:48Z
null
true
1,293,085,058
https://api.github.com/repos/huggingface/datasets/issues/4624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4624/events
[]
null
2023-09-24T10:05:19Z
[]
https://github.com/huggingface/datasets/pull/4624
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there.\r\n\r\n@lhoestq maybe it's better to accept it on the Hub side then? Let me know if you want us to do it Hub-side", "Yup it's maybe better to support it on the Hub side then indeed, thanks ! Closing this one" ]
Remove all paperswithcode_id: null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4624/reactions" }
PR_kwDODunzps46yzOK
{ "diff_url": "https://github.com/huggingface/datasets/pull/4624.diff", "html_url": "https://github.com/huggingface/datasets/pull/4624", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4624.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4624" }
2022-07-04T12:11:32Z
https://api.github.com/repos/huggingface/datasets/issues/4624/comments
On the Hub there is a validation error on the `paperswithcode_id` tag when the value is `null`: <img width="686" alt="image" src="https://user-images.githubusercontent.com/42851186/177151825-93d341c5-25bd-41ab-96c2-c0b516d51c68.png"> We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there. To have the validation working again we can simply remove all the `paperswithcode_id: null`. cc @julien-c
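Mechanically, the cleanup is a one-line removal across all dataset cards; a hedged sketch of such a script (the `datasets/*/README.md` layout is an assumption about the repo structure):

```python
import re
from pathlib import Path

# Drop the "paperswithcode_id: null" line from every dataset card's YAML header.
for card in Path("datasets").glob("*/README.md"):
    text = card.read_text(encoding="utf-8")
    cleaned = re.sub(r"^paperswithcode_id:\s*null\s*\n", "", text, flags=re.MULTILINE)
    if cleaned != text:
        card.write_text(cleaned, encoding="utf-8")
```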
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4624/timeline
closed
false
4,624
null
2022-07-04T13:10:38Z
null
true
1,293,042,894
https://api.github.com/repos/huggingface/datasets/issues/4623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4623/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-07-04T14:40:50Z
[]
https://github.com/huggingface/datasets/issues/4623
NONE
null
null
null
[ "Hi ! We haven't implemented the conversion from images data to PyTorch tensors yet I think\r\n\r\ncc @mariosasko ", "So I understand:\r\n\r\nset_format() does not properly do the conversion to pytorch tensors from PIL images.\r\n\r\nSo that someone who stumbles on this can use the package:\r\n\r\n```python\r\ndataset = load_dataset(\"mnist\", split=\"train\")\r\ndef transform_func(examples):\r\n examples[\"image\"] = [np.array(img) for img in examples[\"image\"]]\r\n return examples\r\ndataset = dataset.with_transform(transform_func)\r\ndataset[0]\r\n``` ", "This then appears to work with pytorch dataloaders as:\r\n```\r\ndataloader=torch.utils.data.DataLoader(dataset,batch_size=1)\r\n```\r\n\r\nand tensorflow as:\r\n```\r\ndataset=dataset.to_tf_dataset(batch_size=1)\r\n```", "Hi! `set_transform`/`with_transform` is indeed the correct solution for the conversion. Improving this part of the API is one of the things I'm working on currently, so stay tuned!" ]
Loading MNIST as Pytorch Dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4623/reactions" }
I_kwDODunzps5NEkTO
null
2022-07-04T11:33:10Z
https://api.github.com/repos/huggingface/datasets/issues/4623/comments
## Describe the bug

Conversion of the MNIST dataset to PyTorch fails.

## Steps to reproduce the bug

```python
from datasets import load_dataset

dataset = load_dataset("mnist", split="train")
dataset.set_format('torch')
dataset[0]
print()
```

## Expected results

Expect to see torch tensors for image and label.

## Actual results

```
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\pydevd.py", line 1491, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Users/chapm/PycharmProjects/multiviewdata/multiviewdata/huggingface/mnist.py", line 13, in <module>
    dataset[0]
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2154, in __getitem__
    return self._getitem(
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2139, in _getitem
    formatted_output = format_table(
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 532, in format_table
    return formatter(pa_table, query_type=query_type)
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 281, in __call__
    return self.format_row(pa_table)
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 58, in format_row
    return self.recursive_tensorize(row)
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 54, in recursive_tensorize
    return map_nested(self._recursive_tensorize, data_struct, map_list=False)
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 356, in map_nested
    mapped = [
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 357, in <listcomp>
    _single_map_nested((function, obj, types, None, True, None))
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in _single_map_nested
    return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in <dictcomp>
    return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 293, in _single_map_nested
    return function(data_struct)
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 51, in _recursive_tensorize
    return self._tensorize(data_struct)
  File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 38, in _tensorize
    if np.issubdtype(value.dtype, np.integer):
AttributeError: 'bytes' object has no attribute 'dtype'
python-BaseException
```

## Environment info

- `datasets` version: 2.3.2
- Platform: Windows-10-10.0.22579-SP0
- Python version: 3.9.2
- PyArrow version: 8.0.0
- Pandas version: 1.4.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/56592797?v=4", "events_url": "https://api.github.com/users/jameschapman19/events{/privacy}", "followers_url": "https://api.github.com/users/jameschapman19/followers", "following_url": "https://api.github.com/users/jameschapman19/following{/other_user}", "gists_url": "https://api.github.com/users/jameschapman19/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jameschapman19", "id": 56592797, "login": "jameschapman19", "node_id": "MDQ6VXNlcjU2NTkyNzk3", "organizations_url": "https://api.github.com/users/jameschapman19/orgs", "received_events_url": "https://api.github.com/users/jameschapman19/received_events", "repos_url": "https://api.github.com/users/jameschapman19/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jameschapman19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jameschapman19/subscriptions", "type": "User", "url": "https://api.github.com/users/jameschapman19" }
https://api.github.com/repos/huggingface/datasets/issues/4623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4623/timeline
open
false
4,623
null
null
null
false
1,293,031,939
https://api.github.com/repos/huggingface/datasets/issues/4622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4622/events
[]
null
2022-07-15T14:37:23Z
[]
https://github.com/huggingface/datasets/pull/4622
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq @mariosasko pls take a look at https://github.com/huggingface/datasets/pull/4622/commits/769e4c046a5bd5e3a4dbd09cfad1f4cf60677869. I modified `_generate_examples()` according to the same logic too: removed checking if `metadata_files` are not empty for the case when `self.config.drop_metadata=True` because I think we should be aligned with the config and preserve labels if `self.config.drop_labels=False` (the default value) and `self.config.drop_metadata=True` but `metadata_files` are passed. This is an extremely unlikely use case (when `self.config.drop_metadata=True`, but `metadata_files` are passed to `_generate_examples()`) since users usually do not use `_generate_examples()` alone but I believe it would be consistent to have the same behavior as in `_splits_generators()`. This change requires change in tests too if we suppose that we want to preserve labels (default value of `self.config.drop_labels` is False) when `self.config.drop_metadata=True`, even if `metadata_files` are for some reason provided (as it is done in tests). \r\n\r\nwdyt about this change?\r\n", "@lhoestq it wouldn't raise an error if we check `example.keys() == {\"image\", \"label\"}` as test checks only `_generate_examples`, not `encode_example`. and in the implementation of this PR `_generate_examples` would return both `image` and `label` key in the case when `drop_metadata=True` and `drop_labels=False` (default) as it seems that we agreed on that :)", "and on the other hand it would raise an error if `label` column is missing in _generate_examples when `drop_metadata=True` and `drop_labels=False`\r\n\r\nby \"it\" i mean tests :D (`test_generate_examples_with_metadata_that_misses_one_image`, `test_generate_examples_with_metadata_in_wrong_location` and `test_generate_examples_drop_metadata`)", "Perhaps we could make `self.config.drop_metadata = None` and `self.config.drop_labels = None` the defaults to see explicitly what the user wants. This would then turn into `self.config.drop_metadata = False` and `self.config.drop_labels = True` if metadata files are present and `self.config.drop_metadata = True` and `self.config.drop_labels = False` if not. And if the user wants to have the `label` column alongside metadata columns, it can do so by passing `drop_labels = False` explicitely (in that scenario we have to check that the `label` column is not already present in metadata files). And maybe we can also improve the logging messages.\r\n\r\nI find it problematic that the current implementation drops labels in some scenarios even if `self.config.drop_labels = False`, and the user doesn't have control over this behavior.\r\n\r\nLet me know what you think." ]
Fix ImageFolder with parameters drop_metadata=True and drop_labels=False (when metadata.jsonl is present)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4622/reactions" }
PR_kwDODunzps46ynmT
{ "diff_url": "https://github.com/huggingface/datasets/pull/4622.diff", "html_url": "https://github.com/huggingface/datasets/pull/4622", "merged_at": "2022-07-15T14:24:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/4622.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4622" }
2022-07-04T11:23:20Z
https://api.github.com/repos/huggingface/datasets/issues/4622/comments
Will fix #4621. ImageFolder raises `KeyError: 'label'` with params `drop_metadata=True` and `drop_labels=False` (if there is at least one metadata.jsonl file in a data directory). This happens because metadata files are collected inside the `analyze()` function regardless of the `drop_metadata` value, and then the following condition doesn't pass: https://github.com/huggingface/datasets/blob/master/src/datasets/packaged_modules/imagefolder/imagefolder.py#L167 So I suggest double-checking inside `analyze()` so that metadata files are not collected if they are not needed (and labels too, to be consistent). --- Also, I added a test to check that labels are inferred correctly from directory names in general (because we didn't have one) :)
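The fix described above amounts to gating collection on the config flags; a simplified sketch of the idea (not the actual `imagefolder` source):

```python
import os

def analyze(files, drop_metadata, drop_labels):
    # Collect only what the config asks for, so a stray metadata.jsonl
    # cannot re-enable the metadata code path.
    metadata_files, labels = [], set()
    for path in files:
        if os.path.basename(path) == "metadata.jsonl":
            if not drop_metadata:
                metadata_files.append(path)
        elif not drop_labels:
            labels.add(os.path.basename(os.path.dirname(path)))
    return metadata_files, sorted(labels)

print(analyze(["train/cat/1.png", "train/dog/2.png", "train/metadata.jsonl"],
              drop_metadata=True, drop_labels=False))
# ([], ['cat', 'dog']) -- metadata ignored, labels still inferred
```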
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
https://api.github.com/repos/huggingface/datasets/issues/4622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4622/timeline
closed
false
4,622
null
2022-07-15T14:24:24Z
null
true
1,293,030,128
https://api.github.com/repos/huggingface/datasets/issues/4621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4621/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-07-15T14:24:24Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" } ]
https://github.com/huggingface/datasets/issues/4621
CONTRIBUTOR
completed
null
null
[]
ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4621/reactions" }
I_kwDODunzps5NEhLw
null
2022-07-04T11:21:44Z
https://api.github.com/repos/huggingface/datasets/issues/4621/comments
## Describe the bug

If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case, but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir`, or to pass features manually (when there is a tool that can infer them automatically), doesn't look like a good idea to me either.

## Steps to reproduce the bug

### Clone an example dataset from the Hub

```bash
git clone https://huggingface.co/datasets/nateraw/test-imagefolder-metadata
```

### Try to load it

```python
from datasets import load_dataset

ds = load_dataset("test-imagefolder-metadata", drop_metadata=True, drop_labels=False)
```

or even just

```python
ds = load_dataset("test-imagefolder-metadata", drop_metadata=True)
```

as `drop_labels=False` is a default value.

## Expected results

A DatasetDict object with two features: `"image"` and `"label"`.

## Actual results

```
Traceback (most recent call last):
  File "/home/polina/workspace/datasets/debug.py", line 18, in <module>
    ds = load_dataset(
  File "/home/polina/workspace/datasets/src/datasets/load.py", line 1732, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/polina/workspace/datasets/src/datasets/builder.py", line 704, in download_and_prepare
    self._download_and_prepare(
  File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1227, in _download_and_prepare
    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
  File "/home/polina/workspace/datasets/src/datasets/builder.py", line 793, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1218, in _prepare_split
    example = self.info.features.encode_example(record)
  File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1596, in encode_example
    return encode_nested_example(self, example)
  File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in encode_nested_example
    {
  File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in <dictcomp>
    {
  File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in zip_dict
    yield key, tuple(d[key] for d in dicts)
  File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in <genexpr>
    yield key, tuple(d[key] for d in dicts)
KeyError: 'label'
```

## Environment info

`datasets` master branch

- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.14.0-1042-oem-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
https://api.github.com/repos/huggingface/datasets/issues/4621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4621/timeline
closed
false
4,621
null
2022-07-15T14:24:24Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
false
1,292,797,878
https://api.github.com/repos/huggingface/datasets/issues/4620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4620/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-07-07T13:57:11Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
https://github.com/huggingface/datasets/issues/4620
CONTRIBUTOR
completed
null
null
[ "cc @mariosasko ", "Hi, thanks for reporting! I'm investigating the issue." ]
Data type is not recognized when using datetime.time
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4620/reactions" }
I_kwDODunzps5NDoe2
null
2022-07-04T08:13:38Z
https://api.github.com/repos/huggingface/datasets/issues/4620/comments
## Describe the bug Creating a dataset from a pandas dataframe with `datetime.time` format generates an error. ## Steps to reproduce the bug ```python import pandas as pd from datetime import time from datasets import Dataset df = pd.DataFrame({"feature_name": [time(1, 1, 1)]}) dataset = Dataset.from_pandas(df) ``` ## Expected results The dataset should be created. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 823, in from_pandas return cls(table, info=info, split=split) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 679, in __init__ inferred_features = Features.from_arrow_schema(arrow_table.schema) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1551, in from_arrow_schema obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1551, in <dictcomp> obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1315, in generate_from_arrow_type return Value(dtype=_arrow_to_datasets_dtype(pa_type)) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 83, in _arrow_to_datasets_dtype return f"time64[{arrow_type.unit}]" AttributeError: 'pyarrow.lib.DataType' object has no attribute 'unit' ``` ## Environment info - `datasets` version: 2.3.3.dev0 - Platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
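Until the `time64` handling is fixed, one hedged workaround (a sketch, not an official recommendation) is to cast the `datetime.time` column to strings before calling `from_pandas`, so pyarrow infers a supported type:

```python
import pandas as pd
from datetime import time
from datasets import Dataset

df = pd.DataFrame({"feature_name": [time(1, 1, 1)]})
# store the times as strings so pyarrow infers a supported (string) type
df["feature_name"] = df["feature_name"].astype(str)  # time(1, 1, 1) -> "01:01:01"
dataset = Dataset.from_pandas(df)
```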
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
https://api.github.com/repos/huggingface/datasets/issues/4620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4620/timeline
closed
false
4,620
null
2022-07-07T13:57:11Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
false
1,292,107,275
https://api.github.com/repos/huggingface/datasets/issues/4619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4619/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-07-03T20:27:07Z
[]
https://github.com/huggingface/datasets/issues/4619
NONE
null
null
null
[ "If you add the line `dataset2.set_format('np')` before calling `dataset2[0]['tmp']` it should return `np.ndarray`.\r\nI believe internally it will not store it as a list, it is only returning a list when you index it.\r\n\r\n```\r\nIn [1]: import datasets, numpy as np\r\nIn [2]: dataset = datasets.load_dataset(\"glue\", \"mrpc\")[\"validation\"]\r\nIn [3]: dataset2 = dataset.map(lambda x: {\"tmp\": np.array([0.5])}, batched=False)\r\nIn [4]: dataset2[0][\"tmp\"]\r\nOut[4]: [0.5]\r\n\r\nIn [5]: dataset2.set_format('np')\r\n\r\nIn [6]: dataset2[0][\"tmp\"]\r\nOut[6]: array([0.5])\r\n```", "I see, thanks! Any idea if the default numpy → list conversion might cause precision loss?", "I'm not super familiar with our datasets works internally, but I think your `np` array will be stored in a `pyarrow` format, and then you take a view of this as a python array. In which case, I think the precision should be preserved." ]
np arrays get turned into native lists
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4619/reactions" }
I_kwDODunzps5NA_4L
null
2022-07-02T17:54:57Z
https://api.github.com/repos/huggingface/datasets/issues/4619/comments
## Describe the bug When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen? ## Steps to reproduce the bug ```python >>> import datasets, numpy as np >>> dataset = datasets.load_dataset("glue", "mrpc")["validation"] Reusing dataset glue (...) 100%|███████████████████████████████████████████████| 3/3 [00:00<00:00, 1360.61it/s] >>> dataset2 = dataset.map(lambda x: {"tmp": np.array([0.5])}, batched=False) 100%|██████████████████████████████████████████| 408/408 [00:00<00:00, 10819.97ex/s] >>> dataset2[0]["tmp"] [0.5] >>> type(dataset2[0]["tmp"]) <class 'list'> ``` ## Expected results `dataset2[0]["tmp"]` should be an `np.ndarray`. ## Actual results It's a list. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: mac, though I'm pretty sure it happens on a linux machine too - Python version: 3.9.7 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZhaofengWu", "id": 11954789, "login": "ZhaofengWu", "node_id": "MDQ6VXNlcjExOTU0Nzg5", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "type": "User", "url": "https://api.github.com/users/ZhaofengWu" }
https://api.github.com/repos/huggingface/datasets/issues/4619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4619/timeline
open
false
4,619
null
null
null
false
1,292,078,225
https://api.github.com/repos/huggingface/datasets/issues/4618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4618/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2022-07-21T14:10:44Z
[]
https://github.com/huggingface/datasets/issues/4618
NONE
null
null
null
[ "Hi! The `imagefolder` script is already quite complex, so a standalone script sounds better. Also, I suggest we create an org on the Hub (e.g. `hf-loaders`) and store such scripts there for easier maintenance rather than having them as packaged modules (IMO only very generic loaders should be packaged). WDYT @lhoestq @albertvillanova @polinaeterna?", "@mariosasko sounds good to me!\r\n", "Thank you for the suggestion @mariosasko . I agree with the point, but I have a few doubts\r\n\r\n1. How would the user access the script if it's not a part of the core codebase?\r\n2. Could you direct me as to what will be the tasks I have to do to contribute to the code? As per my understanding, it would be like\r\n 1. Create a new org \"hf-loaders\" and add you (and more HF people) to the org\r\n 2. Add data loader script as a (model?)\r\n 3. Test it with a dataset on HF hub\r\n3. We should maybe brainstorm as to which public datasets have this format (YOLO type) and are the most important ones to test the script with. We can even add the datasets on HF Hub alongside the script", "1. Like this: `load_dataset(\"hf-loaders/yolo\", data_files=...)`\r\n2. The steps would be:\r\n 1. Create a new org `hf-community-loaders` (IMO a better name than \"hf-loaders\") and add me (as an admin)\r\n 2. Create a new dataset repo `yolo` and add the loading script to it (`yolo.py`)\r\n 3. Open a discussion to request our review\r\n4. I like this idea. Another option is to add snippets that describe how to load such datasets using the `yolo` loader." ]
contribute data loading for object detection datasets with yolo data format
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4618/reactions" }
I_kwDODunzps5NA4yR
null
2022-07-02T15:21:59Z
https://api.github.com/repos/huggingface/datasets/issues/4618/comments
**Is your feature request related to a problem? Please describe.** At the moment, HF datasets loads [image classification datasets](https://huggingface.co/docs/datasets/image_process) out-of-the-box. There could be a data loader for loading standard object detection datasets ([original discussion here](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/2)) **Describe the solution you'd like** I wrote a [custom script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py) to load dataset which has YOLO data format. **Describe alternatives you've considered** The script can either be a standalone dataset builder, or a modified version of `ImageFolder` **Additional context** I would be happy to contribute to this, but I would do it at a very slow pace (maybe a month or two) as I have my exams approaching 😄
{ "avatar_url": "https://avatars.githubusercontent.com/u/8406903?v=4", "events_url": "https://api.github.com/users/faizankshaikh/events{/privacy}", "followers_url": "https://api.github.com/users/faizankshaikh/followers", "following_url": "https://api.github.com/users/faizankshaikh/following{/other_user}", "gists_url": "https://api.github.com/users/faizankshaikh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/faizankshaikh", "id": 8406903, "login": "faizankshaikh", "node_id": "MDQ6VXNlcjg0MDY5MDM=", "organizations_url": "https://api.github.com/users/faizankshaikh/orgs", "received_events_url": "https://api.github.com/users/faizankshaikh/received_events", "repos_url": "https://api.github.com/users/faizankshaikh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/faizankshaikh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/faizankshaikh/subscriptions", "type": "User", "url": "https://api.github.com/users/faizankshaikh" }
https://api.github.com/repos/huggingface/datasets/issues/4618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4618/timeline
open
false
4,618
null
null
null
false
1,291,307,428
https://api.github.com/repos/huggingface/datasets/issues/4615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4615/events
[]
null
2022-07-08T12:13:10Z
[]
https://github.com/huggingface/datasets/pull/4615
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Fix `embed_storage` on features inside lists/sequences
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4615/reactions" }
PR_kwDODunzps46tADt
{ "diff_url": "https://github.com/huggingface/datasets/pull/4615.diff", "html_url": "https://github.com/huggingface/datasets/pull/4615", "merged_at": "2022-07-08T12:01:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/4615.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4615" }
2022-07-01T11:52:08Z
https://api.github.com/repos/huggingface/datasets/issues/4615/comments
Add a dedicated function for embed_storage to always preserve the embedded/casted arrays (and to have more control over `embed_storage` in general). Fix #4591 ~~(Waiting for #4608 to be merged to mark this PR as ready for review - required for fixing `xgetsize` in private repos)~~ Done!
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/4615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4615/timeline
closed
false
4,615
null
2022-07-08T12:01:36Z
null
true
1,291,218,020
https://api.github.com/repos/huggingface/datasets/issues/4614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4614/events
[]
null
2022-07-19T13:48:45Z
[]
https://github.com/huggingface/datasets/pull/4614
CONTRIBUTOR
null
false
null
[ "Hi @lhoestq, Thanks for the detailed comment. I've tested the suggested approach and can confirm it works for the testcase outlined above! The PR is updated with the changes.", "_The documentation is not available anymore as the PR was closed or merged._" ]
Ensure ConcatenationTable.cast uses target_schema metadata
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4614/reactions" }
PR_kwDODunzps46ssfw
{ "diff_url": "https://github.com/huggingface/datasets/pull/4614.diff", "html_url": "https://github.com/huggingface/datasets/pull/4614", "merged_at": "2022-07-19T13:36:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/4614.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4614" }
2022-07-01T10:22:08Z
https://api.github.com/repos/huggingface/datasets/issues/4614/comments
Currently, `ConcatenationTable.cast` does not use target_schema metadata when casting subtables. This causes an issue when using cast_column and the underlying table is a ConcatenationTable. Code example of where the issue arises: ``` from datasets import Dataset, Image column1 = [0, 1] image_paths = ['/images/image1.jpg', '/images/image2.jpg'] ds = Dataset.from_dict({"column1": column1}) ds = ds.add_column("image", image_paths) ds.cast_column("image", Image()) # Fails here ``` Output ``` ... TypeError: Couldn't cast array of type string to {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)} ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8114067?v=4", "events_url": "https://api.github.com/users/dtuit/events{/privacy}", "followers_url": "https://api.github.com/users/dtuit/followers", "following_url": "https://api.github.com/users/dtuit/following{/other_user}", "gists_url": "https://api.github.com/users/dtuit/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dtuit", "id": 8114067, "login": "dtuit", "node_id": "MDQ6VXNlcjgxMTQwNjc=", "organizations_url": "https://api.github.com/users/dtuit/orgs", "received_events_url": "https://api.github.com/users/dtuit/received_events", "repos_url": "https://api.github.com/users/dtuit/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dtuit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dtuit/subscriptions", "type": "User", "url": "https://api.github.com/users/dtuit" }
https://api.github.com/repos/huggingface/datasets/issues/4614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4614/timeline
closed
false
4,614
null
2022-07-19T13:36:24Z
null
true
1,291,181,193
https://api.github.com/repos/huggingface/datasets/issues/4613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4613/events
[]
null
2022-07-01T12:53:57Z
[]
https://github.com/huggingface/datasets/pull/4613
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you thank you! Let's merge and pray? 😱 ", "I just need to add `license_details` to the validator and yup we can merge" ]
Align/fix license metadata info
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4613/reactions" }
PR_kwDODunzps46skd6
{ "diff_url": "https://github.com/huggingface/datasets/pull/4613.diff", "html_url": "https://github.com/huggingface/datasets/pull/4613", "merged_at": "2022-07-01T12:42:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/4613.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4613" }
2022-07-01T09:50:50Z
https://api.github.com/repos/huggingface/datasets/issues/4613/comments
fix bad "other-*" licenses and add the corresponding "license_details" when relevant
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
https://api.github.com/repos/huggingface/datasets/issues/4613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4613/timeline
closed
false
4,613
null
2022-07-01T12:42:47Z
null
true
1,290,984,660
https://api.github.com/repos/huggingface/datasets/issues/4612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4612/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-07-05T15:08:21Z
[]
https://github.com/huggingface/datasets/issues/4612
NONE
completed
null
null
[ "Apparently, `fsspec` does not allow access to attribute-based modules anymore, such as `fsspec.async`.\r\n\r\nHowever, this is a fairly simple fix:\r\n- Change the import to: `from fsspec import asyn`;\r\n- Change line 18 to: `asyn.iothread[0] = None`;\r\n- Change line 19 to `asyn.loop[0] = None`.", "Hi! I think it's easier to replace `import fsspec` with `import fsspec.asyn` and leave the rest unchanged. @gugarosa Are you interested in submitting a PR?", "Perfect, it is even better!\r\n\r\nJust submitted the PR: #4630.\r\n\r\nThank you!" ]
Release 2.3.0 broke custom iterable datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4612/reactions" }
I_kwDODunzps5M8tzU
null
2022-07-01T06:46:07Z
https://api.github.com/repos/huggingface/datasets/issues/4612/comments
## Describe the bug Trying to iterate examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` since the release of 2.3.0. ## Steps to reproduce the bug ```python next(iter(custom_iterable_dataset)) ``` ## Expected results `next(iter(custom_iterable_dataset))` should return examples from the dataset ## Actual results ``` /usr/local/lib/python3.7/dist-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py in _set_fsspec_for_multiprocess() 16 See https://github.com/fsspec/gcsfs/issues/379 17 """ ---> 18 fsspec.asyn.iothread[0] = None 19 fsspec.asyn.loop[0] = None 20 AttributeError: module 'fsspec' has no attribute 'asyn' ``` ## Environment info - `datasets` version: 2.3.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
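A sketch of the fix proposed in the comments above, assuming only the import in `torch_iterable_dataset.py` needs to change (the function body is taken from the traceback):

```python
import fsspec.asyn  # import the submodule explicitly instead of a bare `import fsspec`


def _set_fsspec_for_multiprocess() -> None:
    """Clear the fsspec IO thread and event loop so each worker recreates its own.

    See https://github.com/fsspec/gcsfs/issues/379
    """
    fsspec.asyn.iothread[0] = None
    fsspec.asyn.loop[0] = None
```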
{ "avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4", "events_url": "https://api.github.com/users/aapot/events{/privacy}", "followers_url": "https://api.github.com/users/aapot/followers", "following_url": "https://api.github.com/users/aapot/following{/other_user}", "gists_url": "https://api.github.com/users/aapot/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aapot", "id": 19529125, "login": "aapot", "node_id": "MDQ6VXNlcjE5NTI5MTI1", "organizations_url": "https://api.github.com/users/aapot/orgs", "received_events_url": "https://api.github.com/users/aapot/received_events", "repos_url": "https://api.github.com/users/aapot/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aapot/subscriptions", "type": "User", "url": "https://api.github.com/users/aapot" }
https://api.github.com/repos/huggingface/datasets/issues/4612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4612/timeline
closed
false
4,612
null
2022-07-05T15:08:21Z
null
false
1,290,940,874
https://api.github.com/repos/huggingface/datasets/issues/4611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4611/events
[]
null
2022-07-01T16:59:11Z
[]
https://github.com/huggingface/datasets/pull/4611
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Preserve member order by MockDownloadManager.iter_archive
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4611/reactions" }
PR_kwDODunzps46rxIX
{ "diff_url": "https://github.com/huggingface/datasets/pull/4611.diff", "html_url": "https://github.com/huggingface/datasets/pull/4611", "merged_at": "2022-07-01T16:48:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/4611.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4611" }
2022-07-01T05:48:20Z
https://api.github.com/repos/huggingface/datasets/issues/4611/comments
Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob("*")`, which might not be the same order as in the original archive. See issue in: - https://github.com/huggingface/datasets/pull/4579#issuecomment-1172135027 This PR fixes the order of the members yielded by `MockDownloadManager.iter_archive` so that it is the same as in the original archive.
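To illustrate the ordering mismatch described above, here is a hedged sketch contrasting the two traversals (the zip format is an assumption for illustration):

```python
import zipfile
from pathlib import Path


def iter_members_fs_order(extracted_dir: str):
    # Path.rglob yields entries in filesystem order, which may differ from the archive
    for path in Path(extracted_dir).rglob("*"):
        if path.is_file():
            yield path


def iter_members_archive_order(archive_path: str):
    # reading names from the archive itself preserves the original member order
    with zipfile.ZipFile(archive_path) as zf:
        yield from zf.namelist()
```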
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4611/timeline
closed
false
4,611
null
2022-07-01T16:48:28Z
null
true
1,290,603,827
https://api.github.com/repos/huggingface/datasets/issues/4610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4610/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-07-05T14:24:13Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/4610
NONE
completed
null
null
[ "I believe the issue is in `codeparrot/github-code`. `base_path` param is missing - https://huggingface.co/datasets/codeparrot/github-code/blob/main/github-code.py#L169\r\n\r\nFunction definition has changed.\r\nhttps://github.com/huggingface/datasets/blob/0e1c629cfb9f9ba124537ba294a0ec451584da5f/src/datasets/data_files.py#L547\r\n\r\n@mariosasko could you please confirm my finding? And are there any changes that need to be done from my side?", "Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it", "> Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it\n\nI can't wait for that releasee. Broke my application", "This simple workaround should fix: https://huggingface.co/datasets/codeparrot/github-code/discussions/2\r\n\r\n`get_patterns_in_dataset_repository` can treat whether `base_path=None`, so we just need to make sure that codeparrot/github-code `_split_generators` calls with such an argument.", "I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ? \r\n@lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?", "Actually I think it's just simpler to fix it in the dataset itself, let me open a PR\r\n\r\nEDIT: PR opened here: https://huggingface.co/datasets/codeparrot/github-code/discussions/3", "PR is merged, it's working now ! Closing this one :)", "> I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ?\r\n> @lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?\r\n\r\nYou are definitely right, sorry about it. I always keep forgetting that we need to keep in mind users from past versions, my bad." ]
codeparrot/github-code failing to load
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4610/reactions" }
I_kwDODunzps5M7Q0z
null
2022-06-30T20:24:48Z
https://api.github.com/repos/huggingface/datasets/issues/4610/comments
## Describe the bug codeparrot/github-code fails to load with a `TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'` ## Steps to reproduce the bug ```python from datasets import load_dataset ``` ## Expected results loaded dataset object ## Actual results ```python [3]: dataset = load_dataset("codeparrot/github-code") No config specified, defaulting to: github-code/all-all Downloading and preparing dataset github-code/all-all to /home/bebr/.cache/huggingface/datasets/codeparrot___github-code/all-all/0.0.0/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817... --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [3], in <cell line: 1>() ----> 1 dataset = load_dataset("codeparrot/github-code") File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1678 # Download and prepare data -> 1679 builder_instance.download_and_prepare( 1680 download_config=download_config, 1681 download_mode=download_mode, 1682 ignore_verifications=ignore_verifications, 1683 try_from_hf_gcs=try_from_hf_gcs, 1684 use_auth_token=use_auth_token, 1685 ) 1687 # Build dataset for splits 1688 keep_in_memory = ( 1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1690 ) File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 702 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 703 if not downloaded_from_gcs: --> 704 self._download_and_prepare( 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 706 ) 707 # Sync info 708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:1221, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos) 1220 def _download_and_prepare(self, dl_manager, verify_infos): -> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 769 split_dict = SplitDict(dataset_name=self.name) 770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 773 # Checksums verification 774 if verify_infos and dl_manager.record_checksums: File ~/.cache/huggingface/modules/datasets_modules/datasets/codeparrot--github-code/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817/github-code.py:169, in GithubCode._split_generators(self, dl_manager) 162 def _split_generators(self, dl_manager): 164 hfh_dataset_info = HfApi(datasets.config.HF_ENDPOINT).dataset_info( 165 _REPO_NAME, 166 timeout=100.0, 167 ) --> 169 patterns = datasets.data_files.get_patterns_in_dataset_repository(hfh_dataset_info) 170 data_files = datasets.data_files.DataFilesDict.from_hf_repo( 171 patterns, 172 dataset_info=hfh_dataset_info, 173 ) 175 files = dl_manager.download_and_extract(data_files["train"]) TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path' ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.18.7-arch1-1-x86_64-with-glibc2.35 - Python version: 3.10.5 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
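A hedged sketch of the workaround discussed in the comments: have the dataset script pass `base_path` explicitly (that `None` is an acceptable value is taken from the discussion, not verified here):

```python
# hypothetical patch to github-code.py's _split_generators, per the discussion above
patterns = datasets.data_files.get_patterns_in_dataset_repository(hfh_dataset_info, base_path=None)
data_files = datasets.data_files.DataFilesDict.from_hf_repo(patterns, dataset_info=hfh_dataset_info)
```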
{ "avatar_url": "https://avatars.githubusercontent.com/u/29863388?v=4", "events_url": "https://api.github.com/users/PyDataBlog/events{/privacy}", "followers_url": "https://api.github.com/users/PyDataBlog/followers", "following_url": "https://api.github.com/users/PyDataBlog/following{/other_user}", "gists_url": "https://api.github.com/users/PyDataBlog/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PyDataBlog", "id": 29863388, "login": "PyDataBlog", "node_id": "MDQ6VXNlcjI5ODYzMzg4", "organizations_url": "https://api.github.com/users/PyDataBlog/orgs", "received_events_url": "https://api.github.com/users/PyDataBlog/received_events", "repos_url": "https://api.github.com/users/PyDataBlog/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PyDataBlog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PyDataBlog/subscriptions", "type": "User", "url": "https://api.github.com/users/PyDataBlog" }
https://api.github.com/repos/huggingface/datasets/issues/4610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4610/timeline
closed
false
4,610
null
2022-07-05T09:19:56Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
1,290,392,083
https://api.github.com/repos/huggingface/datasets/issues/4609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4609/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-07-12T21:44:32Z
[]
https://github.com/huggingface/datasets/issues/4609
NONE
completed
null
null
[ "Hi! You can use streaming to fetch only a subset of the data:\r\n```python\r\nraw_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"train.100\", streaming=True)\r\n```\r\nAlso, we plan to make it possible to download a particular split in the non-streaming mode, but this task is not easy due to how our dataset scripts are structured.", "Hi,\r\n\r\nThat's a great help. Thank you very much." ]
librispeech dataset has to download the whole subset when specifying the split to use
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4609/reactions" }
I_kwDODunzps5M6dIT
null
2022-06-30T16:38:24Z
https://api.github.com/repos/huggingface/datasets/issues/4609/comments
## Describe the bug librispeech dataset has to download the whole subset when specifying the split to use ## Steps to reproduce the bug see below # Sample code to reproduce the bug ``` !pip install datasets from datasets import load_dataset raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100") ``` ## Expected results The split "train.clean.100" is downloaded. ## Actual results All four splits in the "clean" subset are downloaded. ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/73462159?v=4", "events_url": "https://api.github.com/users/sunhaozhepy/events{/privacy}", "followers_url": "https://api.github.com/users/sunhaozhepy/followers", "following_url": "https://api.github.com/users/sunhaozhepy/following{/other_user}", "gists_url": "https://api.github.com/users/sunhaozhepy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sunhaozhepy", "id": 73462159, "login": "sunhaozhepy", "node_id": "MDQ6VXNlcjczNDYyMTU5", "organizations_url": "https://api.github.com/users/sunhaozhepy/orgs", "received_events_url": "https://api.github.com/users/sunhaozhepy/received_events", "repos_url": "https://api.github.com/users/sunhaozhepy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sunhaozhepy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sunhaozhepy/subscriptions", "type": "User", "url": "https://api.github.com/users/sunhaozhepy" }
https://api.github.com/repos/huggingface/datasets/issues/4609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4609/timeline
closed
false
4,609
null
2022-07-12T21:44:32Z
null
false
1,290,298,002
https://api.github.com/repos/huggingface/datasets/issues/4608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4608/events
[]
null
2022-07-06T12:45:59Z
[]
https://github.com/huggingface/datasets/pull/4608
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Added tests for xisfile, xgetsize, xlistdir and xglob for private repos, and also tests for xwalk that was untested" ]
Fix xisfile, xgetsize, xisdir, xlistdir in private repo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4608/reactions" }
PR_kwDODunzps46pm9A
{ "diff_url": "https://github.com/huggingface/datasets/pull/4608.diff", "html_url": "https://github.com/huggingface/datasets/pull/4608", "merged_at": "2022-07-06T12:34:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/4608.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4608" }
2022-06-30T15:23:21Z
https://api.github.com/repos/huggingface/datasets/issues/4608/comments
`xisfile` is working in a private repository when passing a chained URL to a file inside an archive, e.g. `zip://a.txt::https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`. However it's not working when passing a simple file `https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`. This is because the authentication headers are not passed correctly in this case. This is causing dataset streaming to fail in private parquet repositories, as noted in https://github.com/huggingface/datasets/issues/4605 I fixed `xisfile` and the other functions that behave the same way: xgetsize, xisdir and xlistdir TODO: - [x] tests fix https://github.com/huggingface/datasets/issues/4605
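A hedged sketch of the underlying idea (not the actual `datasets` implementation): forward the authentication headers to fsspec's HTTP filesystem for plain Hub URLs too, not only for chained URLs. The function name and structure here are hypothetical:

```python
from typing import Optional

import fsspec


def xisfile_sketch(url: str, token: Optional[str] = None) -> bool:
    # illustration only: pass the Hub token as an aiohttp session header so
    # fsspec's HTTPFileSystem can resolve files in private repositories
    storage_options = (
        {"client_kwargs": {"headers": {"authorization": f"Bearer {token}"}}} if token else {}
    )
    fs, _, paths = fsspec.get_fs_token_paths(url, storage_options=storage_options)
    return fs.isfile(paths[0])
```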
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4608/timeline
closed
false
4,608
null
2022-07-06T12:34:19Z
null
true
1,290,171,941
https://api.github.com/repos/huggingface/datasets/issues/4607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4607/events
[]
null
2022-07-01T12:00:37Z
[]
https://github.com/huggingface/datasets/pull/4607
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I just set a default value (None) for the deprecated licenses and languages fields, which should fix most of the CI failures.\r\n\r\nNote that the CI should still be red because you edited many dataset cards and they're still missing some content - but this is unrelated to this PR so we can ignore these failures", "thanks so much @lhoestq !!", "There's also a follow-up PR to this one, in #4613 – I would suggest to merge all of them at the same time and hope not too many things are broken 🙀 🙀 ", "Alright merging this one now, let's see how broken things get" ]
Align more metadata with other repo types (models,spaces)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4607/reactions" }
PR_kwDODunzps46pLnd
{ "diff_url": "https://github.com/huggingface/datasets/pull/4607.diff", "html_url": "https://github.com/huggingface/datasets/pull/4607", "merged_at": "2022-07-01T11:49:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/4607.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4607" }
2022-06-30T13:52:12Z
https://api.github.com/repos/huggingface/datasets/issues/4607/comments
see also associated PR on the `datasets-tagging` Space: https://huggingface.co/spaces/huggingface/datasets-tagging/discussions/2 (to merge after this one is merged)
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
https://api.github.com/repos/huggingface/datasets/issues/4607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4607/timeline
closed
false
4,607
null
2022-07-01T11:49:14Z
null
true
1,290,083,534
https://api.github.com/repos/huggingface/datasets/issues/4606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4606/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2023-07-25T15:05:26Z
[]
https://github.com/huggingface/datasets/issues/4606
NONE
completed
null
null
[ "Hi! The GH/no-namespace datasets versioning is synced with the version of the `datasets` lib, which means that the `wikiann` script was modified between the two compared versions. In this scenario, you can ensure reproducibility by pinning the script version, which is done by passing `revision=\"x.y.z\"` (e.g. `revision=\"2.2.0\"`) to `load_dataset.`\r\n" ]
evaluation result changes after `datasets` version change
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4606/reactions" }
I_kwDODunzps5M5RzO
null
2022-06-30T12:43:26Z
https://api.github.com/repos/huggingface/datasets/issues/4606/comments
## Describe the bug evaluation result changes after `datasets` version change ## Steps to reproduce the bug 1. Train a model on WikiAnn 2. reload the ckpt -> test accuracy becomes same as eval accuracy 3. such behavior is gone after downgrading `datasets` https://colab.research.google.com/drive/1kYz7-aZRGdayaq-gDTt30tyEgsKlpYOw?usp=sharing ## Expected results evaluation result shouldn't change before/after `datasets` version changes ## Actual results evaluation result changes before/after `datasets` version changes ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: colab - Python version: 3.7.13 - PyArrow version: 6.0.1 Q. How could the evaluation result change before/after `datasets` version changes?
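A sketch of the reproducibility fix suggested in the comments: pin the loading script to a fixed `datasets` version tag (the "en" config name is a hypothetical choice for illustration):

```python
from datasets import load_dataset

# pin the wikiann script to the 2.2.0 tag so results stay stable across library upgrades
ds = load_dataset("wikiann", "en", revision="2.2.0")
```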
{ "avatar_url": "https://avatars.githubusercontent.com/u/70014488?v=4", "events_url": "https://api.github.com/users/thnkinbtfly/events{/privacy}", "followers_url": "https://api.github.com/users/thnkinbtfly/followers", "following_url": "https://api.github.com/users/thnkinbtfly/following{/other_user}", "gists_url": "https://api.github.com/users/thnkinbtfly/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thnkinbtfly", "id": 70014488, "login": "thnkinbtfly", "node_id": "MDQ6VXNlcjcwMDE0NDg4", "organizations_url": "https://api.github.com/users/thnkinbtfly/orgs", "received_events_url": "https://api.github.com/users/thnkinbtfly/received_events", "repos_url": "https://api.github.com/users/thnkinbtfly/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thnkinbtfly/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thnkinbtfly/subscriptions", "type": "User", "url": "https://api.github.com/users/thnkinbtfly" }
https://api.github.com/repos/huggingface/datasets/issues/4606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4606/timeline
closed
false
4,606
null
2023-07-25T15:05:26Z
null
false
1,290,058,970
https://api.github.com/repos/huggingface/datasets/issues/4605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4605/events
[ { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
null
2022-07-06T12:34:19Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/4605
NONE
completed
null
null
[ "Yes, this dataset is \"gated\": you first have to go to https://huggingface.co/datasets/boris/gis_filtered and click \"Access repository\" (if you accept to share your contact information with the repository authors).", "I already did that, it returns error when using streaming", "Oh, sorry, I misread. Looking at it. Maybe @huggingface/datasets or @SBrandeis ", "I could reproduce the error, even though I provided my token and accepted the gate form. It looks like an error from `datasets`", "This is indeed a bug in `datasets`. Parquet datasets in gated/private repositories can't be streamed properly, which caused the viewer to fail. I opened a PR at https://github.com/huggingface/datasets/pull/4608" ]
Dataset Viewer issue for boris/gis_filtered
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4605/reactions" }
I_kwDODunzps5M5Lza
null
2022-06-30T12:23:34Z
https://api.github.com/repos/huggingface/datasets/issues/4605/comments
### Link https://huggingface.co/datasets/boris/gis_filtered/viewer/boris--gis_filtered/train ### Description When I try to access this from the website I get this error: Status code: 400 Exception: ClientResponseError Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/boris/gis_filtered/resolve/80b805053ce61d4eb487b6b8d9095d775c2c466e/data/train/0000.parquet') If I try to load with code I also get the same issue: ```python dataset2_train=load_dataset("boris/gis_filtered", use_auth_token=os.environ["HF_TOKEN"],split="train",streaming=True) dataset2_validation=load_dataset("boris/gis_filtered", use_auth_token=os.environ["HF_TOKEN"], split="validation",streaming=True) ``` ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/41203448?v=4", "events_url": "https://api.github.com/users/WaterKnight1998/events{/privacy}", "followers_url": "https://api.github.com/users/WaterKnight1998/followers", "following_url": "https://api.github.com/users/WaterKnight1998/following{/other_user}", "gists_url": "https://api.github.com/users/WaterKnight1998/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/WaterKnight1998", "id": 41203448, "login": "WaterKnight1998", "node_id": "MDQ6VXNlcjQxMjAzNDQ4", "organizations_url": "https://api.github.com/users/WaterKnight1998/orgs", "received_events_url": "https://api.github.com/users/WaterKnight1998/received_events", "repos_url": "https://api.github.com/users/WaterKnight1998/repos", "site_admin": false, "starred_url": "https://api.github.com/users/WaterKnight1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WaterKnight1998/subscriptions", "type": "User", "url": "https://api.github.com/users/WaterKnight1998" }
https://api.github.com/repos/huggingface/datasets/issues/4605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4605/timeline
closed
false
4,605
null
2022-07-06T12:34:19Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
1,289,963,962
https://api.github.com/repos/huggingface/datasets/issues/4604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4604/events
[]
null
2022-06-30T13:33:11Z
[]
https://github.com/huggingface/datasets/pull/4604
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Update CI Windows orb
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4604/reactions" }
PR_kwDODunzps46oeju
{ "diff_url": "https://github.com/huggingface/datasets/pull/4604.diff", "html_url": "https://github.com/huggingface/datasets/pull/4604", "merged_at": "2022-06-30T13:22:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/4604.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4604" }
2022-06-30T11:00:31Z
https://api.github.com/repos/huggingface/datasets/issues/4604/comments
This PR tries to fix the recurrent, random CI failures on Windows. After two runs, it seems to have fixed the issue. Fixes #4603.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4604/timeline
closed
false
4,604
null
2022-06-30T13:22:26Z
null
true
1,289,963,331
https://api.github.com/repos/huggingface/datasets/issues/4603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4603/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-06-30T13:22:25Z
[]
https://github.com/huggingface/datasets/issues/4603
MEMBER
completed
null
null
[]
CI fails recurrently and randomly on Windows
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4603/reactions" }
I_kwDODunzps5M40dD
null
2022-06-30T10:59:58Z
https://api.github.com/repos/huggingface/datasets/issues/4603/comments
As reported by @lhoestq, the Windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install. In particular, it seems that building the wheels fails. Here is an example of the logs:

```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4603/timeline
closed
false
4,603
null
2022-06-30T13:22:25Z
null
false
1,289,950,379
https://api.github.com/repos/huggingface/datasets/issues/4602
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4602/events
[]
null
2023-09-24T10:05:10Z
[]
https://github.com/huggingface/datasets/pull/4602
MEMBER
null
true
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Upgrade setuptools in windows CI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4602/reactions" }
PR_kwDODunzps46obqi
{ "diff_url": "https://github.com/huggingface/datasets/pull/4602.diff", "html_url": "https://github.com/huggingface/datasets/pull/4602", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4602.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4602" }
2022-06-30T10:48:41Z
https://api.github.com/repos/huggingface/datasets/issues/4602/comments
The Windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install. In particular, it seems that building the wheels fails. Here is an example of the logs:

```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
```

Hopefully this fixes the issue.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4602/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4602/timeline
closed
false
4,602
null
2022-06-30T12:46:17Z
null
true
1,289,924,715
https://api.github.com/repos/huggingface/datasets/issues/4601
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4601/events
[]
null
2023-09-24T10:04:25Z
[]
https://github.com/huggingface/datasets/pull/4601
MEMBER
null
true
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "It failed terribly" ]
Upgrade pip in WIN CI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4601/reactions" }
PR_kwDODunzps46oWF8
{ "diff_url": "https://github.com/huggingface/datasets/pull/4601.diff", "html_url": "https://github.com/huggingface/datasets/pull/4601", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4601.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4601" }
2022-06-30T10:25:42Z
https://api.github.com/repos/huggingface/datasets/issues/4601/comments
The Windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install. In particular, it seems that building the wheels fails. Here is an example of the logs:

```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
```

I tried to update pip and re-run the CI several times and couldn't reproduce this issue, so I think upgrading pip may solve it.
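A rough sketch of what the upgrade step amounts to, assuming the CI can run a Python command before installing dependencies; this is not the actual CircleCI config, and the helper name is made up:

```python
# A rough sketch (not the actual CircleCI config) of the upgrade step:
# refresh pip/setuptools/wheel before installing dependencies, so wheels
# are built by a recent toolchain rather than the one shipped with conda.
import subprocess
import sys

def upgrade_build_toolchain() -> None:
    # `sys.executable -m pip` targets the interpreter running this script
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "--upgrade", "pip", "setuptools", "wheel"]
    )

if __name__ == "__main__":
    upgrade_build_toolchain()
    # then install the previously flaky dependency, e.g.:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "seqeval"])
```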
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4601/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4601/timeline
closed
false
4,601
null
2022-06-30T10:43:38Z
null
true
1,289,177,042
https://api.github.com/repos/huggingface/datasets/issues/4600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4600/events
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
null
2022-07-04T17:41:20Z
[]
https://github.com/huggingface/datasets/pull/4600
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Remove multiple config section
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4600/reactions" }
PR_kwDODunzps46l3P1
{ "diff_url": "https://github.com/huggingface/datasets/pull/4600.diff", "html_url": "https://github.com/huggingface/datasets/pull/4600", "merged_at": "2022-07-04T17:29:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/4600.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4600" }
2022-06-29T19:09:21Z
https://api.github.com/repos/huggingface/datasets/issues/4600/comments
This PR removes docs for a future feature and redirects to #4578 instead. See this [discussion](https://huggingface.slack.com/archives/C034N0A7H09/p1656107063801969) for more details :)
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
https://api.github.com/repos/huggingface/datasets/issues/4600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4600/timeline
closed
false
4,600
null
2022-07-04T17:29:41Z
null
true
1,288,849,933
https://api.github.com/repos/huggingface/datasets/issues/4599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4599/events
[ { "color": "E3165C", "default": false, "description": "", "id": 4190228726, "name": "transfer-to-evaluate", "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate" } ]
null
2022-09-23T07:42:40Z
[]
https://github.com/huggingface/datasets/pull/4599
NONE
null
false
null
[ "Thanks @Aktsvigun for your fix.\r\n\r\nHowever, metrics in `datasets` are in deprecation mode:\r\n- #4739\r\n\r\nYou should transfer this PR to the `evaluate` library: https://github.com/huggingface/evaluate\r\n\r\nJust for context, here the link to the PR by @Aktsvigun on tensorflow/nmt:\r\n- https://github.com/tensorflow/nmt/pull/488" ]
Smooth-BLEU bug fixed
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4599/reactions" }
PR_kwDODunzps46kvfC
{ "diff_url": "https://github.com/huggingface/datasets/pull/4599.diff", "html_url": "https://github.com/huggingface/datasets/pull/4599", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4599.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4599" }
2022-06-29T14:51:42Z
https://api.github.com/repos/huggingface/datasets/issues/4599/comments
Hi, the current implementation of smooth-BLEU contains a bug: it smooths unigrams as well. Consequently, when the reference and the translation consist of totally different tokens, it still returns a non-zero value (please see the attached image). This, however, contradicts the source paper that proposed smooth-BLEU _(Chin-Yew Lin, Franz Josef Och. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. COLING 2004.)_:

> Add one count to the n-gram hit and total ngram count for n > 1. Therefore, for candidate translations with less than n words, they can still get a positive smoothed BLEU score from shorter n-gram matches; however if nothing matches then they will get zero scores.

This pull request aims at fixing this bug. I made a pull request in the target repository `tensorflow/nmt`, which implements this script, but the last commit there dates from 19.02.2019, and I doubt it will be fixed promptly. Yet this bug is critical, for instance for summarization datasets with short summaries (e.g. AESLC), since smoothing needs to be applied there. Therefore, the easiest solution I found is to fork the repo and download this script directly from the forked, fixed repo.

Kind regards,
Akim Tsvigun

<img width="516" alt="Снимок экрана 2022-06-29 в 17 49 27" src="https://user-images.githubusercontent.com/36672861/176466935-ac579e6d-6a93-4111-ab41-9b33056e7d47.png">
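For concreteness, here is a minimal sketch of the smoothing rule quoted above (add-one smoothing applied only for n > 1), written independently of the `tensorflow/nmt` script; the function and variable names are illustrative, not the library's API:

```python
# A sketch of the Lin & Och (2004) smoothing rule: higher-order n-gram
# precisions get add-one smoothing, but unigrams do not, so a candidate
# sharing no unigrams with the reference still scores 0.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_bleu(reference, candidate, max_order=4):
    log_precisions = []
    for n in range(1, max_order + 1):
        # clipped n-gram matches via Counter intersection
        overlap = sum((ngrams(candidate, n) & ngrams(reference, n)).values())
        total = max(len(candidate) - n + 1, 0)
        if n > 1:  # smooth higher-order n-grams only
            overlap, total = overlap + 1, total + 1
        if overlap == 0 or total == 0:
            return 0.0  # no unigram match (or empty candidate) -> zero score
        log_precisions.append(math.log(overlap / total))
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(sum(log_precisions) / max_order)

# totally disjoint tokens -> 0.0, matching the paper's "nothing matches" rule
print(smoothed_bleu("a b c d".split(), "w x y z".split()))  # 0.0
```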
{ "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aktsvigun", "id": 36672861, "login": "Aktsvigun", "node_id": "MDQ6VXNlcjM2NjcyODYx", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "type": "User", "url": "https://api.github.com/users/Aktsvigun" }
https://api.github.com/repos/huggingface/datasets/issues/4599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4599/timeline
closed
false
4,599
null
2022-09-23T07:42:40Z
null
true
1,288,774,514
https://api.github.com/repos/huggingface/datasets/issues/4598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4598/events
[]
null
2022-07-01T09:41:14Z
[]
https://github.com/huggingface/datasets/pull/4598
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Host financial_phrasebank data on the Hub
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4598/reactions" }
PR_kwDODunzps46kfOS
{ "diff_url": "https://github.com/huggingface/datasets/pull/4598.diff", "html_url": "https://github.com/huggingface/datasets/pull/4598", "merged_at": "2022-07-01T09:29:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/4598.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4598" }
2022-06-29T13:59:31Z
https://api.github.com/repos/huggingface/datasets/issues/4598/comments
Fix #4597.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4598/timeline
closed
false
4,598
null
2022-07-01T09:29:36Z
null
true
1,288,672,007
https://api.github.com/repos/huggingface/datasets/issues/4597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4597/events
[ { "color": "8B51EF", "default": false, "description": "", "id": 4069435429, "name": "hosted-on-google-drive", "node_id": "LA_kwDODunzps7yjqgl", "url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive" } ]
null
2022-07-01T09:29:36Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/4597
MEMBER
completed
null
null
[ "cc @huggingface/datasets: it seems like https://www.researchgate.net/ is flaky for datasets hosting (I put the \"hosted-on-google-drive\" tag since it's the same kind of issue I think)", "Let's see if their license allows hosting their data on the Hub.", "License is Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0).\r\n\r\nWe can host their data on the Hub." ]
Streaming issue for financial_phrasebank
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4597/reactions" }
I_kwDODunzps5Mz5MH
null
2022-06-29T12:45:43Z
https://api.github.com/repos/huggingface/datasets/issues/4597/comments
### Link

https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree/train

### Description

As reported by a community member using [AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/5#62bc217436d0e5d316a768f0), there seems to be a problem streaming this dataset:

```
Server error
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with ConnectionError
```

### Owner

No
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4597/timeline
closed
false
4,597
null
2022-07-01T09:29:36Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,288,381,735
https://api.github.com/repos/huggingface/datasets/issues/4596
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4596/events
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
null
2022-09-07T11:29:28Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
https://github.com/huggingface/datasets/issues/4596
NONE
completed
null
null
[ "Thanks, looking at it!", "Finally fixed! We updated the dataset viewer and it fixed the issue.\r\n\r\nhttps://huggingface.co/datasets/universal_dependencies/viewer/aqz_tudet/train\r\n\r\n<img width=\"1561\" alt=\"Capture d’écran 2022-09-07 à 13 29 18\" src=\"https://user-images.githubusercontent.com/1676121/188867795-4f7dd438-d4f2-46cd-8a92-20a37fb2d6bc.png\">\r\n" ]
Dataset Viewer issue for universal_dependencies
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4596/reactions" }
I_kwDODunzps5MyyUn
null
2022-06-29T08:50:29Z
https://api.github.com/repos/huggingface/datasets/issues/4596/comments
### Link

https://huggingface.co/datasets/universal_dependencies

### Description

invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies

reason: Unexpected token I in JSON at position 0

### Owner

_No response_
{ "avatar_url": "https://avatars.githubusercontent.com/u/16034009?v=4", "events_url": "https://api.github.com/users/Jordy-VL/events{/privacy}", "followers_url": "https://api.github.com/users/Jordy-VL/followers", "following_url": "https://api.github.com/users/Jordy-VL/following{/other_user}", "gists_url": "https://api.github.com/users/Jordy-VL/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jordy-VL", "id": 16034009, "login": "Jordy-VL", "node_id": "MDQ6VXNlcjE2MDM0MDA5", "organizations_url": "https://api.github.com/users/Jordy-VL/orgs", "received_events_url": "https://api.github.com/users/Jordy-VL/received_events", "repos_url": "https://api.github.com/users/Jordy-VL/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jordy-VL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jordy-VL/subscriptions", "type": "User", "url": "https://api.github.com/users/Jordy-VL" }
https://api.github.com/repos/huggingface/datasets/issues/4596/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4596/timeline
closed
false
4,596
null
2022-09-07T11:29:27Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
false
1,288,275,976
https://api.github.com/repos/huggingface/datasets/issues/4595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4595/events
[]
null
2022-06-29T08:29:41Z
[]
https://github.com/huggingface/datasets/issues/4595
CONTRIBUTOR
completed
null
null
[ "The value is in the data, it's not an issue with the \"dataset-viewer\".\r\n\r\n<img width=\"1161\" alt=\"Capture d’écran 2022-06-29 à 10 25 51\" src=\"https://user-images.githubusercontent.com/1676121/176389325-4d2a9a7f-1583-45b8-aa7a-960ffaa6a36a.png\">\r\n\r\n Maybe open a PR: https://huggingface.co/datasets/cakiki/rosetta-code/discussions\r\n", "This was indeed a scraping issue which I assumed was a display issue; sorry about that!" ]
Dataset Viewer issue with False positive PII redaction
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4595/reactions" }
I_kwDODunzps5MyYgI
null
2022-06-29T07:15:57Z
https://api.github.com/repos/huggingface/datasets/issues/4595/comments
### Link

https://huggingface.co/datasets/cakiki/rosetta-code

### Description

Hello, I just noticed an entry being redacted that shouldn't have been: `RootMeanSquare@Range[10]` is being displayed as `[email protected][10]`

### Owner

_No response_
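For context, a small illustration of how this kind of false positive can arise, assuming the redaction relies on an email-like regex; both patterns below are illustrative, not the actual redaction code. In Wolfram Language, `@` is the prefix application operator, so `RootMeanSquare@Range[10]` looks like "local@domain" to a pattern that does not require a dotted domain:

```python
# Illustrative only: a loose e-mail regex fires on Wolfram source code,
# while a stricter one requiring a dotted domain does not.
import re

loose = re.compile(r"[\w.]+@[\w.\[\]]+")                 # assumed overly permissive
strict = re.compile(r"\b[\w.+-]+@[\w-]+(\.[\w-]+)+\b")   # requires a dot in the domain

code = "RootMeanSquare@Range[10]"
print(bool(loose.search(code)))   # True  -> would be redacted (false positive)
print(bool(strict.search(code)))  # False -> left untouched
```

(As the comments note, this particular case turned out to be in the scraped data itself rather than in the viewer, but the failure mode is the same.)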
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki" }
https://api.github.com/repos/huggingface/datasets/issues/4595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4595/timeline
closed
false
4,595
null
2022-06-29T08:27:49Z
null
false
1,288,070,023
https://api.github.com/repos/huggingface/datasets/issues/4594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4594/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-06-29T04:03:44Z
[]
https://github.com/huggingface/datasets/issues/4594
NONE
not_planned
null
null
[]
load_from_disk suggests incorrect fix when used to load DatasetDict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4594/reactions" }
I_kwDODunzps5MxmOH
null
2022-06-29T01:40:01Z
https://api.github.com/repos/huggingface/datasets/issues/4594/comments
Edit: Please feel free to remove this issue. The problem was not the error message but the fact that `DatasetDict.load_from_disk` does not support loading nested splits, i.e. when one of the splits is itself a `DatasetDict`. If nesting splits is an antipattern, perhaps the `load_from_disk` function could throw a warning indicating that?
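A hedged sketch of working within that limitation, assuming the nested splits can be flattened into composite keys; split names like `task_a` are made up for illustration:

```python
# DatasetDict values must be plain Dataset objects, so "nested splits"
# can be flattened into composite keys before saving.
from datasets import Dataset, DatasetDict, load_from_disk

inner = DatasetDict({
    "train": Dataset.from_dict({"x": [1, 2]}),
    "test": Dataset.from_dict({"x": [3]}),
})

# DatasetDict({"task_a": inner}) would nest one DatasetDict inside another,
# which save_to_disk/load_from_disk does not round-trip; flattening avoids it:
flat = DatasetDict({f"task_a_{split}": ds for split, ds in inner.items()})
flat.save_to_disk("flat_dataset")

reloaded = load_from_disk("flat_dataset")  # has task_a_train / task_a_test
print(reloaded)
```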
{ "avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4", "events_url": "https://api.github.com/users/dvsth/events{/privacy}", "followers_url": "https://api.github.com/users/dvsth/followers", "following_url": "https://api.github.com/users/dvsth/following{/other_user}", "gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dvsth", "id": 11157811, "login": "dvsth", "node_id": "MDQ6VXNlcjExMTU3ODEx", "organizations_url": "https://api.github.com/users/dvsth/orgs", "received_events_url": "https://api.github.com/users/dvsth/received_events", "repos_url": "https://api.github.com/users/dvsth/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dvsth/subscriptions", "type": "User", "url": "https://api.github.com/users/dvsth" }
https://api.github.com/repos/huggingface/datasets/issues/4594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4594/timeline
closed
false
4,594
null
2022-06-29T04:03:44Z
null
false
1,288,067,699
https://api.github.com/repos/huggingface/datasets/issues/4593
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4593/events
[]
null
2022-06-29T04:01:59Z
[]
https://github.com/huggingface/datasets/pull/4593
NONE
null
false
null
[]
Fix error message when using load_from_disk to load DatasetDict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4593/reactions" }
PR_kwDODunzps46iIkn
{ "diff_url": "https://github.com/huggingface/datasets/pull/4593.diff", "html_url": "https://github.com/huggingface/datasets/pull/4593", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4593.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4593" }
2022-06-29T01:34:27Z
https://api.github.com/repos/huggingface/datasets/issues/4593/comments
Issue #4594

Issue: When `datasets.load_from_disk` is wrongly used to load a `DatasetDict`, the error message suggests using `datasets.load_from_disk`, which is the same function that generated the error.

Fix: The appropriate function that should be suggested instead is `datasets.dataset_dict.load_from_disk`.

Changes: Change the suggestion to say "Please use `datasets.dataset_dict.load_from_disk` instead."
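For reference, a short usage sketch of the loaders involved (the path is illustrative). The top-level helper dispatches on what was saved, while the class-specific loaders only accept their own type, which is where the misleading suggestion showed up:

```python
from datasets import Dataset, DatasetDict, load_from_disk

dd = DatasetDict({"train": Dataset.from_dict({"a": [0, 1]})})
dd.save_to_disk("my_dataset_dict")

dd_generic = load_from_disk("my_dataset_dict")               # dispatches to DatasetDict
dd_explicit = DatasetDict.load_from_disk("my_dataset_dict")  # explicit DatasetDict loader
# Dataset.load_from_disk("my_dataset_dict") raises, since the path holds a DatasetDict
print(dd_generic)
print(dd_explicit)
```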
{ "avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4", "events_url": "https://api.github.com/users/dvsth/events{/privacy}", "followers_url": "https://api.github.com/users/dvsth/followers", "following_url": "https://api.github.com/users/dvsth/following{/other_user}", "gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dvsth", "id": 11157811, "login": "dvsth", "node_id": "MDQ6VXNlcjExMTU3ODEx", "organizations_url": "https://api.github.com/users/dvsth/orgs", "received_events_url": "https://api.github.com/users/dvsth/received_events", "repos_url": "https://api.github.com/users/dvsth/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dvsth/subscriptions", "type": "User", "url": "https://api.github.com/users/dvsth" }
https://api.github.com/repos/huggingface/datasets/issues/4593/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4593/timeline
closed
false
4,593
null
2022-06-29T04:01:39Z
null
true
1,288,029,377
https://api.github.com/repos/huggingface/datasets/issues/4592
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4592/events
[]
null
2022-06-29T10:30:03Z
[]
https://github.com/huggingface/datasets/issues/4592
NONE
completed
null
null
[ "Hi @faizankshaikh\r\n\r\nPlease note that we have recently launched the Community feature, specifically targeted to create Discussions (about issues/questions/asking-for-help) on each Dataset on the Hub:\r\n- Blog post: https://huggingface.co/blog/community-update\r\n- Docs: https://huggingface.co/docs/hub/repositories-pull-requests-discussions\r\n\r\nThe Discussion tab for your \"jalFaizy/detect_chess_pieces\" dataset is here: https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions\r\nYou can use it to ask for help by pinging the Datasets maintainers: see our docs here: https://huggingface.co/docs/datasets/master/en/share#ask-for-a-help-and-reviews\r\n\r\nI'm transferring this discussion to your Discussion tab and trying to address it: https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/1", "Thank you @albertvillanova , I will keep that in mind.\r\n\r\nJust a quick note - I posted the issue on Github because the dataset viewer suggested me to \"open an issue for direct support\". Maybe it can be updated with your suggestion\r\n\r\n![image](https://user-images.githubusercontent.com/8406903/176397633-7b077d81-2044-4487-b58e-6346b05be5cf.png)\r\n\r\n\r\n", "Thank you pointing this out: yes, definitely, we should fix the error message. We are working on this." ]
Issue with jalFaizy/detect_chess_pieces when running datasets-cli test
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4592/reactions" }
I_kwDODunzps5MxcTB
null
2022-06-29T00:15:54Z
https://api.github.com/repos/huggingface/datasets/issues/4592/comments
### Link

https://huggingface.co/datasets/jalFaizy/detect_chess_pieces

### Description

I am trying to write an appropriate data loader for [a custom dataset](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces) using [this script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py).

When I run the command `$ datasets-cli test "D:\workspace\HF\detect_chess_pieces" --save_infos --all_configs`, it gives the following error:

```
Using custom data configuration default
Traceback (most recent call last):
  File "c:\users\faiza\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\users\faiza\anaconda3\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\faiza\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 7, in <module>
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\datasets_cli.py", line 39, in main
    service.run()
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 132, in run
    for j, builder in enumerate(get_builders()):
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 125, in get_builders
    yield builder_cls(
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 1148, in __init__
    super().__init__(*args, **kwargs)
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 306, in __init__
    info = self.get_exported_dataset_info()
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 405, in get_exported_dataset_info
    return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo())
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 390, in get_all_exported_dataset_infos
    return DatasetInfosDict.from_directory(cls.get_imported_module_dir())
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 309, in from_directory
    dataset_infos_dict = {
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 310, in <dictcomp>
    config_name: DatasetInfo.from_dict(dataset_info_dict)
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 272, in from_dict
    return cls(**{k: v for k, v in dataset_info_dict.items() if k in field_names})
  File "<string>", line 20, in __init__
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 160, in __post_init__
    templates = [
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 161, in <listcomp>
    template if isinstance(template, TaskTemplate) else task_template_from_dict(template)
  File "c:\users\faiza\anaconda3\lib\site-packages\datasets\tasks\__init__.py", line 43, in task_template_from_dict
    return template.from_dict(task_template_dict)
AttributeError: 'NoneType' object has no attribute 'from_dict'
```

My assumption is that there is some kind of issue in how the "task_templates" are read, because the same error occurs even if I keep them as None or do not include the argument at all.

### Owner

Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/8406903?v=4", "events_url": "https://api.github.com/users/faizankshaikh/events{/privacy}", "followers_url": "https://api.github.com/users/faizankshaikh/followers", "following_url": "https://api.github.com/users/faizankshaikh/following{/other_user}", "gists_url": "https://api.github.com/users/faizankshaikh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/faizankshaikh", "id": 8406903, "login": "faizankshaikh", "node_id": "MDQ6VXNlcjg0MDY5MDM=", "organizations_url": "https://api.github.com/users/faizankshaikh/orgs", "received_events_url": "https://api.github.com/users/faizankshaikh/received_events", "repos_url": "https://api.github.com/users/faizankshaikh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/faizankshaikh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/faizankshaikh/subscriptions", "type": "User", "url": "https://api.github.com/users/faizankshaikh" }
https://api.github.com/repos/huggingface/datasets/issues/4592/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4592/timeline
closed
false
4,592
null
2022-06-29T07:49:27Z
null
false
1,288,021,332
https://api.github.com/repos/huggingface/datasets/issues/4591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4591/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-07-08T12:01:36Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
https://github.com/huggingface/datasets/issues/4591
CONTRIBUTOR
completed
null
null
[ "Hi, thanks for reporting! This issue stems from the changes introduced in https://github.com/huggingface/datasets/pull/4282 (cc @lhoestq), in which list casts are ignored if they don't change the list type (required to preserve `null` values). And `push_to_hub` does a special cast to embed external image files but doesn't change the types, hence the failure." ]
Can't push Images to hub with manual Dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4591/reactions" }
I_kwDODunzps5MxaVU
null
2022-06-29T00:01:23Z
https://api.github.com/repos/huggingface/datasets/issues/4591/comments
## Describe the bug

If I create a dataset including an 'Image' feature manually, decoded images are not pushed when pushing to hub; instead it looks for the image where the local image path is (or used to be). This doesn't (at least didn't use to) happen with imagefolder. I want to build the dataset manually because it is complicated.

This happens even though the dataset looks like decoded images:

![image](https://user-images.githubusercontent.com/15624271/176322689-2cc819cf-9d5c-4a8f-9f3d-83ae8ec06f20.png)

and I use `embed_external_files=True` while `push_to_hub` (same with False).

## Steps to reproduce the bug

```python
from PIL import Image

from datasets import Dataset, Features, load_dataset
from datasets import Image as ImageFeature

# manually create dataset
feats = Features(
    {
        "images": [ImageFeature()],  # same even if explicitly ImageFeature(decode=True)
        "input_image": ImageFeature(),
    }
)
test_data = {
    "images": [[Image.open("test.jpg"), Image.open("test.jpg"), Image.open("test.jpg")]],
    "input_image": [Image.open("test.jpg")],
}
test_dataset = Dataset.from_dict(test_data, features=feats)
print(test_dataset)
test_dataset.push_to_hub("ceyda/image_test_public", private=False, token="", embed_external_files=True)

# clear the cache: rm -r ~/.cache/huggingface
# remove "test.jpg" to see that it is looking for the image on the local path
test_dataset = load_dataset("ceyda/image_test_public", use_auth_token="")
print(test_dataset)
print(test_dataset["train"][0])
```

## Expected results

Should be able to push image bytes if the dataset has `Image(decode=True)`.

## Actual results

Errors, because it is trying to decode the file from the non-existing local path:

```
----> print(test_dataset['train'][0])

File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key)
   2152 def __getitem__(self, key):  # noqa: F811
   2153     """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2154     return self._getitem(
   2155         key,
   2156     )

File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs)
   2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
   2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2139 formatted_output = format_table(
   2140     pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
   2141 )
   2142 return formatted_output

File ~/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
    530 python_formatter = PythonFormatter(features=None)
    531 if format_columns is None:
...
-> 3068 fp = builtins.open(filename, "rb")
   3069 exclusive_fp = True
   3071 try:

FileNotFoundError: [Errno 2] No such file or directory: 'test.jpg'
```

## Environment info

- `datasets` version: 2.3.2
- Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
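As a stopgap, a hedged sketch of the `imagefolder` route the report says does not exhibit the problem; the directory layout and repo id below are illustrative, not from the report:

```python
# Illustrative only: loading images via imagefolder, then pushing.
# Per the report, images loaded this way are embedded rather than
# referenced by a local path.
from datasets import load_dataset

# my_images/
#     train/
#         class_a/test.jpg
ds = load_dataset("imagefolder", data_dir="my_images")
ds.push_to_hub("user/image_test_public", private=False)
```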
{ "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cceyda", "id": 15624271, "login": "cceyda", "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "organizations_url": "https://api.github.com/users/cceyda/orgs", "received_events_url": "https://api.github.com/users/cceyda/received_events", "repos_url": "https://api.github.com/users/cceyda/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "type": "User", "url": "https://api.github.com/users/cceyda" }
https://api.github.com/repos/huggingface/datasets/issues/4591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4591/timeline
closed
false
4,591
null
2022-07-08T12:01:35Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
false
1,287,941,058
https://api.github.com/repos/huggingface/datasets/issues/4590
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4590/events
[]
null
2022-07-08T14:55:13Z
[]
https://github.com/huggingface/datasets/pull/4590
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova, Can you please review this PR for Issue #4540 ", "@lhoestq Thank you for merging the PR . Is there a slack channel for contributing to the datasets library. I would love to work on the library and make meaningful contributions.", "Hi ! Sure feel free to join our discord ^^ \r\nhttps://discuss.huggingface.co/t/join-the-hugging-face-discord/11263 so that we can discuss together mor eeasily. Otherwise everything happens on github ;)" ]
Generalize meta_path json file creation in load.py [#4540]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4590/reactions" }
PR_kwDODunzps46htv0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4590.diff", "html_url": "https://github.com/huggingface/datasets/pull/4590", "merged_at": "2022-07-07T13:17:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4590.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4590" }
2022-06-28T21:48:06Z
https://api.github.com/repos/huggingface/datasets/issues/4590/comments
# What does this PR do?

## Summary

*In the function `_copy_script_and_other_resources_in_importable_dir`, using a string split when generating `meta_path` throws an error in the edge case raised in #4540.*

## Additions

-

## Changes

- Changed `meta_path` to use `os.path.splitext` instead of `str.split` to generalize the code, as illustrated below.

## Deletions

-

## Issues Addressed

- Fixes #4540
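A quick illustration of why `os.path.splitext` generalizes better; the file name is hypothetical, not taken from the PR:

```python
import os

name = "my.dataset.v2.py"  # hypothetical script name containing extra dots

stem_split = name.split(".")[0]            # "my": drops everything after the first dot
stem_splitext = os.path.splitext(name)[0]  # "my.dataset.v2": only strips the extension
print(stem_split, stem_splitext)
```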
{ "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VijayKalmath", "id": 20517962, "login": "VijayKalmath", "node_id": "MDQ6VXNlcjIwNTE3OTYy", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "type": "User", "url": "https://api.github.com/users/VijayKalmath" }
https://api.github.com/repos/huggingface/datasets/issues/4590/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4590/timeline
closed
false
4,590
null
2022-07-07T13:17:45Z
null
true
1,287,600,029
https://api.github.com/repos/huggingface/datasets/issues/4589
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4589/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-06-29T06:26:28Z
[]
https://github.com/huggingface/datasets/issues/4589
NONE
completed
null
null
[]
Permission denied: '/home/.cache' when load_dataset with local script
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4589/reactions" }
I_kwDODunzps5Mvzed
null
2022-06-28T16:26:03Z
https://api.github.com/repos/huggingface/datasets/issues/4589/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/24559732?v=4", "events_url": "https://api.github.com/users/jiangh0/events{/privacy}", "followers_url": "https://api.github.com/users/jiangh0/followers", "following_url": "https://api.github.com/users/jiangh0/following{/other_user}", "gists_url": "https://api.github.com/users/jiangh0/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiangh0", "id": 24559732, "login": "jiangh0", "node_id": "MDQ6VXNlcjI0NTU5NzMy", "organizations_url": "https://api.github.com/users/jiangh0/orgs", "received_events_url": "https://api.github.com/users/jiangh0/received_events", "repos_url": "https://api.github.com/users/jiangh0/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiangh0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangh0/subscriptions", "type": "User", "url": "https://api.github.com/users/jiangh0" }
https://api.github.com/repos/huggingface/datasets/issues/4589/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4589/timeline
closed
false
4,589
null
2022-06-29T06:25:08Z
null
false
1,287,368,751
https://api.github.com/repos/huggingface/datasets/issues/4588
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4588/events
[]
null
2022-07-05T16:01:15Z
[]
https://github.com/huggingface/datasets/pull/4588
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @albertvillanova ! Thanks for the fix ;)\r\nCan I safely checkout from this branch to build `datasets` or it is preferable to wait until all CI tests pass?\r\nThanks 🙏 ", "@younesbelkada we have just merged this PR." ]
Host head_qa data on the Hub and fix NonMatchingChecksumError
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4588/reactions" }
PR_kwDODunzps46f2kF
{ "diff_url": "https://github.com/huggingface/datasets/pull/4588.diff", "html_url": "https://github.com/huggingface/datasets/pull/4588", "merged_at": "2022-07-05T15:49:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/4588.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4588" }
2022-06-28T13:39:28Z
https://api.github.com/repos/huggingface/datasets/issues/4588/comments
This PR:
- Hosts head_qa data on the Hub instead of Google Drive
- Fixes NonMatchingChecksumError

Fix https://huggingface.co/datasets/head_qa/discussions/1
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4588/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4588/timeline
closed
false
4,588
null
2022-07-05T15:49:52Z
null
true
1,287,291,494
https://api.github.com/repos/huggingface/datasets/issues/4587
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4587/events
[]
null
2022-06-28T14:11:57Z
[]
https://github.com/huggingface/datasets/pull/4587
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Validate new_fingerprint passed by user
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4587/reactions" }
PR_kwDODunzps46flzR
{ "diff_url": "https://github.com/huggingface/datasets/pull/4587.diff", "html_url": "https://github.com/huggingface/datasets/pull/4587", "merged_at": "2022-06-28T14:00:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4587.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4587" }
2022-06-28T12:46:21Z
https://api.github.com/repos/huggingface/datasets/issues/4587/comments
Users can pass the dataset fingerprint they want in `map` and other dataset transforms. However, the fingerprint is used to name cache files, so we need to make sure it doesn't contain bad characters (as mentioned in https://github.com/huggingface/datasets/issues/1718) and that it's not too long. A sketch of this kind of validation follows.
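An illustrative sketch of such validation; the allowed character set and length limit are assumptions for illustration, not the actual implementation:

```python
import re

MAX_FINGERPRINT_LENGTH = 64  # assumed limit, for illustration only

def validate_fingerprint(fingerprint: str, max_length: int = MAX_FINGERPRINT_LENGTH) -> None:
    # Reject values that would produce invalid or overly long cache file names
    if not isinstance(fingerprint, str) or not fingerprint:
        raise ValueError(f"Invalid fingerprint: {fingerprint!r}")
    if re.search(r"[^a-zA-Z0-9_]", fingerprint):
        raise ValueError("Fingerprint may only contain letters, digits and underscores")
    if len(fingerprint) > max_length:
        raise ValueError(f"Fingerprint is longer than {max_length} characters")
```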
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4587/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4587/timeline
closed
false
4,587
null
2022-06-28T14:00:44Z
null
true
1,287,105,636
https://api.github.com/repos/huggingface/datasets/issues/4586
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4586/events
[]
null
2022-06-28T14:52:56Z
[]
https://github.com/huggingface/datasets/pull/4586
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Host pn_summary data on the Hub instead of Google Drive
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4586/reactions" }
PR_kwDODunzps46e9xB
{ "diff_url": "https://github.com/huggingface/datasets/pull/4586.diff", "html_url": "https://github.com/huggingface/datasets/pull/4586", "merged_at": "2022-06-28T14:42:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/4586.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4586" }
2022-06-28T10:05:05Z
https://api.github.com/repos/huggingface/datasets/issues/4586/comments
Fix #4581.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4586/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4586/timeline
closed
false
4,586
null
2022-06-28T14:42:03Z
null
true
1,287,064,929
https://api.github.com/repos/huggingface/datasets/issues/4585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4585/events
[]
null
2022-06-28T14:19:35Z
[]
https://github.com/huggingface/datasets/pull/4585
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Host multi_news data on the Hub instead of Google Drive
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4585/reactions" }
PR_kwDODunzps46e1Ne
{ "diff_url": "https://github.com/huggingface/datasets/pull/4585.diff", "html_url": "https://github.com/huggingface/datasets/pull/4585", "merged_at": "2022-06-28T14:08:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/4585.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4585" }
2022-06-28T09:32:06Z
https://api.github.com/repos/huggingface/datasets/issues/4585/comments
Host data files of multi_news dataset on the Hub. They were on Google Drive. Fix #4580.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4585/timeline
closed
false
4,585
null
2022-06-28T14:08:48Z
null
true
1,286,911,993
https://api.github.com/repos/huggingface/datasets/issues/4584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4584/events
[]
null
2023-09-24T10:04:04Z
[]
https://github.com/huggingface/datasets/pull/4584
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4584). All of your documentation changes will be reflected on that endpoint.", "> Awesome thanks ! Can you add it to https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts first please ? This is where we define the cross libraries tasks taxonomy ;)\r\n\r\nThanks for the tip! Done in https://github.com/huggingface/hub-docs/pull/217", "I don't think we need to update this file anymore. We should remove it IMO, and simply update the dataset [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging)", "I'm closing this PR." ]
Add binary classification task IDs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4584/reactions" }
PR_kwDODunzps46eVF7
{ "diff_url": "https://github.com/huggingface/datasets/pull/4584.diff", "html_url": "https://github.com/huggingface/datasets/pull/4584", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4584.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4584" }
2022-06-28T07:30:39Z
https://api.github.com/repos/huggingface/datasets/issues/4584/comments
As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification. This PR adds binary classification to the task IDs to enable this. Related AutoTrain issue: https://github.com/huggingface/autonlp-backend/issues/597 cc @abhishekkrthakur @SBrandeis
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4584/timeline
closed
false
4,584
null
2023-01-26T09:27:52Z
null
true
1,286,790,871
https://api.github.com/repos/huggingface/datasets/issues/4583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4583/events
[]
null
2022-06-28T05:47:02Z
[]
https://github.com/huggingface/datasets/pull/4583
NONE
null
false
null
[]
Implementation of FLAC support using torchaudio
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4583/reactions" }
PR_kwDODunzps46d7xo
{ "diff_url": "https://github.com/huggingface/datasets/pull/4583.diff", "html_url": "https://github.com/huggingface/datasets/pull/4583", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4583.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4583" }
2022-06-28T05:24:21Z
https://api.github.com/repos/huggingface/datasets/issues/4583/comments
I added FLAC audio support with torchaudio, given that Librosa and SoundFile can give problems. Also, FLAC is used as the audio format by https://mlcommons.org/en/peoples-speech/. A minimal decoding sketch is shown below.
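A minimal sketch of FLAC decoding via torchaudio, as proposed here; the file name is a placeholder, and resampling and mono conversion are omitted:

```python
import torchaudio

# torchaudio.load returns a (channels, frames) float tensor and the sample rate
waveform, sample_rate = torchaudio.load("example.flac")  # placeholder file name
audio = {"array": waveform.squeeze().numpy(), "sampling_rate": sample_rate}
print(audio["array"].shape, audio["sampling_rate"])
```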
{ "avatar_url": "https://avatars.githubusercontent.com/u/45745870?v=4", "events_url": "https://api.github.com/users/rafael-ariascalles/events{/privacy}", "followers_url": "https://api.github.com/users/rafael-ariascalles/followers", "following_url": "https://api.github.com/users/rafael-ariascalles/following{/other_user}", "gists_url": "https://api.github.com/users/rafael-ariascalles/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rafael-ariascalles", "id": 45745870, "login": "rafael-ariascalles", "node_id": "MDQ6VXNlcjQ1NzQ1ODcw", "organizations_url": "https://api.github.com/users/rafael-ariascalles/orgs", "received_events_url": "https://api.github.com/users/rafael-ariascalles/received_events", "repos_url": "https://api.github.com/users/rafael-ariascalles/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rafael-ariascalles/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafael-ariascalles/subscriptions", "type": "User", "url": "https://api.github.com/users/rafael-ariascalles" }
https://api.github.com/repos/huggingface/datasets/issues/4583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4583/timeline
closed
false
4,583
null
2022-06-28T05:47:02Z
null
true
1,286,517,060
https://api.github.com/repos/huggingface/datasets/issues/4582
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4582/events
[]
null
2022-07-06T15:19:54Z
[]
https://github.com/huggingface/datasets/pull/4582
CONTRIBUTOR
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4582). All of your documentation changes will be reflected on that endpoint." ]
add_column should preserve _indexes
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4582/reactions" }
PR_kwDODunzps46dC59
{ "diff_url": "https://github.com/huggingface/datasets/pull/4582.diff", "html_url": "https://github.com/huggingface/datasets/pull/4582", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4582.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4582" }
2022-06-27T22:35:47Z
https://api.github.com/repos/huggingface/datasets/issues/4582/comments
As reported in https://github.com/huggingface/datasets/issues/3769#issuecomment-1167146126, calling `.add_column("x", x_data)` also removed any `_indexes` on the dataset; I decided this shouldn't be the case. This happened because `add_column` was creating a new `Dataset(...)` and it wasn't possible to pass indexes on init. With this PR, 'indexes' can now be passed on init through `IndexableMixin` (see the usage sketch below).

- [x] Added test
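A usage sketch of the intended behavior; the column contents and index name are illustrative, and FAISS must be installed:

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"embeddings": np.random.rand(8, 4).tolist()})
ds.add_faiss_index(column="embeddings", index_name="emb")  # attach an index

ds = ds.add_column("x", list(range(8)))  # previously this dropped "emb"
assert "emb" in ds.list_indexes()        # with this PR, the index is preserved
```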
{ "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cceyda", "id": 15624271, "login": "cceyda", "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "organizations_url": "https://api.github.com/users/cceyda/orgs", "received_events_url": "https://api.github.com/users/cceyda/received_events", "repos_url": "https://api.github.com/users/cceyda/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "type": "User", "url": "https://api.github.com/users/cceyda" }
https://api.github.com/repos/huggingface/datasets/issues/4582/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4582/timeline
open
false
4,582
null
null
null
true
1,286,362,907
https://api.github.com/repos/huggingface/datasets/issues/4581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4581/events
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
null
2022-06-28T14:42:03Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/4581
MEMBER
completed
null
null
[ "linked to https://github.com/huggingface/datasets/issues/4580#issuecomment-1168373066?", "Note that I refreshed twice this dataset, and I still have (another) error on one of the splits\r\n\r\n```\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 403, message='Forbidden', url=URL('https://doc-14-4c-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/pgotjmcuh77q0lk7p44rparfrhv459kp/1656403650000/11771870722949762109/*/16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO?e=download')\r\n```\r\n\r\nLike the three splits are processed in parallel by the workers, I imagine that the Google hosting is rate-limiting us.\r\n\r\ncc @albertvillanova \r\n\r\n", "Exactly, Google Drive bans our loading scripts.\r\n\r\nWhen possible, we should host somewhere else." ]
Dataset Viewer issue for pn_summary
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4581/reactions" }
I_kwDODunzps5MrFcb
null
2022-06-27T20:56:12Z
https://api.github.com/repos/huggingface/datasets/issues/4581/comments
### Link

https://huggingface.co/datasets/pn_summary/viewer/1.0.0/validation

### Description

Getting an index error on the `validation` and `test` splits:

```
Server error
Status code: 400
Exception: IndexError
Message: list index out of range
```

### Owner

No
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4581/timeline
closed
false
4,581
null
2022-06-28T14:42:03Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,286,312,912
https://api.github.com/repos/huggingface/datasets/issues/4580
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4580/events
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
null
2022-06-28T14:08:48Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/4580
MEMBER
completed
null
null
[ "Thanks for reporting, @lewtun.\r\n\r\nI forced the refreshing of the preview and it worked OK for train and validation splits.\r\n\r\nI guess the error has to do with the data files being hosted at Google Drive: this gives errors when requested automatically using scripts.\r\nWe should host them to fix the error. Let's see if the license allows that.", "I guess we can host the data: https://github.com/Alex-Fabbri/Multi-News/blob/master/LICENSE.txt" ]
Dataset Viewer issue for multi_news
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4580/reactions" }
I_kwDODunzps5Mq5PQ
null
2022-06-27T20:25:25Z
https://api.github.com/repos/huggingface/datasets/issues/4580/comments
### Link

https://huggingface.co/datasets/multi_news

### Description

Not sure what the index error is referring to here:

```
Status code: 400
Exception: IndexError
Message: list index out of range
```

### Owner

No
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4580/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4580/timeline
closed
false
4,580
null
2022-06-28T14:08:48Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,286,106,285
https://api.github.com/repos/huggingface/datasets/issues/4579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4579/events
[]
null
2022-07-04T19:35:01Z
[]
https://github.com/huggingface/datasets/pull/4579
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq I've been refactoring a little the code:\r\n- Use less RAM by loading only the required samples: only if its index is in the splits file\r\n- Start yielding \"earlier\" in streaming mode: for each `split_idx`:\r\n - either yield from buffer\r\n - or iterate over samples and either yield or buffer the sample\r\n \r\n The speed gain obviously depends on how the indexes are sorted in the split file:\r\n - Best case: indices are [1, 2, 3]\r\n - Worst case (no speed gain): indices are [3, 1, 2] or [3, 2, 1]\r\n\r\nLet me know what you think.", "I have to update the dummy data so that it aligns with the real data (inside the archive, the samples file `dataset.json` is the last member).", "There is an issue when testing `test_load_dataset_cfq` with dummy data:\r\n- `MockDownloadManager.iter_archive` yields FIRST `'cfq/dataset.json'`\r\n- [`Streaming`]`DownloadManager.iter_archive` yields LAST `'cfq/dataset.json'` when using real data tar.gz archive\r\n\r\nNote that this issue arises only with dummy data: loading the real dataset works smoothly for all configurations: I recreated the `dataset_infos.json` file to check it (it generated the same file).", "This PR should be merged first:\r\n- #4611", "Impressive, thank you ! :o \r\n\r\nfeel free to merge master into this branch, now that the files order is respected. You can merge if the CI is green :)" ]
Support streaming cfq dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4579/reactions" }
PR_kwDODunzps46bo2h
{ "diff_url": "https://github.com/huggingface/datasets/pull/4579.diff", "html_url": "https://github.com/huggingface/datasets/pull/4579", "merged_at": "2022-07-04T19:23:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/4579.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4579" }
2022-06-27T17:11:23Z
https://api.github.com/repos/huggingface/datasets/issues/4579/comments
Support streaming cfq dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4579/timeline
closed
false
4,579
null
2022-07-04T19:23:57Z
null
true
1,286,086,400
https://api.github.com/repos/huggingface/datasets/issues/4578
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4578/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2023-06-14T15:43:05Z
[]
https://github.com/huggingface/datasets/issues/4578
MEMBER
null
null
null
[ "I want to be able to create folders in a model.", "How to set new split names, instead of train/test/validation? For example, I have a local dataset, consists of several subsets, named \"A\", \"B\", and \"C\". How can I create a huggingface dataset, with splits A/B/C ?\r\n\r\nThe document in https://huggingface.co/docs/datasets/dataset_script only tells me how to create datasets with subsets that is hosted on another server. How to do it if my datasets are local?", "> The document in https://huggingface.co/docs/datasets/dataset_script only tells me how to create datasets with subsets that is hosted on another server. How to do it if my datasets are local?\r\n\r\nIt works the same - you just need to use local paths instead of URLs" ]
[Multi Configs] Use directories to differentiate between subsets/configurations
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 5, "heart": 9, "hooray": 0, "laugh": 0, "rocket": 5, "total_count": 19, "url": "https://api.github.com/repos/huggingface/datasets/issues/4578/reactions" }
I_kwDODunzps5MqB8A
null
2022-06-27T16:55:11Z
https://api.github.com/repos/huggingface/datasets/issues/4578/comments
Currently, to define several subsets/configurations of your dataset, you need to use a dataset script. However, it would be nice to have a no-code way to do this.

For example, we could specify different configurations of a dataset (for example, if a dataset contains different languages) with one directory per configuration.

These structures are not supported right now, but would be nice to have:

```
my_dataset_repository/
├── README.md
├── en/
│   ├── train.csv
│   └── test.csv
└── fr/
    ├── train.csv
    └── test.csv
```

Or with one directory per split:

```
my_dataset_repository/
├── README.md
├── en/
│   ├── train/
│   │   ├── shard_0.csv
│   │   └── shard_1.csv
│   └── test/
│       ├── shard_0.csv
│       └── shard_1.csv
└── fr/
    ├── train/
    │   ├── shard_0.csv
    │   └── shard_1.csv
    └── test/
        ├── shard_0.csv
        └── shard_1.csv
```

cc @stevhliu @albertvillanova

This can be specified in the README as YAML with

```
configs:
- config_name: en
  data_dir: en
- config_name: fr
  data_dir: fr
```

A hypothetical loading example is shown below.
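A hypothetical usage example once such directory-based configurations are supported; the repository id is a placeholder:

```python
from datasets import load_dataset

ds_en = load_dataset("user/my_dataset_repository", "en")  # reads en/train.csv and en/test.csv
ds_fr = load_dataset("user/my_dataset_repository", "fr")  # reads fr/train.csv and fr/test.csv
```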
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4578/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4578/timeline
open
false
4,578
null
null
null
false
1,285,703,775
https://api.github.com/repos/huggingface/datasets/issues/4577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4577/events
[]
null
2022-07-04T13:13:15Z
[]
https://github.com/huggingface/datasets/pull/4577
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Add authentication tip to `load_dataset`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4577/reactions" }
PR_kwDODunzps46aTWL
{ "diff_url": "https://github.com/huggingface/datasets/pull/4577.diff", "html_url": "https://github.com/huggingface/datasets/pull/4577", "merged_at": "2022-07-04T13:01:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/4577.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4577" }
2022-06-27T12:05:34Z
https://api.github.com/repos/huggingface/datasets/issues/4577/comments
Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`/`load_dataset_builder`.
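For context, the kind of usage such a tip points users to; the dataset name is a placeholder, and `use_auth_token` is the relevant argument in this version of the library:

```python
from datasets import load_dataset

# After running `huggingface-cli login`, the stored token grants access
# to private or gated datasets.
ds = load_dataset("org/private_dataset", use_auth_token=True)  # placeholder name
```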
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/4577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4577/timeline
closed
false
4,577
null
2022-07-04T13:01:30Z
null
true
1,285,698,576
https://api.github.com/repos/huggingface/datasets/issues/4576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4576/events
[]
null
2022-07-01T12:44:55Z
[]
https://github.com/huggingface/datasets/pull/4576
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I still don't know if the way we implemented data files resolution could support the metadata.jsonl file without bad side effects for the other packaged builders. In particular here if you have a folder of csv/parquet/whatever files and a metadata.jsonl file, it would return \r\n```\r\nsplit: patterns_dict[split] + [METADATA_PATTERN]\r\n```\r\nwhich is a bit unexpected and can lead to errors.\r\n\r\nMaybe this logic can be specific to imagefolder somehow ? This could be an additional pattern `[\"metadata.jsonl\", \"**/metadata.jsonl\"]` just for imagefolder, that is only used when `data_files=` is not specified by the user.\r\n\r\nI guess it's ok to have patterns that lead to duplicate metadata.jsonl files for imagefolder, since the imagefolder logic only considers the closest metadata file for each image.\r\n\r\nWhat do you think ?", "Yes, that's indeed the problem. My solution in https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 that accounts for that (include metadata files only if image files are present; not ideal): https://github.com/huggingface/datasets/blob/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95/src/datasets/data_files.py#L119-L125.\r\nPerhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as `imagefolder` and append metadata files to already resolved data files (if there are any). WDYT?", "@lhoestq \r\n\r\n> Perhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as imagefolder and append metadata files to already resolved data files (if there are any). WDYT?\r\n\r\nI decided to go with this approach.\r\n\r\n Not sure if you meant the same thing with this comment:\r\n\r\n> Maybe this logic can be specific to imagefolder somehow ? This could be an additional pattern [\"metadata.jsonl\", \"**/metadata.jsonl\"] just for imagefolder, that is only used when data_files= is not specified by the user.\r\n\r\n\r\nIt adds more code but is easy to follow IMO.\r\n", "The CI still struggles but you can merge since at least one of the two WIN CI succeeded" ]
Include `metadata.jsonl` in resolved data files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4576/reactions" }
PR_kwDODunzps46aSN_
{ "diff_url": "https://github.com/huggingface/datasets/pull/4576.diff", "html_url": "https://github.com/huggingface/datasets/pull/4576", "merged_at": "2022-06-30T10:15:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/4576.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4576" }
2022-06-27T12:01:29Z
https://api.github.com/repos/huggingface/datasets/issues/4576/comments
Include `metadata.jsonl` in resolved data files.

Fix #4548

@lhoestq ~~https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 also accounts for nested metadata files, but this results in more complex code. Let me know which of these two approaches you prefer.~~

Maybe https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 is good enough for now (for the sake of simplicity). https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 breaks the imagefolder tests due to duplicates in the resolved metadata files. One way to fix this would be to resolve the metadata pattern only on parent directories, but this adds even more logic to `_get_data_files_patterns`, so I'm not sure this is what we should do. A sketch of the approach under discussion is included below.
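A minimal sketch of the imagefolder-specific approach discussed here; the names and signatures are illustrative, not the actual `datasets` internals:

```python
METADATA_PATTERNS = ["metadata.jsonl", "**/metadata.jsonl"]  # imagefolder-only patterns

def extend_with_metadata(data_files, resolve):
    """Append resolved metadata files to splits that already have image files."""
    for split, files in data_files.items():
        metadata = [f for pattern in METADATA_PATTERNS for f in resolve(pattern)]
        if files and metadata:  # only when image files were actually resolved
            data_files[split] = files + metadata
    return data_files
```

Duplicates among the resolved metadata files are acceptable here, since the imagefolder logic only considers the closest metadata file for each image.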
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/4576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4576/timeline
closed
false
4,576
null
2022-06-30T10:15:32Z
null
true
1,285,446,700
https://api.github.com/repos/huggingface/datasets/issues/4575
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4575/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-08-23T10:01:02Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/4575
NONE
completed
null
null
[ "Running into the same error with `wmt17/zh-en`, `wmt18/zh-en` and `wmt19/zh-en`.", "@albertvillanova @lhoestq Could you take a look at this issue?", "@winterfell2021 Hi, I wonder where the code you provided should be added. I tried to add them in the `datasets/table.py` in `array_cast` function, however, the 'zh' item is none.", "I found some 'zh' item is none while 'c[hn]' is not.\r\nSo the code may change to:\r\n```python\r\nif 'c[hn]' in str(array.type):\r\n py_array = array.to_pylist()\r\n data_list = []\r\n for vo in py_array:\r\n tmp = {\r\n 'en': vo['en'],\r\n }\r\n if vo.get('zh'):\r\n tmp['zh'] = vo['zh']\r\n else:\r\n tmp['zh'] = vo['c[hn]']\r\n data_list.append(tmp)\r\n array = pa.array(data_list, type=pa.struct([\r\n pa.field('en', pa.string()),\r\n pa.field('zh', pa.string()),\r\n ]))\r\n```", "I just pushed a fix, we'll do a new release of `datasets` soon to include this fix. In the meantime you can use the fixed dataset by passing `revision=\"main\"` to `load_dataset`" ]
Problem about wmt17 zh-en dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4575/reactions" }
I_kwDODunzps5Mnlws
null
2022-06-27T08:35:42Z
https://api.github.com/repos/huggingface/datasets/issues/4575/comments
It seems that in the casia2015 subset, some samples look like `{'c[hn]': 'xxx', 'en': 'aa'}`. So using `data = load_dataset('wmt17', "zh-en")` to load the wmt17 zh-en dataset raises the following exception:

```
Traceback (most recent call last):
  File "train.py", line 78, in <module>
    data = load_dataset(args.dataset, "zh-en")
  File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1684, in load_dataset
    use_auth_token=use_auth_token,
  File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 705, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1221, in _download_and_prepare
    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
  File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 793, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1215, in _prepare_split
    num_examples, num_bytes = writer.finalize()
  File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 533, in finalize
    self.write_examples_on_file()
  File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 410, in write_examples_on_file
    self.write_batch(batch_examples=batch_examples)
  File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 503, in write_batch
    arrays.append(pa.array(typed_sequence))
  File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
  File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 198, in __arrow_array__
    out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
  File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper
    return func(array, *args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1846, in cast_array_to_feature
    return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
  File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper
    return func(array, *args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1756, in array_cast
    raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
TypeError: Couldn't cast array of type struct<c[hn]: string, en: string, zh: string> to struct<en: string, zh: string>
```

So a workaround for this problem is to fix the original array manually:

```python
import pyarrow as pa  # needed for the snippet below

if 'c[hn]' in str(array.type):
    py_array = array.to_pylist()
    data_list = []
    for vo in py_array:
        tmp = {
            'en': vo['en'],
        }
        if 'zh' not in vo:
            tmp['zh'] = vo['c[hn]']
        else:
            tmp['zh'] = vo['zh']
        data_list.append(tmp)
    array = pa.array(data_list, type=pa.struct([
        pa.field('en', pa.string()),
        pa.field('zh', pa.string()),
    ]))
```

Therefore, a corrected version of the original casia2015 file may need to be uploaded.
{ "avatar_url": "https://avatars.githubusercontent.com/u/85819194?v=4", "events_url": "https://api.github.com/users/winterfell2021/events{/privacy}", "followers_url": "https://api.github.com/users/winterfell2021/followers", "following_url": "https://api.github.com/users/winterfell2021/following{/other_user}", "gists_url": "https://api.github.com/users/winterfell2021/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/winterfell2021", "id": 85819194, "login": "winterfell2021", "node_id": "MDQ6VXNlcjg1ODE5MTk0", "organizations_url": "https://api.github.com/users/winterfell2021/orgs", "received_events_url": "https://api.github.com/users/winterfell2021/received_events", "repos_url": "https://api.github.com/users/winterfell2021/repos", "site_admin": false, "starred_url": "https://api.github.com/users/winterfell2021/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/winterfell2021/subscriptions", "type": "User", "url": "https://api.github.com/users/winterfell2021" }
https://api.github.com/repos/huggingface/datasets/issues/4575/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4575/timeline
closed
false
4,575
null
2022-08-23T10:00:21Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
1,285,380,616
https://api.github.com/repos/huggingface/datasets/issues/4574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4574/events
[]
null
2022-07-21T13:37:30Z
[]
https://github.com/huggingface/datasets/pull/4574
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "After unpinning `s3fs` and pinning `fsspec[http]>=2021.11.1`, the CI installs\r\n- `fsspec-2022.1.0`\r\n- `s3fs-0.5.1`\r\n\r\nand raises the following error:\r\n```\r\n ImportError while loading conftest '/home/runner/work/datasets/datasets/tests/conftest.py'.\r\ntests/conftest.py:13: in <module>\r\n import datasets\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>\r\n from .arrow_dataset import Dataset\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/arrow_dataset.py:62: in <module>\r\n from .arrow_reader import ArrowReader\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/arrow_reader.py:29: in <module>\r\n from .download.download_config import DownloadConfig\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/download/__init__.py:10: in <module>\r\n from .streaming_download_manager import StreamingDownloadManager\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/download/streaming_download_manager.py:20: in <module>\r\n from ..filesystems import COMPRESSION_FILESYSTEMS\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/filesystems/__init__.py:13: in <module>\r\n from .s3filesystem import S3FileSystem # noqa: F401\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py:1: in <module>\r\n import s3fs\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/s3fs/__init__.py:1: in <module>\r\n from .core import S3FileSystem, S3File\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/s3fs/core.py:12: in <module>\r\n from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync\r\nE ImportError: cannot import name 'maybe_sync'\r\n```\r\n\r\nThe installed `s3fs` version is too old. What about pinning a min version?", "Maybe you can try setting the same minimum version as fsspec ? `s3fs>=2021.11.1`", "Yes, I have checked that they both require to have the same version. \r\n\r\nThe issue then was coming from aiobotocore, boto3, botocore. I have changed them from strict to min version requirements.\r\n> s3fs 2021.11.1 depends on aiobotocore~=2.0.1", "I have updated all min versions so that they are compatible one with each other. I'm pushing again...", "Thanks !", "Nice!" ]
Support streaming mlsum dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4574/reactions" }
PR_kwDODunzps46ZOpZ
{ "diff_url": "https://github.com/huggingface/datasets/pull/4574.diff", "html_url": "https://github.com/huggingface/datasets/pull/4574", "merged_at": "2022-07-21T12:40:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/4574.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4574" }
2022-06-27T07:37:03Z
https://api.github.com/repos/huggingface/datasets/issues/4574/comments
Support streaming mlsum dataset. This PR: - pins `fsspec` min version with fixed BlockSizeError: `fsspec[http]>=2021.11.1` - https://github.com/fsspec/filesystem_spec/pull/830 - unpins `s3fs==2021.08.1` to align it with `fsspec` requirement: `s3fs>=2021.11.1` > s3fs 2021.8.1 requires fsspec==2021.08.1 - see discussion: https://github.com/huggingface/datasets/pull/2858/files#r700027326 - updates the following requirements to be compatible with the previous ones and one with each other: - `aiobotocore==1.4.2` to `aiobotocore>=2.0.1` (required by s3fs>=2021.11.1) - `boto3==1.17.106` to `boto3>=1.19.8` (to be compatible with aiobotocore>=2.0.1) - `botocore==1.20.106` to `botocore>=1.22.8` (to be compatible with aiobotocore and boto3) Fix #4572.
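To make the dependency alignment described above concrete, here is a minimal sketch of how such pins might look in a setup.py-style requirements list; the variable name and grouping are assumptions for illustration, not the PR's actual diff.

```python
# Hypothetical excerpt of a setup.py requirements list reflecting the
# mutually compatible minimum versions listed above.
TESTS_REQUIRE = [
    "fsspec[http]>=2021.11.1",  # first release with the BlockSizeError fix
    "s3fs>=2021.11.1",          # must track the same fsspec release series
    "aiobotocore>=2.0.1",       # required by s3fs>=2021.11.1
    "boto3>=1.19.8",            # compatible with aiobotocore>=2.0.1
    "botocore>=1.22.8",         # compatible with aiobotocore and boto3
]
```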
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4574/timeline
closed
false
4,574
null
2022-07-21T12:40:00Z
null
true
1,285,023,629
https://api.github.com/repos/huggingface/datasets/issues/4573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4573/events
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
null
2023-09-24T09:35:07Z
[]
https://github.com/huggingface/datasets/pull/4573
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
Fix evaluation metadata for ncbi_disease
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4573/reactions" }
PR_kwDODunzps46YEEa
{ "diff_url": "https://github.com/huggingface/datasets/pull/4573.diff", "html_url": "https://github.com/huggingface/datasets/pull/4573", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4573.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4573" }
2022-06-26T20:29:32Z
https://api.github.com/repos/huggingface/datasets/issues/4573/comments
This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream.
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4573/timeline
closed
false
4,573
null
2022-09-23T09:38:02Z
null
true
1,285,022,499
https://api.github.com/repos/huggingface/datasets/issues/4572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4572/events
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
null
2022-07-21T12:40:01Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/4572
MEMBER
completed
null
null
[ "Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https://gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..." ]
Dataset Viewer issue for mlsum
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4572/reactions" }
I_kwDODunzps5Ml-Mj
null
2022-06-26T20:24:17Z
https://api.github.com/repos/huggingface/datasets/issues/4572/comments
### Link https://huggingface.co/datasets/mlsum/viewer/de/train ### Description There seems to be a problem with the download / streaming of this dataset: ``` Server error Status code: 400 Exception: BadZipFile Message: File is not a zip file ``` ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4572/timeline
closed
false
4,572
null
2022-07-21T12:40:01Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,284,883,289
https://api.github.com/repos/huggingface/datasets/issues/4571
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4571/events
[]
null
2023-09-25T12:05:18Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/4571
MEMBER
null
null
null
[ "Related to https://github.com/huggingface/datasets/issues/4562#issuecomment-1166911751\r\n\r\nI'll assign @albertvillanova ", "I'm just wondering why we don't have this dataset under:\r\n- the `facebook` namespace\r\n- or the canonical dataset `flores`: why does this only have 2 languages?", "fwiw: the dataset viewer is working. Renaming the issue" ]
move under the facebook org?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4571/reactions" }
I_kwDODunzps5MlcNZ
null
2022-06-26T11:19:09Z
https://api.github.com/repos/huggingface/datasets/issues/4571/comments
### Link https://huggingface.co/datasets/gsarti/flores_101 ### Description It seems like streaming isn't supported for this dataset: ``` Server Error Status code: 400 Exception: NotImplementedError Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. ``` ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4571/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4571/timeline
open
false
4,571
null
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,284,846,168
https://api.github.com/repos/huggingface/datasets/issues/4570
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4570/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-06-30T11:00:47Z
[]
https://github.com/huggingface/datasets/issues/4570
CONTRIBUTOR
completed
null
null
[ "This was silly; I was sure I'd looked for a `contiguous` argument, and was certain there wasn't one the first time I looked :smile:\r\n\r\nSorry about that.", "Hi! You can pass `contiguous=True` to `.shard()` get contiguous shards. More info on this and the default behavior can be found in the [docs](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.shard).\r\n\r\nEDIT: Answered as you closed the thread 😄 ", "Hahaha I'm sorry; my excuse is: it's Sunday. (Which makes me all the more grateful for your response :smiley: ", "@mariosasko Sorry for reviving this, but I was curious as to why `contiguous=False` was the default. This might be a personal bias, but I feel that a user would expect the opposite to be the default. :thinking: ", "This project started as a fork of TFDS, and `contiguous=False` is the default behavior [there](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard)." ]
Dataset sharding non-contiguous?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4570/reactions" }
I_kwDODunzps5MlTJY
null
2022-06-26T08:34:05Z
https://api.github.com/repos/huggingface/datasets/issues/4570/comments
## Describe the bug I'm not sure if this is a bug; more likely normal behavior, but I wanted to double-check. Is it normal that `datasets.shard` does not produce chunks that, when concatenated, reproduce the original ordering of the sharded dataset? This might be related to this pull request (https://github.com/huggingface/datasets/pull/4466), but I have to admit I did not properly look into the changes made. ## Steps to reproduce the bug ```python max_shard_size = convert_file_size_to_int('300MB') dataset_nbytes = dataset.data.nbytes num_shards = int(dataset_nbytes / max_shard_size) + 1 num_shards = max(num_shards, 1) print(f"{num_shards=}") for shard_index in range(num_shards): shard = dataset.shard(num_shards=num_shards, index=shard_index) shard.to_parquet(f"tokenized/tokenized-{shard_index:03d}.parquet") os.listdir('tokenized/') ``` ## Expected results I expected the shards to match the order of the data in the original dataset; i.e., `dataset[10]` being the same as `shard_1[10]`, for example. ## Actual results Only the first element is the same; i.e., `dataset[0]` is the same as `shard_1[0]`. ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31 - Python version: 3.10.4 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki" }
https://api.github.com/repos/huggingface/datasets/issues/4570/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4570/timeline
closed
false
4,570
null
2022-06-26T14:36:20Z
null
false
1,284,833,694
https://api.github.com/repos/huggingface/datasets/issues/4569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4569/events
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
null
2022-06-27T06:37:48Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/4569
MEMBER
completed
null
null
[ "Hi @lewtun, thanks for reporting.\r\n\r\nI have checked locally and refreshed the preview and it seems working smooth now:\r\n```python\r\nIn [8]: ds\r\nOut[8]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nCould you confirm? ", "Thanks @albertvillanova - it is indeed working now (not sure what caused the error in the first place). Closing this :)" ]
Dataset Viewer issue for sst2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4569/reactions" }
I_kwDODunzps5MlQGe
null
2022-06-26T07:32:54Z
https://api.github.com/repos/huggingface/datasets/issues/4569/comments
### Link https://huggingface.co/datasets/sst2 ### Description Not sure what is causing this; however, it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problems): ``` Status code: 400 Exception: Exception Message: Give up after 5 attempts with ConnectionError ``` ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4569/timeline
closed
false
4,569
null
2022-06-27T06:37:48Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,284,655,624
https://api.github.com/repos/huggingface/datasets/issues/4568
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4568/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-07-04T14:29:40Z
[]
https://github.com/huggingface/datasets/issues/4568
CONTRIBUTOR
completed
null
null
[ "Hi,\r\nCould you tell us how you are running this code?\r\nI tested on my machine (M1 Mac). And it is running fine both on and off internet.\r\n\r\n<img width=\"1033\" alt=\"Screen Shot 2022-07-03 at 1 32 25 AM\" src=\"https://user-images.githubusercontent.com/8711912/177026364-4ad7cedb-e524-4513-97f7-7961bbb34c90.png\">\r\nTested on both stable and dev version. ", "Sure, I was running it on a Linux machine.\r\nI found that if I turn the Internet off, it would still try to make a HTTPS call which would slow down the cache loading. If you can't reproduce then we can close the issue.", "Hi @Muennighoff! You can set the env variable `HF_DATASETS_OFFLINE` to `1` to avoid this behavior in offline mode. More info is available [here](https://huggingface.co/docs/datasets/master/en/loading#offline)." ]
XNLI cache reload is very slow
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4568/reactions" }
I_kwDODunzps5MkkoI
null
2022-06-25T16:43:56Z
https://api.github.com/repos/huggingface/datasets/issues/4568/comments
### Reproduce Using `2.3.3.dev0` `from datasets import load_dataset` `load_dataset("xnli", "en")` Turn off Internet `load_dataset("xnli", "en")` I cancelled the second `load_dataset` eventually because it took too long. It would be great to have something to specify e.g. `only_load_from_cache` and avoid the library trying to download when there is no Internet. If I leave it running, it works, but it takes much longer than when there is Internet. I would expect loading from cache to take the same amount of time regardless of whether there is Internet. ``` --------------------------------------------------------------------------- gaierror Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self) 174 conn = connection.create_connection( --> 175 (self._dns_host, self.port), self.timeout, **extra_kw 176 ) /opt/conda/lib/python3.7/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 71 ---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 73 af, socktype, proto, canonname, sa = res /opt/conda/lib/python3.7/socket.py in getaddrinfo(host, port, family, type, proto, flags) 751 addrlist = [] --> 752 for res in _socket.getaddrinfo(host, port, family, type, proto, flags): 753 af, socktype, proto, canonname, sa = res gaierror: [Errno -3] Temporary failure in name resolution During handling of the above exception, another exception occurred: KeyboardInterrupt Traceback (most recent call last) /tmp/ipykernel_33/3594208039.py in <module> ----> 1 load_dataset("xnli", "en") /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1673 revision=revision, 1674 use_auth_token=use_auth_token, -> 1675 **config_kwargs, 1676 ) 1677 /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1494 download_mode=download_mode, 1495 data_dir=data_dir, -> 1496 data_files=data_files, 1497 ) 1498 /opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1182 download_config=download_config, 1183 download_mode=download_mode, -> 1184 dynamic_modules_path=dynamic_modules_path, 1185 ).get_module() 1186 elif path.count("/") == 1: # community dataset on the Hub /opt/conda/lib/python3.7/site-packages/datasets/load.py in __init__(self, name, revision, download_config, download_mode, dynamic_modules_path) 506 self.dynamic_modules_path = dynamic_modules_path 507 assert self.name.count("/") == 0 --> 508 increase_load_count(name, resource_type="dataset") 509 510 def download_loading_script(self, revision: Optional[str]) -> str: /opt/conda/lib/python3.7/site-packages/datasets/load.py in increase_load_count(name, resource_type) 166 if not config.HF_DATASETS_OFFLINE and config.HF_UPDATE_DOWNLOAD_COUNTS: 167 try: --> 168 head_hf_s3(name, filename=name + ".py", dataset=(resource_type == "dataset")) 169 except Exception: 170 pass /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in head_hf_s3(identifier, filename, use_cdn, dataset, max_retries) 93 return http_head( 94 hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset), ---> 95 max_retries=max_retries, 96 ) 97 /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries) 445 allow_redirects=allow_redirects, 446 timeout=timeout, --> 447 max_retries=max_retries, 448 ) 449 return response /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params) 366 tries += 1 367 try: --> 368 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) 369 success = True 370 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err: /opt/conda/lib/python3.7/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 /opt/conda/lib/python3.7/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 527 } 528 send_kwargs.update(settings) --> 529 resp = self.send(prep, **send_kwargs) 530 531 return resp /opt/conda/lib/python3.7/site-packages/requests/sessions.py in send(self, request, **kwargs) 643 644 # Send the request --> 645 r = adapter.send(request, **kwargs) 646 647 # Total elapsed time of the request (approximately) /opt/conda/lib/python3.7/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 decode_content=False, 449 retries=self.max_retries, --> 450 timeout=timeout 451 ) 452 /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 708 body=body, 709 headers=headers, --> 710 chunked=chunked, 711 ) 712 /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 384 # Trigger any extra validation we need to do. 385 try: --> 386 self._validate_conn(conn) 387 except (SocketTimeout, BaseSSLError) as e: 388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 1038 # Force connect early to allow us to validate the connection. 1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` -> 1040 conn.connect() 1041 1042 if not conn.is_verified: /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in connect(self) 356 def connect(self): 357 # Add certificate verification --> 358 self.sock = conn = self._new_conn() 359 hostname = self.host 360 tls_in_tls = False /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self) 173 try: 174 conn = connection.create_connection( --> 175 (self._dns_host, self.port), self.timeout, **extra_kw 176 ) 177 KeyboardInterrupt: ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4", "events_url": "https://api.github.com/users/Muennighoff/events{/privacy}", "followers_url": "https://api.github.com/users/Muennighoff/followers", "following_url": "https://api.github.com/users/Muennighoff/following{/other_user}", "gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Muennighoff", "id": 62820084, "login": "Muennighoff", "node_id": "MDQ6VXNlcjYyODIwMDg0", "organizations_url": "https://api.github.com/users/Muennighoff/orgs", "received_events_url": "https://api.github.com/users/Muennighoff/received_events", "repos_url": "https://api.github.com/users/Muennighoff/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions", "type": "User", "url": "https://api.github.com/users/Muennighoff" }
https://api.github.com/repos/huggingface/datasets/issues/4568/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4568/timeline
closed
false
4,568
null
2022-07-04T14:29:40Z
null
false
1,284,528,474
https://api.github.com/repos/huggingface/datasets/issues/4567
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4567/events
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
null
2023-09-24T09:35:22Z
[]
https://github.com/huggingface/datasets/pull/4567
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
Add evaluation data for amazon_reviews_multi
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4567/reactions" }
PR_kwDODunzps46Wh0-
{ "diff_url": "https://github.com/huggingface/datasets/pull/4567.diff", "html_url": "https://github.com/huggingface/datasets/pull/4567", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4567.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4567" }
2022-06-25T09:40:52Z
https://api.github.com/repos/huggingface/datasets/issues/4567/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4567/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4567/timeline
closed
false
4,567
null
2022-09-23T09:37:23Z
null
true
1,284,397,594
https://api.github.com/repos/huggingface/datasets/issues/4566
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4566/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2023-01-24T16:33:40Z
[]
https://github.com/huggingface/datasets/issues/4566
NONE
completed
null
null
[ "Hi! This is indeed the link the docstring should point to. Are you interested in submitting a PR to fix this?", "https://github.com/huggingface/datasets/blame/master/docs/source/cache.mdx#L93\r\n\r\nThere seems already an anchor here. Somehow it doesn't work. I am not very familiar with how this online documentation works." ]
Document link #load_dataset_enhancing_performance points to nowhere
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4566/reactions" }
I_kwDODunzps5Mjloa
null
2022-06-25T01:18:19Z
https://api.github.com/repos/huggingface/datasets/issues/4566/comments
## Describe the bug ![image](https://user-images.githubusercontent.com/11674033/175752806-5b066b92-9d28-4771-9112-5c8606f07741.png) The [load_dataset_enhancing_performance](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#load_dataset_enhancing_performance) link [here](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.load_from_disk.keep_in_memory) points to nowhere; I guess it should point to https://huggingface.co/docs/datasets/v2.3.2/en/cache#improve-performance?
{ "avatar_url": "https://avatars.githubusercontent.com/u/11674033?v=4", "events_url": "https://api.github.com/users/subercui/events{/privacy}", "followers_url": "https://api.github.com/users/subercui/followers", "following_url": "https://api.github.com/users/subercui/following{/other_user}", "gists_url": "https://api.github.com/users/subercui/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/subercui", "id": 11674033, "login": "subercui", "node_id": "MDQ6VXNlcjExNjc0MDMz", "organizations_url": "https://api.github.com/users/subercui/orgs", "received_events_url": "https://api.github.com/users/subercui/received_events", "repos_url": "https://api.github.com/users/subercui/repos", "site_admin": false, "starred_url": "https://api.github.com/users/subercui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/subercui/subscriptions", "type": "User", "url": "https://api.github.com/users/subercui" }
https://api.github.com/repos/huggingface/datasets/issues/4566/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4566/timeline
closed
false
4,566
null
2023-01-24T16:33:40Z
null
false
1,284,141,666
https://api.github.com/repos/huggingface/datasets/issues/4565
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4565/events
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
null
2022-07-06T19:03:02Z
[]
https://github.com/huggingface/datasets/issues/4565
NONE
completed
null
null
[ "I will add this directly on the hub (same as #4486)—in https://huggingface.co/lapix" ]
Add UFSC OCPap dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4565/reactions" }
I_kwDODunzps5MinJi
null
2022-06-24T20:07:54Z
https://api.github.com/repos/huggingface/datasets/issues/4565/comments
## Adding a Dataset - **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4) - **Description:** The UFSC OCPap dataset comprises 9,797 labeled images of 1200x1600 pixels acquired from 5 slides of cancer diagnosed and 3 healthy of oral brush samples, from distinct patients. - **Paper:** https://dx.doi.org/10.2139/ssrn.4119212 - **Data:** https://data.mendeley.com/datasets/dr7ydy9xbk/1 - **Motivation:** real data of pap stained oral cytology samples Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/johnnv1", "id": 20444345, "login": "johnnv1", "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "repos_url": "https://api.github.com/users/johnnv1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "type": "User", "url": "https://api.github.com/users/johnnv1" }
https://api.github.com/repos/huggingface/datasets/issues/4565/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4565/timeline
closed
false
4,565
null
2022-07-06T19:03:02Z
null
false
1,283,932,333
https://api.github.com/repos/huggingface/datasets/issues/4564
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4564/events
[]
null
2022-07-06T09:34:48Z
[]
https://github.com/huggingface/datasets/pull/4564
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Support streaming bookcorpus dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4564/reactions" }
PR_kwDODunzps46UqUN
{ "diff_url": "https://github.com/huggingface/datasets/pull/4564.diff", "html_url": "https://github.com/huggingface/datasets/pull/4564", "merged_at": "2022-07-06T09:23:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4564.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4564" }
2022-06-24T16:13:39Z
https://api.github.com/repos/huggingface/datasets/issues/4564/comments
Support streaming bookcorpus dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4564/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4564/timeline
closed
false
4,564
null
2022-07-06T09:23:04Z
null
true
1,283,914,383
https://api.github.com/repos/huggingface/datasets/issues/4563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4563/events
[]
null
2022-06-24T16:54:57Z
[]
https://github.com/huggingface/datasets/pull/4563
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Support streaming allocine dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4563/reactions" }
PR_kwDODunzps46UmZQ
{ "diff_url": "https://github.com/huggingface/datasets/pull/4563.diff", "html_url": "https://github.com/huggingface/datasets/pull/4563", "merged_at": "2022-06-24T16:44:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/4563.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4563" }
2022-06-24T15:55:03Z
https://api.github.com/repos/huggingface/datasets/issues/4563/comments
Support streaming allocine dataset. Fix #4562.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://api.github.com/repos/huggingface/datasets/issues/4563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4563/timeline
closed
false
4,563
null
2022-06-24T16:44:41Z
null
true
1,283,779,557
https://api.github.com/repos/huggingface/datasets/issues/4562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4562/events
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
null
2022-06-27T06:39:32Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
https://github.com/huggingface/datasets/issues/4562
MEMBER
completed
null
null
[ "I removed my assignment as @huggingface/datasets should be able to answer better than me\r\n", "Let me have a look...", "Thanks for the quick fix @albertvillanova ", "Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content *sequentially* (no random access).", "> Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content _sequentially_ (no random access).\r\n\r\nAh thanks for the clarification! I'll look out for this next time and implement the fix myself :)" ]
Dataset Viewer issue for allocine
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4562/reactions" }
I_kwDODunzps5MhOvl
null
2022-06-24T13:50:38Z
https://api.github.com/repos/huggingface/datasets/issues/4562/comments
### Link https://huggingface.co/datasets/allocine ### Description Not sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed: ``` Status code: 400 Exception: AttributeError Message: 'TarContainedFile' object has no attribute 'readable' ``` ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4562/timeline
closed
false
4,562
null
2022-06-24T16:44:41Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
false
1,283,624,242
https://api.github.com/repos/huggingface/datasets/issues/4561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4561/events
[]
null
2022-06-27T09:37:55Z
[]
https://github.com/huggingface/datasets/pull/4561
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Add evaluation data to acronym_identification
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4561/reactions" }
PR_kwDODunzps46TnVe
{ "diff_url": "https://github.com/huggingface/datasets/pull/4561.diff", "html_url": "https://github.com/huggingface/datasets/pull/4561", "merged_at": "2022-06-27T08:49:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/4561.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4561" }
2022-06-24T11:17:33Z
https://api.github.com/repos/huggingface/datasets/issues/4561/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4561/timeline
closed
false
4,561
null
2022-06-27T08:49:22Z
null
true
1,283,558,873
https://api.github.com/repos/huggingface/datasets/issues/4560
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4560/events
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
null
2023-09-24T09:35:32Z
[]
https://github.com/huggingface/datasets/pull/4560
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
Add evaluation metadata to imagenet-1k
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4560/reactions" }
PR_kwDODunzps46TY9n
{ "diff_url": "https://github.com/huggingface/datasets/pull/4560.diff", "html_url": "https://github.com/huggingface/datasets/pull/4560", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4560.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4560" }
2022-06-24T10:12:41Z
https://api.github.com/repos/huggingface/datasets/issues/4560/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4560/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4560/timeline
closed
false
4,560
null
2022-09-23T09:37:03Z
null
true
1,283,544,937
https://api.github.com/repos/huggingface/datasets/issues/4559
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4559/events
[]
null
2022-06-24T10:54:28Z
[]
https://github.com/huggingface/datasets/pull/4559
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Add action names in schema_guided_dstc8 dataset card
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4559/reactions" }
PR_kwDODunzps46TV7-
{ "diff_url": "https://github.com/huggingface/datasets/pull/4559.diff", "html_url": "https://github.com/huggingface/datasets/pull/4559", "merged_at": "2022-06-24T10:43:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/4559.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4559" }
2022-06-24T10:00:01Z
https://api.github.com/repos/huggingface/datasets/issues/4559/comments
As asked in https://huggingface.co/datasets/schema_guided_dstc8/discussions/1, I added the action names to the dataset card.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4559/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4559/timeline
closed
false
4,559
null
2022-06-24T10:43:47Z
null
true
1,283,479,650
https://api.github.com/repos/huggingface/datasets/issues/4558
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4558/events
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
null
2023-09-24T09:35:39Z
[]
https://github.com/huggingface/datasets/pull/4558
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4558). All of your documentation changes will be reflected on that endpoint.", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
Add evaluation metadata to wmt14
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4558/reactions" }
PR_kwDODunzps46THl_
{ "diff_url": "https://github.com/huggingface/datasets/pull/4558.diff", "html_url": "https://github.com/huggingface/datasets/pull/4558", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4558.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4558" }
2022-06-24T09:08:54Z
https://api.github.com/repos/huggingface/datasets/issues/4558/comments
null
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4558/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4558/timeline
closed
false
4,558
null
2022-09-23T09:36:50Z
null
true
1,283,473,889
https://api.github.com/repos/huggingface/datasets/issues/4557
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4557/events
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
null
2023-09-24T09:35:49Z
[]
https://github.com/huggingface/datasets/pull/4557
MEMBER
null
false
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4557). All of your documentation changes will be reflected on that endpoint.", "> Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right?\r\n\r\nyes :)", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
Add evaluation metadata to wmt16
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4557/reactions" }
PR_kwDODunzps46TGZK
{ "diff_url": "https://github.com/huggingface/datasets/pull/4557.diff", "html_url": "https://github.com/huggingface/datasets/pull/4557", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4557.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4557" }
2022-06-24T09:04:23Z
https://api.github.com/repos/huggingface/datasets/issues/4557/comments
Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right?
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4557/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4557/timeline
closed
false
4,557
null
2022-09-23T09:36:32Z
null
true
1,283,462,881
https://api.github.com/repos/huggingface/datasets/issues/4556
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4556/events
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
null
2022-06-24T09:50:39Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
https://github.com/huggingface/datasets/issues/4556
MEMBER
completed
null
null
[ "Fixed, thanks." ]
Dataset Viewer issue for conll2003
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4556/reactions" }
I_kwDODunzps5MgBbh
null
2022-06-24T08:55:18Z
https://api.github.com/repos/huggingface/datasets/issues/4556/comments
### Link https://huggingface.co/datasets/conll2003/viewer/conll2003/test ### Description Seems like a cache problem with this config / split: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/conll2003/__init__.py' ``` ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4556/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4556/timeline
closed
false
4,556
null
2022-06-24T09:50:39Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
false
1,283,451,651
https://api.github.com/repos/huggingface/datasets/issues/4555
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4555/events
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
null
2022-06-24T09:50:45Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
https://github.com/huggingface/datasets/issues/4555
MEMBER
completed
null
null
[ "Fixed, thanks." ]
Dataset Viewer issue for xtreme
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4555/reactions" }
I_kwDODunzps5Mf-sD
null
2022-06-24T08:46:08Z
https://api.github.com/repos/huggingface/datasets/issues/4555/comments
### Link https://huggingface.co/datasets/xtreme/viewer/PAN-X.de/test ### Description There seems to be a problem with the cache of this config / split: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/xtreme/349258adc25bb45e47de193222f95e68a44f7a7ab53c4283b3f007208a11bf7e/xtreme.py' ``` ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/4555/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4555/timeline
closed
false
4,555
null
2022-06-24T09:50:45Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
false
1,283,369,453
https://api.github.com/repos/huggingface/datasets/issues/4554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4554/events
[]
null
2022-07-08T15:39:20Z
[]
https://github.com/huggingface/datasets/pull/4554
CONTRIBUTOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Fix WMT dataset loading issue and docs update (Re-opened)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4554/reactions" }
PR_kwDODunzps46Sv_f
{ "diff_url": "https://github.com/huggingface/datasets/pull/4554.diff", "html_url": "https://github.com/huggingface/datasets/pull/4554", "merged_at": "2022-07-08T15:27:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4554.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4554" }
2022-06-24T07:26:16Z
https://api.github.com/repos/huggingface/datasets/issues/4554/comments
This PR is a fix for #4354. Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`, and the READMEs are updated for the corresponding datasets. Let me know if any additional changes are required. Thanks
{ "avatar_url": "https://avatars.githubusercontent.com/u/8711912?v=4", "events_url": "https://api.github.com/users/khushmeeet/events{/privacy}", "followers_url": "https://api.github.com/users/khushmeeet/followers", "following_url": "https://api.github.com/users/khushmeeet/following{/other_user}", "gists_url": "https://api.github.com/users/khushmeeet/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/khushmeeet", "id": 8711912, "login": "khushmeeet", "node_id": "MDQ6VXNlcjg3MTE5MTI=", "organizations_url": "https://api.github.com/users/khushmeeet/orgs", "received_events_url": "https://api.github.com/users/khushmeeet/received_events", "repos_url": "https://api.github.com/users/khushmeeet/repos", "site_admin": false, "starred_url": "https://api.github.com/users/khushmeeet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khushmeeet/subscriptions", "type": "User", "url": "https://api.github.com/users/khushmeeet" }
https://api.github.com/repos/huggingface/datasets/issues/4554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4554/timeline
closed
false
4,554
null
2022-07-08T15:27:44Z
null
true
1,282,779,560
https://api.github.com/repos/huggingface/datasets/issues/4553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4553/events
[]
null
2022-07-04T19:00:13Z
[]
https://github.com/huggingface/datasets/pull/4553
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq Rebasing fixed the test failures, so this should be ready to review now! There's still a failure on Win but it seems unrelated.", "Gentle ping @lhoestq ! This is a simple fix (dropping columns after loading a batch from the dataset rather than with `.remove_columns()` to make sure we don't break transforms), and tests are green so we're ready for review!", "@lhoestq Test is in!" ]
Stop dropping columns in to_tf_dataset() before we load batches
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4553/reactions" }
PR_kwDODunzps46Q1q7
{ "diff_url": "https://github.com/huggingface/datasets/pull/4553.diff", "html_url": "https://github.com/huggingface/datasets/pull/4553", "merged_at": "2022-07-04T18:49:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/4553.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4553" }
2022-06-23T18:21:05Z
https://api.github.com/repos/huggingface/datasets/issues/4553/comments
`to_tf_dataset()` dropped unnecessary columns before loading batches from the dataset, but this caused problems when using a transform, because the dropped columns might be needed to compute the transform. Since there's no reliable way to check which columns the transform might need, we skip dropping columns and instead drop keys from the batch after we load it. cc @amyeroberts and https://github.com/huggingface/notebooks/pull/202
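To illustrate the approach described above, here is a minimal sketch; it is not the actual `to_tf_dataset()` code, and the toy dataset and `requested_columns` list are assumptions for illustration:

```python
from datasets import Dataset

# Toy dataset standing in for a real one (illustrative only).
ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1], "extra": [1, 2]})
requested_columns = ["text", "label"]

# Load the full batch first, so a transform could still see every column
# it might need...
batch = ds[0:2]
# ...then drop the keys that were not requested, instead of calling
# ds.remove_columns("extra") before loading.
batch = {key: value for key, value in batch.items() if key in requested_columns}
print(batch)  # {'text': ['a', 'b'], 'label': [0, 1]}
```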
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1" }
https://api.github.com/repos/huggingface/datasets/issues/4553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4553/timeline
closed
false
4,553
null
2022-07-04T18:49:01Z
null
true
1,282,615,646
https://api.github.com/repos/huggingface/datasets/issues/4552
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4552/events
[]
null
2022-06-26T15:49:46Z
[]
https://github.com/huggingface/datasets/pull/4552
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks ! I updated the two remaining files" ]
Tell users to upload on the hub directly
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 3, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/4552/reactions" }
PR_kwDODunzps46QSHV
{ "diff_url": "https://github.com/huggingface/datasets/pull/4552.diff", "html_url": "https://github.com/huggingface/datasets/pull/4552", "merged_at": "2022-06-26T15:39:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/4552.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4552" }
2022-06-23T15:47:52Z
https://api.github.com/repos/huggingface/datasets/issues/4552/comments
As noted in https://github.com/huggingface/datasets/pull/4534, it is still not clear that adding datasets directly on the Hugging Face Hub instead of GitHub is the recommended approach, so I updated some docs. Moreover, since users won't be able to get reviews from us on the Hub, I added a paragraph to tell users that they can open a discussion and tag `datasets` maintainers for reviews. Finally, I removed the _previous good reasons_ to add a dataset on GitHub to keep only this one: > In some rare cases it makes more sense to open a PR on GitHub. For example when you are not the author of the dataset and there is no clear organization / namespace that you can put the dataset under. Does this sound good to you @albertvillanova @julien-c ?
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4552/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4552/timeline
closed
false
4,552
null
2022-06-26T15:39:11Z
null
true
1,282,534,807
https://api.github.com/repos/huggingface/datasets/issues/4551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4551/events
[]
null
2022-06-30T14:49:20Z
[]
https://github.com/huggingface/datasets/pull/4551
COLLABORATOR
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm aware of this behavior, which is tricky to solve due to fsspec's hidden file handling (see https://github.com/huggingface/datasets/issues/4115#issuecomment-1108819538). I've tested some regex patterns to address this, and they seem to work (will push them on Monday; btw they don't break any of fsspec's tests, so maybe we can contribute this as an enhancement to them). Also, perhaps we should include the files starting with `__` in the results again (we hadn't had issues with this pattern before). WDYT?", "I see. Feel free to merge this one if it's good for you btw :)\r\n\r\n> Also, perhaps we should include the files starting with __ in the results again (we hadn't had issues with this pattern before)\r\n\r\nThe point was mainly to ignore `__pycache__` directories for example. Also also for consistency with the iter_files/iter_archive which are already ignoring them", "Very elegant solution! Feel free to merge if the CI is green after adding the tests.", "CI failure is unrelated to this PR" ]
Perform hidden file check on relative data file path
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4551/reactions" }
PR_kwDODunzps46QAV-
{ "diff_url": "https://github.com/huggingface/datasets/pull/4551.diff", "html_url": "https://github.com/huggingface/datasets/pull/4551", "merged_at": "2022-06-30T14:38:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/4551.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4551" }
2022-06-23T14:49:11Z
https://api.github.com/repos/huggingface/datasets/issues/4551/comments
Fix #4549
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://api.github.com/repos/huggingface/datasets/issues/4551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4551/timeline
closed
false
4,551
null
2022-06-30T14:38:18Z
null
true
1,282,374,441
https://api.github.com/repos/huggingface/datasets/issues/4550
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4550/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-06-23T13:47:05Z
[]
https://github.com/huggingface/datasets/issues/4550
NONE
completed
null
null
[ "Thanks for reporting, @Muhtasham.\r\n\r\nIndeed IMDB dataset is not accessible from yesterday, because the data is hosted on the data owners servers at Stanford (http://ai.stanford.edu/) and these are down due to a power outage originated by a fire: https://twitter.com/StanfordAILab/status/1539472302399623170?s=20&t=1HU1hrtaXprtn14U61P55w\r\n\r\nAs a temporary workaroud, you can load the IMDB dataset with this tweak:\r\n```python\r\nds = load_dataset(\"imdb\", revision=\"tmp-fix-imdb\")\r\n```\r\n" ]
imdb source error
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4550/reactions" }
I_kwDODunzps5Mb3sp
null
2022-06-23T13:02:52Z
https://api.github.com/repos/huggingface/datasets/issues/4550/comments
## Describe the bug imdb dataset not loading ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("imdb") ``` ## Expected results ## Actual results ```bash 06/23/2022 14:45:18 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and preparing it from source 06/23/2022 14:46:34 - INFO - datasets.utils.file_utils - HEAD request to http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz timed out, retrying... [1.0] ..... ConnectionError: Couldn't reach http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz (ConnectTimeout(MaxRetryError("HTTPConnectionPool(host='ai.stanford.edu', port=80): Max retries exceeded with url: /~amaas/data/sentiment/aclImdb_v1.tar.gz (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f2d750cf690>, 'Connection to ai.stanford.edu timed out. (connect timeout=100)'))"))) ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/20128202?v=4", "events_url": "https://api.github.com/users/Muhtasham/events{/privacy}", "followers_url": "https://api.github.com/users/Muhtasham/followers", "following_url": "https://api.github.com/users/Muhtasham/following{/other_user}", "gists_url": "https://api.github.com/users/Muhtasham/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Muhtasham", "id": 20128202, "login": "Muhtasham", "node_id": "MDQ6VXNlcjIwMTI4MjAy", "organizations_url": "https://api.github.com/users/Muhtasham/orgs", "received_events_url": "https://api.github.com/users/Muhtasham/received_events", "repos_url": "https://api.github.com/users/Muhtasham/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Muhtasham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muhtasham/subscriptions", "type": "User", "url": "https://api.github.com/users/Muhtasham" }
https://api.github.com/repos/huggingface/datasets/issues/4550/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4550/timeline
closed
false
4,550
null
2022-06-23T13:47:04Z
null
false
1,282,312,975
https://api.github.com/repos/huggingface/datasets/issues/4549
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4549/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-06-30T14:38:18Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
https://github.com/huggingface/datasets/issues/4549
MEMBER
completed
null
null
[ "I have consistently experienced this bug on GitHub actions when bumping to `2.3.2`", "We're working on a fix ;)" ]
FileNotFoundError when passing a data_file inside a directory starting with double underscores
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4549/reactions" }
I_kwDODunzps5MbosP
null
2022-06-23T12:19:24Z
https://api.github.com/repos/huggingface/datasets/issues/4549/comments
Bug experienced in the `accelerate` CI: https://github.com/huggingface/accelerate/runs/7016055148?check_suite_focus=true This is related to https://github.com/huggingface/datasets/pull/4505 and the changes from https://github.com/huggingface/datasets/pull/4412
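For context, a minimal hypothetical reproduction of the failure mode; the directory and file names, and the CSV content, are made up for illustration:

```python
import os
from datasets import load_dataset

# A data file inside a directory whose name starts with double underscores
# (names here are assumptions, not taken from the accelerate CI).
os.makedirs("__my_data__", exist_ok=True)
with open("__my_data__/data.csv", "w") as f:
    f.write("col\n1\n2\n")

# At the time of the issue this raised FileNotFoundError, because the
# hidden-file check matched the "__*" pattern against the containing path
# rather than the relative data file path, which is what #4551 fixed.
ds = load_dataset("csv", data_files="__my_data__/data.csv")
```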
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4549/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4549/timeline
closed
false
4,549
null
2022-06-30T14:38:18Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
false
1,282,218,096
https://api.github.com/repos/huggingface/datasets/issues/4548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4548/events
[]
null
2022-06-30T10:15:32Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
https://github.com/huggingface/datasets/issues/4548
CONTRIBUTOR
completed
null
null
[ "I agree it would be nice to support this. It doesn't fit really well in the current data_files.py, where files of each splits are separated in different folder though, maybe we have to modify a bit the logic here. \r\n\r\nOne idea would be to extend `get_patterns_in_dataset_repository` and `get_patterns_locally` to additionally check for `metadata.json`, but feel free to comment if you have better ideas (I feel like we're reaching the limits of what the current implementation IMO, so we could think of a different way of resolving the data files if necessary)" ]
Metadata.jsonl for Imagefolder is ignored if it's in a parent directory of the split directories or the files don't have a "{split}_" prefix
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4548/reactions" }
I_kwDODunzps5MbRhw
null
2022-06-23T10:58:57Z
https://api.github.com/repos/huggingface/datasets/issues/4548/comments
If data contains a single `metadata.jsonl` file for several splits, it won't be included in a dataset's `data_files` and will therefore be ignored. This happens when a directory is structured as follows: ``` train/ file_1.jpg file_2.jpg test/ file_3.jpg file_4.jpg metadata.jsonl ``` or as follows: ``` train_file_1.jpg train_file_2.jpg test_file_3.jpg test_file_4.jpg metadata.jsonl ``` The same happens for HF repos, because it's ignored by the patterns [here](https://github.com/huggingface/datasets/blob/master/src/datasets/data_files.py#L29) @lhoestq @mariosasko Do you think it's better to add this functionality in `data_files.py` or just specifically in imagefolder/audiofolder code? In `data_files.py` it would be more general, but I don't know if there are any other cases when that might be needed.
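For reference, a short sketch of how such a folder would be loaded; the `data_dir` path is an assumption, and at the time of the issue the shared `metadata.jsonl` was not picked up:

```python
from datasets import load_dataset

# "imagefolder" is the packaged image loader. With the layout above, the
# single top-level metadata.jsonl would ideally be applied to both the
# "train" and "test" splits; that is the behaviour requested in this issue.
ds = load_dataset("imagefolder", data_dir="path/to/data")
print(ds)
```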
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
https://api.github.com/repos/huggingface/datasets/issues/4548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4548/timeline
closed
false
4,548
null
2022-06-30T10:15:32Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
false
1,282,160,517
https://api.github.com/repos/huggingface/datasets/issues/4547
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4547/events
[]
null
2022-06-28T14:10:57Z
[]
https://github.com/huggingface/datasets/pull/4547
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "There is a CI failure only related to the missing content of the universal_dependencies dataset card, we can ignore this failure in this PR", "good catch, I thought I resolved them all sorry", "Alright it should be good now" ]
[CI] Fix some warnings
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4547/reactions" }
PR_kwDODunzps46Ot5u
{ "diff_url": "https://github.com/huggingface/datasets/pull/4547.diff", "html_url": "https://github.com/huggingface/datasets/pull/4547", "merged_at": "2022-06-28T13:59:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/4547.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4547" }
2022-06-23T10:10:49Z
https://api.github.com/repos/huggingface/datasets/issues/4547/comments
There are some annoying warnings in the CI; I tried to remove most of them.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4547/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4547/timeline
closed
false
4,547
null
2022-06-28T13:59:54Z
null
true
1,282,093,288
https://api.github.com/repos/huggingface/datasets/issues/4546
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4546/events
[]
null
2022-06-23T10:24:16Z
[]
https://github.com/huggingface/datasets/pull/4546
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
[CI] fixing seqeval install in ci by pinning setuptools-scm
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4546/reactions" }
PR_kwDODunzps46Oe_K
{ "diff_url": "https://github.com/huggingface/datasets/pull/4546.diff", "html_url": "https://github.com/huggingface/datasets/pull/4546", "merged_at": "2022-06-23T10:13:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/4546.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4546" }
2022-06-23T09:24:37Z
https://api.github.com/repos/huggingface/datasets/issues/4546/comments
The latest setuptools-scm version supported on Python 3.6 is 6.4.2. However, for some reason CircleCI has version 7, which doesn't work. I fixed this by pinning the version of setuptools-scm in the CircleCI job. Fix https://github.com/huggingface/datasets/issues/4544
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4546/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4546/timeline
closed
false
4,546
null
2022-06-23T10:13:44Z
null
true
1,280,899,028
https://api.github.com/repos/huggingface/datasets/issues/4545
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4545/events
[]
null
2022-06-28T09:37:06Z
[]
https://github.com/huggingface/datasets/pull/4545
CONTRIBUTOR
null
false
null
[ "> Nice thanks !\r\n> \r\n> After your changes feel free to mark this PR as \"ready for review\" ;)\r\n\r\nMarking PR ready for review.\r\n\r\n@lhoestq Let me know if there is anything else required or if we are good to go ahead and merge.", "_The documentation is not available anymore as the PR was closed or merged._" ]
Make DuplicateKeysError more user friendly [For Issue #2556]
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4545/reactions" }
PR_kwDODunzps46KV-y
{ "diff_url": "https://github.com/huggingface/datasets/pull/4545.diff", "html_url": "https://github.com/huggingface/datasets/pull/4545", "merged_at": "2022-06-28T09:26:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4545.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4545" }
2022-06-22T21:01:34Z
https://api.github.com/repos/huggingface/datasets/issues/4545/comments
# What does this PR do? ## Summary *The `DuplicateKeysError` does not provide any information regarding the examples which have the same key.* *This information is very helpful for debugging the dataset generator script.* ## Additions - ## Changes - Changed `DuplicateKeysError Class` in `src/datasets/keyhash.py` to add the current index and duplicate_key_indices to the error message. - Changed `check_duplicate_keys` function in `src/datasets/arrow_writer.py` to find indices of examples with duplicate hash if duplicate keys are found. ## Deletions - ## To do : - [x] Find way to find and print path `<Path to Dataset>` in Error message ## Issues Addressed : Fixes #2556
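As a rough illustration of the described change, a self-contained sketch; this is assumed for illustration and is not the library's actual `check_duplicate_keys` implementation:

```python
def check_duplicate_keys(keys):
    """Hypothetical helper mirroring the PR's idea: on a duplicate key,
    report the current index and the index of the first occurrence."""
    seen = {}
    for index, key in enumerate(keys):
        if key in seen:
            raise ValueError(
                f"Duplicate key '{key}' found at index {index}; "
                f"first seen at index {seen[key]}."
            )
        seen[key] = index

check_duplicate_keys(["id-0", "id-1", "id-0"])
# ValueError: Duplicate key 'id-0' found at index 2; first seen at index 0.
```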
{ "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VijayKalmath", "id": 20517962, "login": "VijayKalmath", "node_id": "MDQ6VXNlcjIwNTE3OTYy", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "type": "User", "url": "https://api.github.com/users/VijayKalmath" }
https://api.github.com/repos/huggingface/datasets/issues/4545/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4545/timeline
closed
false
4,545
null
2022-06-28T09:26:04Z
null
true
1,280,500,340
https://api.github.com/repos/huggingface/datasets/issues/4544
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4544/events
[]
null
2022-06-23T10:13:44Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/4544
MEMBER
completed
null
null
[]
[CI] seqeval installation fails sometimes on python 3.6
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4544/reactions" }
I_kwDODunzps5MUuJ0
null
2022-06-22T16:35:23Z
https://api.github.com/repos/huggingface/datasets/issues/4544/comments
The CI sometimes fails to install seqeval, which causes the `seqeval` metric tests to fail. The installation fails because of this error: ``` Collecting seqeval Downloading seqeval-1.2.2.tar.gz (43 kB) |███████▌ | 10 kB 42.1 MB/s eta 0:00:01 |███████████████ | 20 kB 53.3 MB/s eta 0:00:01 |██████████████████████▌ | 30 kB 67.2 MB/s eta 0:00:01 |██████████████████████████████ | 40 kB 76.1 MB/s eta 0:00:01 |████████████████████████████████| 43 kB 10.0 MB/s Preparing metadata (setup.py) ... - error ERROR: Command errored out with exit status 1: command: /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pf54_vqy cwd: /tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/ Complete output (22 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py", line 56, in <module> 'Programming Language :: Python :: Implementation :: PyPy' File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/__init__.py", line 143, in setup return distutils.core.setup(**attrs) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/core.py", line 108, in setup _setup_distribution = dist = klass(attrs) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 442, in __init__ k: v for k, v in attrs.items() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/dist.py", line 281, in __init__ self.finalize_options() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 601, in finalize_options ep.load()(self, ep.name, value) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2346, in load return self.resolve() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2352, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/.eggs/setuptools_scm-7.0.2-py3.6.egg/setuptools_scm/__init__.py", line 5 from __future__ import annotations ^ SyntaxError: future feature annotations is not defined ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz#sha256=f28e97c3ab96d6fcd32b648f6438ff2e09cfba87f05939da9b3970713ec56e6f (from https://pypi.org/simple/seqeval/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ``` for example in https://app.circleci.com/pipelines/github/huggingface/datasets/12665/workflows/93878eb9-a923-4b35-b2e7-c5e9b22f10ad/jobs/75300 Here is a diff of the pip install logs until the error is reached: https://www.diffchecker.com/VkQDLeQT This could be caused by the latest updates of setuptools-scm
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4544/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4544/timeline
closed
false
4,544
null
2022-06-23T10:13:44Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
1,280,379,781
https://api.github.com/repos/huggingface/datasets/issues/4543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4543/events
[]
null
2022-06-22T16:37:40Z
[]
https://github.com/huggingface/datasets/pull/4543
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Remaining CI failures are unrelated to this fix, merging" ]
[CI] Fix upstream hub test url
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4543/reactions" }
PR_kwDODunzps46IiEp
{ "diff_url": "https://github.com/huggingface/datasets/pull/4543.diff", "html_url": "https://github.com/huggingface/datasets/pull/4543", "merged_at": "2022-06-22T16:27:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/4543.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4543" }
2022-06-22T15:34:27Z
https://api.github.com/repos/huggingface/datasets/issues/4543/comments
Some tests were still using moon-staging instead of hub-ci. I also updated the token to use one dedicated to `datasets`.
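As a rough illustration of the change described above (the constant names below are hypothetical; the real test-suite variables may differ), the fix amounts to swapping the staging endpoint for the hub-ci one and using a dedicated token:

```python
# Hypothetical sketch: point the integration tests at hub-ci instead of moon-staging.
CI_HUB_ENDPOINT = "https://hub-ci.huggingface.co"  # was: https://moon-staging.huggingface.co
CI_HUB_DATASETS_URL = CI_HUB_ENDPOINT + "/datasets/{repo_id}/resolve/{revision}/{path}"
CI_HUB_USER_TOKEN = "hf_..."  # token dedicated to `datasets`, kept out of source control
```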
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4543/timeline
closed
false
4,543
null
2022-06-22T16:27:37Z
null
true
1,280,269,445
https://api.github.com/repos/huggingface/datasets/issues/4542
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4542/events
[ { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
null
2022-10-11T08:45:45Z
[]
https://github.com/huggingface/datasets/issues/4542
MEMBER
null
null
null
[ "This has so much potential to be great! Also I think you tagged some poor random dude on the internet whose name is also Joao, lol, edited that for you! ", "cc @sayakpaul here too, since he was interested in our new approaches to converting datasets!", "Noted and I will look into the thread in detail tomorrow once I log back in. ", "@lhoestq I have used TFRecords with `tf.data` for both vision and text and I can say that they are quite performant. I haven't worked with Feather as much as I have with TFRecords. If you haven't started the benchmarking script yet, I can prepare a Colab notebook that loads Feather files, converts them into a `tf.data` pipeline, and does some basic preprocessing. \r\n\r\nBut in my limited understanding, Feather might be better suited for CSV files. Not yet sure if it's good for modalities like images. ", "> Not yet sure if it's good for modalities like images.\r\n\r\nWe store images pretty much the same way as tensorflow_datasets (i.e. storing the encoded image bytes, or a path to the local image, so that the image can be decoded on-the-fly), so as long as we use something similar as TFDS for image decoding it should be ok", "So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly? But it introduces an I/O redundancy of having to read the images every time.\r\n\r\nWith caching it could be somewhat mitigated but it's not a good solution for bigger image datasets. ", "> So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly?\r\n\r\nhopefully yes :) \r\n\r\nI double-checked the TFDS source code and they always save the bytes actually, not the path. Anyway we'll see if we run into issues or not (as a first step we can require the bytes to be in the feather file)", "Yes. For images, TFDS actually prepares TFRecords first for encoding and then reuses them for every subsequent call. ", "@lhoestq @Rocketknight1 I worked on [this PoC](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59) that\r\n\r\n* Creates Feather files from a medium resolution dataset (`tf_flowers`).\r\n* Explores different options with TensorFlow IO to load the Feather files. \r\n\r\nI haven't benchmarked those different options yet. There's also a gotcha that I have noted in the PoC. I hope it gets us started but I'm sorry if this is redundant. ", "Cool thanks ! If I understand correctly in your PoC you store the flattened array of pixels in the feather file. This will take a lot of disk space.\r\n\r\nMaybe we could just save the encoded bytes and let users apply a `map` to decode/transform them into the format they need for training ? Users can use tf.image to do so for example", "@lhoestq this is what I tried:\r\n\r\n```py\r\ndef read_image(path):\r\n with open(path, \"rb\") as f:\r\n return f.read()\r\n\r\n\r\ntotal_images_written = 0\r\n\r\nfor step in tqdm.tnrange(int(math.ceil(len(image_paths) / batch_size))):\r\n batch_image_paths = image_paths[step * batch_size : (step + 1) * batch_size]\r\n batch_image_labels = all_integer_labels[step * batch_size : (step + 1) * batch_size]\r\n\r\n data = [read_image(path) for path in batch_image_paths]\r\n table = pa.Table.from_arrays([data, batch_image_labels], [\"data\", \"labels\"])\r\n write_feather(table, f\"/tmp/flowers_feather_{step}.feather\", chunksize=chunk_size)\r\n total_images_written += len(batch_image_paths)\r\n print(f\"Total images written: {total_images_written}.\")\r\n\r\n del data\r\n```\r\n\r\nI got the feather files done (no resizing required as you can see):\r\n\r\n```sh\r\nls -lh /tmp/*.feather\r\n\r\n-rw-r--r-- 1 sayakpaul wheel 64M Jun 24 09:28 /tmp/flowers_feather_0.feather\r\n-rw-r--r-- 1 sayakpaul wheel 59M Jun 24 09:28 /tmp/flowers_feather_1.feather\r\n-rw-r--r-- 1 sayakpaul wheel 51M Jun 24 09:28 /tmp/flowers_feather_2.feather\r\n-rw-r--r-- 1 sayakpaul wheel 45M Jun 24 09:28 /tmp/flowers_feather_3.feather\r\n```\r\n\r\nNow there seems to be a problem with `tfio.arrow`:\r\n\r\n```py\r\nimport tensorflow_io.arrow as arrow_io\r\n\r\n\r\ndataset = arrow_io.ArrowFeatherDataset(\r\n [\"/tmp/flowers_feather_0.feather\"],\r\n columns=(0, 1),\r\n output_types=(tf.string, tf.int64),\r\n output_shapes=([], []),\r\n batch_mode=\"auto\",\r\n)\r\n\r\nprint(dataset.element_spec) \r\n```\r\n\r\nPrints:\r\n\r\n```\r\n(TensorSpec(shape=(None,), dtype=tf.string, name=None),\r\n TensorSpec(shape=(None,), dtype=tf.int64, name=None))\r\n```\r\n\r\nBut when I do `sample = next(iter(dataset))` it goes into:\r\n\r\n```py\r\nInternalError Traceback (most recent call last)\r\nInput In [30], in <cell line: 1>()\r\n----> 1 sample = next(iter(dataset))\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:766, in OwnedIterator.__next__(self)\r\n 764 def __next__(self):\r\n 765 try:\r\n--> 766 return self._next_internal()\r\n 767 except errors.OutOfRangeError:\r\n 768 raise StopIteration\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:749, in OwnedIterator._next_internal(self)\r\n 746 # TODO(b/77291417): This runs in sync mode as iterators use an error status\r\n 747 # to communicate that there is no more data to iterate over.\r\n 748 with context.execution_mode(context.SYNC):\r\n--> 749 ret = gen_dataset_ops.iterator_get_next(\r\n 750 self._iterator_resource,\r\n 751 output_types=self._flat_output_types,\r\n 752 output_shapes=self._flat_output_shapes)\r\n 754 try:\r\n 755 # Fast path for the case `self._structure` is not a nested structure.\r\n 756 return self._element_spec._from_compatible_tensor_list(ret) # pylint: disable=protected-access\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/ops/gen_dataset_ops.py:3017, in iterator_get_next(iterator, output_types, output_shapes, name)\r\n 3015 return _result\r\n 3016 except _core._NotOkStatusException as e:\r\n-> 3017 _ops.raise_from_not_ok_status(e, name)\r\n 3018 except _core._FallbackException:\r\n 3019 pass\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:7164, in raise_from_not_ok_status(e, name)\r\n 7162 def raise_from_not_ok_status(e, name):\r\n 7163 e.message += (\" name: \" + name if name is not None else \"\")\r\n-> 7164 raise core._status_to_exception(e) from None\r\n\r\nInternalError: Invalid: INVALID_ARGUMENT: arrow data type 0x7ff9899d8038 is not supported: Type error: Arrow data type is not supported [Op:IteratorGetNext]\r\n```\r\n\r\nSome additional notes:\r\n\r\n* I can actually decode an image encoded with `read_image()` (shown earlier):\r\n\r\n ```py\r\n sample_image_path = image_paths[0]\r\n encoded_image = read_image(sample_image_path)\r\n image = tf.image.decode_png(encoded_image, 3)\r\n print(image.shape)\r\n ```\r\n\r\n* If the above `tf.data.Dataset` object would have succeeded my plan was to just map the decoder like so:\r\n\r\n ```py\r\n autotune = tf.data.AUTOTUNE\r\n dataset = dataset.map(lambda x, y: (tf.image.decode_png(x, 3), y), num_parallel_calls=autotune)\r\n ```", "@lhoestq I think I was able to make it work in the way you were envisioning. Here's the PoC:\r\nhttps://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb\r\n\r\nSome details:\r\n\r\n* I am currently serializing the images as strings with `base64`. In comparison to the flattened arrays as before, the size of the individual feather files has reduced (144 MB -> 85 MB, largest).\r\n* When decoding, I am first decoding the base64 string and then decoding that string (with `tf.io.decode_base64`) as an image with `tf.image.decode_png()`. \r\n* The entire workflow (from generating the Feather files to loading them and preparing the batched `tf.data` pipeline) involves the following libraries: `pyarrow`, `tensorflow-io`, and `tensorflow`. \r\n\r\nCc: @Rocketknight1 @gante ", "Cool thanks ! Too bad the Arrow binary type doesn't seem to be supported in `arrow_io.ArrowFeatherDataset` :/ We would also need it to support Arrow struct type. Indeed images in `datasets` are represented using an Arrow type\r\n```python\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n```\r\nnot sure yet how hard it is to support this though.\r\n\r\nChanging the typing on our side would create concerning breaking changes, that's why it would be awesome if it could work using these types", "If the ArrowFeatherDataset doesn't yet support it, I guess our hands are a bit tied at the moment. \r\n\r\nIIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n\r\n```\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n``` \r\n\r\nIn that case, `pa.binary()` isn't yet supported.", "> IIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n> \r\n> pa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n\r\nYea because that's the data format we're using. If we were to use base64, then we would have to process the full dataset to convert it, which can take some time. Converting to TFRecords would be simpler than converting to base64 in Feather files.\r\n\r\nMaybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset. What do you think ? Any other alternative in mind ?", "> Maybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset.\r\n\r\nShould be possible as per the comment but there hasn't been any progress and it's been more than a year. \r\n\r\n> If we were to use base64, then we would have to process the full dataset to convert it, which can take some time.\r\n\r\nI don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from. \r\n\r\n> What do you think ? Any other alternative in mind ?\r\n\r\nTFRecords since the TensorFlow ecosystem has developed good support for it over the years. ", "> I don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from.\r\n\r\nUsers already have a copy of the dataset in Arrow format (we can change this to Feather). So to load the Arrow/feather files to a TF dataset we need TF IO or something like that. Otherwise the user has to convert all the files from Arrow to TFRecords to use TF data efficiently. But the conversion needs resources: CPU, disk, time. Converting the images to base64 requires the same sort of resources.\r\n\r\nSo the issue we're trying to tackle is how to load the Arrow data in TF without having to convert anything ^^", "Yeah, it looks like in its current state the tfio support for `Feather` is incomplete, so we'd end up having to write a lot of it, or do a conversion that defeats the whole point (because if we're going to convert the whole dataset we might as well convert to `TFRecord`).", "Understood @lhoestq. Thanks for explaining!\r\n\r\nAgreed with @Rocketknight1. ", "@lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?", "> @lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?\r\n\r\nHappy to take part there @Rocketknight1.", "If `to_tf_dataset` can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?", "@lhoestq why would one convert to TFRecords after unbatching? ", "> If to_tf_dataset can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?\r\n\r\nSort of! A `tf.data.Dataset` is more like an iterator, and does not support sample indexing. `to_tf_dataset()` creates an iterator, but to convert that to `TFRecord`, the user would have to iterate over the whole thing and manually save the stream of samples to files. ", "Someone would like to try to dive into tfio to fix this ? Sounds like a good opportunity to learn what are the best ways to load a dataset for TF, and also the connections between Arrow and TF.\r\n\r\nIf we can at least have the Arrow `binary` type working for TF that would be awesome already (issue https://github.com/tensorflow/io/issues/1361)\r\n\r\nalso cc @nateraw in case you'd be interested ;)", "> Sounds like a good opportunity to learn what are the best ways to load a dataset for TF\r\n\r\nThe recommended way would likely be a combination of TFRecords and `tf.data`. \r\n\r\nExploring the connection between Arrow and TensorFlow is definitely worth pursuing though. But I am not sure about the implications of storing images in a format supported by Arrow. I guess we'll know more once we have at least figured out the support for `binary` type for TFIO. I will spend some time on it and keep this thread updated. ", "I am currently working on a fine-tuning notebook for the TFSegFormer model (Semantic Segmentation). The resolution is high for both the input images and the labels - (512, 512, 3). Here's the [Colab Notebook](https://colab.research.google.com/drive/1jAtR7Z0lYX6m6JsDI5VByh5vFaNhHIbP?usp=sharing) (it's a WIP so please bear that in mind).\r\n\r\nI think the current implementation of `to_tf_dataset()` does create a bottleneck here since the GPU utilization is quite low. ", "Here's a notebook showing the performance difference: https://colab.research.google.com/gist/sayakpaul/d7ca67c90beb47e354942c9d8c0bd8ef/scratchpad.ipynb. \r\n\r\nNote that I acknowledge that it's not an apples-to-apples comparison in many aspects (the dataset isn't the same, data serialization format isn't the same, etc.) but this is the best I could do. ", "Thanks ! I think the speed difference can be partly explained: you use ds.shuffle in your dataset, which is an exact shuffling (compared to TFDS which does buffer shuffling): it slows down query time by 2x to 10x since it has to play with data that are not contiguous.\r\n\r\nThe rest of the speed difference seems to be caused by image decoding (from 330µs/image to 30ms/image)", "Fair enough. Can do one without shuffling too. But it's an important one to consider I guess. " ]
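To make the "iterate and manually save" step from the last comments concrete, here is a minimal sketch of converting an unbatched `tf.data.Dataset` of (image bytes, label) pairs into a TFRecord file; the element layout and feature names are assumptions for illustration, not the thread's actual code:

```python
import tensorflow as tf

def dataset_to_tfrecord(dataset: tf.data.Dataset, path: str) -> None:
    # Assumes each element is (image_bytes: scalar tf.string, label: scalar tf.int64).
    with tf.io.TFRecordWriter(path) as writer:
        for image_bytes, label in dataset:
            example = tf.train.Example(
                features=tf.train.Features(
                    feature={
                        "image": tf.train.Feature(
                            bytes_list=tf.train.BytesList(value=[image_bytes.numpy()])
                        ),
                        "label": tf.train.Feature(
                            int64_list=tf.train.Int64List(value=[int(label.numpy())])
                        ),
                    }
                )
            )
            writer.write(example.SerializeToString())
```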
[to_tf_dataset] Use Feather for better compatibility with TensorFlow ?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4542/reactions" }
I_kwDODunzps5MT1yF
null
2022-06-22T14:42:00Z
https://api.github.com/repos/huggingface/datasets/issues/4542/comments
To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory. It seems that using `tensorflow_io` we could have something similar for `to_tf_dataset` if we provide sharded Feather files: https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowFeatherDataset Feather is a format almost equivalent to the Arrow IPC Stream format we're using in `datasets`: Feather V2 is equivalent to Arrow IPC File format, which is an extension of the stream format (it has an extra footer). Therefore we could store datasets as Feather instead of Arrow IPC Stream format without breaking the whole library. Here are a few points to explore - [ ] check the performance of ArrowFeatherDataset in tf.data - [ ] check what would change if we were to switch to Feather if needed, in particular check that those are fine: memory mapping, typing, writing, reading to python objects, etc. We would also need to implement sharding when loading a dataset (this will be done anyway for #546) cc @Rocketknight1 @gante feel free to comment in case I missed anything ! I'll share some files and scripts, so that we can benchmark performance of Feather files with tf.data
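For the first point (checking `ArrowFeatherDataset` performance in tf.data), a minimal timing sketch could look like the following; the shard file names and two-column layout mirror the PoC discussed in the comments and are assumptions, not a fixed benchmark. Note that, as found in the comments, the Arrow `binary` type currently fails inside `ArrowFeatherDataset`, so this only runs for column types tfio already supports:

```python
import time

import tensorflow as tf
import tensorflow_io.arrow as arrow_io

# Sharded Feather files with a string/binary "data" column and an int64 "labels" column.
dataset = arrow_io.ArrowFeatherDataset(
    ["flowers_feather_0.feather", "flowers_feather_1.feather"],
    columns=(0, 1),
    output_types=(tf.string, tf.int64),
    output_shapes=([], []),
    batch_mode="auto",
)

start = time.perf_counter()
n_batches = sum(1 for _ in dataset)
print(f"Iterated {n_batches} batches in {time.perf_counter() - start:.2f}s")
```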
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4542/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4542/timeline
open
false
4,542
null
null
null
false
1,280,161,436
https://api.github.com/repos/huggingface/datasets/issues/4541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4541/events
[]
null
2022-06-22T16:39:27Z
[]
https://github.com/huggingface/datasets/pull/4541
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI failures are unrelated to this PR, merging" ]
Fix timestamp conversion from Pandas to Python datetime in streaming mode
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4541/reactions" }
PR_kwDODunzps46HyPK
{ "diff_url": "https://github.com/huggingface/datasets/pull/4541.diff", "html_url": "https://github.com/huggingface/datasets/pull/4541", "merged_at": "2022-06-22T16:29:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/4541.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4541" }
2022-06-22T13:40:01Z
https://api.github.com/repos/huggingface/datasets/issues/4541/comments
Arrow accepts both pd.Timestamp and datetime.datetime objects to create timestamp arrays. However, a timestamp array is always converted to datetime.datetime objects. This created an inconsistency between streaming and non-streaming: e.g. the `ett` dataset outputs datetime.datetime objects in non-streaming but pd.Timestamp in streaming. I fixed this by always converting pd.Timestamp to datetime.datetime during the example encoding step. I fixed the same issue for pd.Timedelta as well. Finally, I added an extra conversion step in case such data are passed as Series or DataFrame. Fix https://github.com/huggingface/datasets/issues/4533 Related to https://github.com/huggingface/datasets-server/issues/397
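A minimal sketch of the normalization described above, assuming it happens during example encoding (the helper name is illustrative, not the actual function in `datasets`):

```python
import datetime

import pandas as pd

def to_python_temporal(value):
    # pd.Timestamp -> datetime.datetime and pd.Timedelta -> datetime.timedelta,
    # so streaming yields the same Python objects as non-streaming.
    if isinstance(value, pd.Timestamp):
        return value.to_pydatetime()
    if isinstance(value, pd.Timedelta):
        return value.to_pytimedelta()
    return value

assert type(to_python_temporal(pd.Timestamp("2016-07-01"))) is datetime.datetime
assert type(to_python_temporal(pd.Timedelta("1h"))) is datetime.timedelta
```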
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4541/timeline
closed
false
4,541
null
2022-06-22T16:29:09Z
null
true
1,280,142,942
https://api.github.com/repos/huggingface/datasets/issues/4540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4540/events
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
null
2022-07-07T13:17:44Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VijayKalmath", "id": 20517962, "login": "VijayKalmath", "node_id": "MDQ6VXNlcjIwNTE3OTYy", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "type": "User", "url": "https://api.github.com/users/VijayKalmath" } ]
https://github.com/huggingface/datasets/issues/4540
NONE
completed
null
null
[ "Hi @espoirMur, thanks for reporting.\r\n\r\nYou are right: that code line could be improved and made more generically valid.\r\n\r\nOn the other hand, I would suggest using `os.path.splitext` instead.\r\n\r\nAre you willing to open a PR? :)", "I will have a look.. \r\n\r\nThis weekend .. ", "@albertvillanova , Can you have a look at #4590. \r\n\r\nThanks ", "#self-assign" ]
Avoid splitting by `.py` for the file.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4540/reactions" }
I_kwDODunzps5MTW5e
null
2022-06-22T13:26:55Z
https://api.github.com/repos/huggingface/datasets/issues/4540/comments
https://github.com/huggingface/datasets/blob/90b3a98065556fc66380cafd780af9b1814b9426/src/datasets/load.py#L272 Hello, thank you for this library. I was using it and hit an edge case: my home folder name ends with `.py` (it is `/home/espoir.py`), so anytime I run the code to load a local module, the code linked above fails because, after splitting, it tries to save the code to my home directory. Steps to reproduce: - Have a home folder whose name ends with `.py` - Load a module from a local folder: `qa_dataset = load_dataset("src/data/build_qa_dataset.py")` fails. A possible workaround would be to use pathlib at the mentioned line: `meta_path = Path(importable_local_file).parent.joinpath("metadata.json")`. This would alleviate the issue. Let me know what your thoughts are on this, and I can try to fix it with a PR.
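A small illustration of the edge case and the two safer alternatives suggested here and in the comments (the exact expression at the linked line may differ; the paths are illustrative):

```python
import os
from pathlib import Path

importable_local_file = "/home/espoir.py/src/data/build_qa_dataset.py"

# Splitting on ".py" also matches the home directory name:
print(importable_local_file.split(".py")[0])
# -> /home/espoir  (metadata would land in the home directory)

# os.path.splitext only strips the real extension:
print(os.path.splitext(importable_local_file)[0])
# -> /home/espoir.py/src/data/build_qa_dataset

# pathlib workaround from the issue:
print(Path(importable_local_file).parent / "metadata.json")
# -> /home/espoir.py/src/data/metadata.json
```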
{ "avatar_url": "https://avatars.githubusercontent.com/u/18573157?v=4", "events_url": "https://api.github.com/users/espoirMur/events{/privacy}", "followers_url": "https://api.github.com/users/espoirMur/followers", "following_url": "https://api.github.com/users/espoirMur/following{/other_user}", "gists_url": "https://api.github.com/users/espoirMur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/espoirMur", "id": 18573157, "login": "espoirMur", "node_id": "MDQ6VXNlcjE4NTczMTU3", "organizations_url": "https://api.github.com/users/espoirMur/orgs", "received_events_url": "https://api.github.com/users/espoirMur/received_events", "repos_url": "https://api.github.com/users/espoirMur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/espoirMur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/espoirMur/subscriptions", "type": "User", "url": "https://api.github.com/users/espoirMur" }
https://api.github.com/repos/huggingface/datasets/issues/4540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4540/timeline
closed
false
4,540
null
2022-07-07T13:17:44Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VijayKalmath", "id": 20517962, "login": "VijayKalmath", "node_id": "MDQ6VXNlcjIwNTE3OTYy", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "type": "User", "url": "https://api.github.com/users/VijayKalmath" }
false
1,279,779,829
https://api.github.com/repos/huggingface/datasets/issues/4539
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4539/events
[]
null
2022-06-22T13:43:23Z
[]
https://github.com/huggingface/datasets/pull/4539
CONTRIBUTOR
null
false
null
[]
Replace deprecated logging.warn with logging.warning
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4539/reactions" }
PR_kwDODunzps46GfWv
{ "diff_url": "https://github.com/huggingface/datasets/pull/4539.diff", "html_url": "https://github.com/huggingface/datasets/pull/4539", "merged_at": "2022-06-22T12:51:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/4539.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4539" }
2022-06-22T08:32:29Z
https://api.github.com/repos/huggingface/datasets/issues/4539/comments
Replace `logging.warn` (deprecated in [Python 2.7, 2011](https://github.com/python/cpython/commit/04d5bc00a219860c69ea17eaa633d3ab9917409f)) with `logging.warning` (added in [Python 2.3, 2003](https://github.com/python/cpython/commit/6fa635df7aa88ae9fd8b41ae42743341316c90f7)). * https://docs.python.org/3/library/logging.html#logging.Logger.warning * https://github.com/python/cpython/issues/57444
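For illustration, the change is a mechanical rename; both calls log at WARNING level, but `warn` is a deprecated alias that can emit a `DeprecationWarning` on recent interpreters:

```python
import logging

logger = logging.getLogger(__name__)

logger.warning("use this spelling")  # supported since Python 2.3
# logger.warn("avoid this alias")    # deprecated alias since Python 2.7
```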
{ "avatar_url": "https://avatars.githubusercontent.com/u/1324225?v=4", "events_url": "https://api.github.com/users/hugovk/events{/privacy}", "followers_url": "https://api.github.com/users/hugovk/followers", "following_url": "https://api.github.com/users/hugovk/following{/other_user}", "gists_url": "https://api.github.com/users/hugovk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hugovk", "id": 1324225, "login": "hugovk", "node_id": "MDQ6VXNlcjEzMjQyMjU=", "organizations_url": "https://api.github.com/users/hugovk/orgs", "received_events_url": "https://api.github.com/users/hugovk/received_events", "repos_url": "https://api.github.com/users/hugovk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hugovk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hugovk/subscriptions", "type": "User", "url": "https://api.github.com/users/hugovk" }
https://api.github.com/repos/huggingface/datasets/issues/4539/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4539/timeline
closed
false
4,539
null
2022-06-22T12:51:51Z
null
true
1,279,409,786
https://api.github.com/repos/huggingface/datasets/issues/4538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4538/events
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
null
2022-06-27T07:30:23Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
https://github.com/huggingface/datasets/issues/4538
NONE
completed
null
null
[ "Hi @Breakend, yes – we'll propose a solution today", "Thanks so much, I appreciate it!", "Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks again!", "Awesome! Thanks for confirming. cc @severo ", "Just for the record:\r\n\r\n- the doc\r\n \r\n<img width=\"1430\" alt=\"Capture d’écran 2022-06-27 à 09 29 27\" src=\"https://user-images.githubusercontent.com/1676121/175884089-bca6c0d5-6387-473e-98ca-86a910ede4bd.png\">\r\n\r\n- the dataset main page\r\n\r\n<img width=\"1134\" alt=\"Capture d’écran 2022-06-27 à 09 29 05\" src=\"https://user-images.githubusercontent.com/1676121/175884152-5f285bf0-3471-45de-927a-e141b00ebb33.png\">\r\n\r\n- the dataset viewer page\r\n\r\n<img width=\"567\" alt=\"Capture d’écran 2022-06-27 à 09 29 16\" src=\"https://user-images.githubusercontent.com/1676121/175884191-ab6a297b-1c11-417e-bbde-0b7623278a79.png\">\r\n" ]
Dataset Viewer issue for Pile of Law
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/4538/reactions" }
I_kwDODunzps5MQj56
null
2022-06-22T02:48:40Z
https://api.github.com/repos/huggingface/datasets/issues/4538/comments
### Link https://huggingface.co/datasets/pile-of-law/pile-of-law ### Description Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines and so would like to turn off dataset previews. But we do not want to collect user emails because it would violate single blind review, allowing us to deduce potential reviewers' identities. Is there a way that we can turn off the dataset viewer without collecting identity information? Thanks so much! ### Owner Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4", "events_url": "https://api.github.com/users/Breakend/events{/privacy}", "followers_url": "https://api.github.com/users/Breakend/followers", "following_url": "https://api.github.com/users/Breakend/following{/other_user}", "gists_url": "https://api.github.com/users/Breakend/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Breakend", "id": 1609857, "login": "Breakend", "node_id": "MDQ6VXNlcjE2MDk4NTc=", "organizations_url": "https://api.github.com/users/Breakend/orgs", "received_events_url": "https://api.github.com/users/Breakend/received_events", "repos_url": "https://api.github.com/users/Breakend/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Breakend/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Breakend/subscriptions", "type": "User", "url": "https://api.github.com/users/Breakend" }
https://api.github.com/repos/huggingface/datasets/issues/4538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4538/timeline
closed
false
4,538
null
2022-06-26T22:26:22Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
false
1,279,144,310
https://api.github.com/repos/huggingface/datasets/issues/4537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4537/events
[]
null
2022-06-24T07:05:43Z
[]
https://github.com/huggingface/datasets/pull/4537
CONTRIBUTOR
null
false
null
[ "The PR branch now has some commits unrelated to the changes, probably due to rebasing. Can you please close this PR and open a new one from a new branch? You can use `git cherry-pick` to preserve the relevant changes:\r\n```bash\r\ngit checkout master\r\ngit remote add upstream git@github.com:huggingface/datasets.git\r\ngit pull --ff-only upstream master\r\ngit checkout -b wmt-datasets-fix2\r\ngit cherry-pick f2d6c995d5153131168f64fc60fe33a7813739a4 a9fdead5f435aeb88c237600be28eb8d4fde4c55\r\n```", "Closing this PR due to unwanted commit changes. Will be opening new PR for the same issue." ]
Fix WMT dataset loading issue and docs update
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4537/reactions" }
PR_kwDODunzps46ESJn
{ "diff_url": "https://github.com/huggingface/datasets/pull/4537.diff", "html_url": "https://github.com/huggingface/datasets/pull/4537", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4537.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4537" }
2022-06-21T21:48:02Z
https://api.github.com/repos/huggingface/datasets/issues/4537/comments
This PR is a fix for #4354. Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`, and READMEs are updated for the corresponding datasets. As I am on an M1 Mac, I am not able to create a virtual `dev` environment using `pip install -e ".[dev]"`. The issue is that `tensorflow-text` is not supported on M1s and there is no supporting repo by Apple or Google, so I was not able to perform the required local testing. Let me know if any additional changes are required. Thanks
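For reference, a typical call path exercised by these changes might look like the following; `ro-en` is one of the standard WMT16 language pairs, and whether it was affected by the original issue is an assumption on my part:

```python
from datasets import load_dataset

# Load one of the patched WMT builders; streaming avoids the full download.
ds = load_dataset("wmt16", "ro-en", split="train", streaming=True)
print(next(iter(ds)))
```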
{ "avatar_url": "https://avatars.githubusercontent.com/u/8711912?v=4", "events_url": "https://api.github.com/users/khushmeeet/events{/privacy}", "followers_url": "https://api.github.com/users/khushmeeet/followers", "following_url": "https://api.github.com/users/khushmeeet/following{/other_user}", "gists_url": "https://api.github.com/users/khushmeeet/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/khushmeeet", "id": 8711912, "login": "khushmeeet", "node_id": "MDQ6VXNlcjg3MTE5MTI=", "organizations_url": "https://api.github.com/users/khushmeeet/orgs", "received_events_url": "https://api.github.com/users/khushmeeet/received_events", "repos_url": "https://api.github.com/users/khushmeeet/repos", "site_admin": false, "starred_url": "https://api.github.com/users/khushmeeet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khushmeeet/subscriptions", "type": "User", "url": "https://api.github.com/users/khushmeeet" }
https://api.github.com/repos/huggingface/datasets/issues/4537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4537/timeline
closed
false
4,537
null
2022-06-24T07:05:10Z
null
true
1,278,734,727
https://api.github.com/repos/huggingface/datasets/issues/4536
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4536/events
[]
null
2022-06-28T10:46:51Z
[]
https://github.com/huggingface/datasets/pull/4536
MEMBER
null
false
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
Properly raise FileNotFound even if the dataset is private
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4536/reactions" }
PR_kwDODunzps46C2z6
{ "diff_url": "https://github.com/huggingface/datasets/pull/4536.diff", "html_url": "https://github.com/huggingface/datasets/pull/4536", "merged_at": "2022-06-28T10:36:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/4536.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4536" }
2022-06-21T17:05:50Z
https://api.github.com/repos/huggingface/datasets/issues/4536/comments
`tests/test_load.py::test_load_streaming_private_dataset` was failing because the hub now returns 401 when getting the HfApi.dataset_info of a dataset without authentication. `load_dataset` was raising ConnectionError, while it should be FileNotFoundError since it first checks for local files before checking the Hub. Moreover, when use_auth_token is not set (default is False), we should not pass `token=None` to HfApi.dataset_info, or it will use the local token by default; instead it should use no token. It's currently not possible to ask for no token to be used, so as a workaround I simply set token="no-token".
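A simplified sketch of the behavior described above (the function and error-handling details are illustrative assumptions, not the actual `datasets` code):

```python
import requests
from huggingface_hub import HfApi

def resolve_remote_dataset(repo_id: str, use_auth_token=False):
    if use_auth_token is False:
        token = "no-token"  # workaround: avoid falling back to the locally saved token
    elif isinstance(use_auth_token, str):
        token = use_auth_token
    else:
        token = None  # True/None: let HfApi pick up the locally saved token
    try:
        return HfApi().dataset_info(repo_id, token=token)
    except requests.exceptions.HTTPError as err:
        if err.response is not None and err.response.status_code == 401:
            # Local files were already checked first, so surface a clear error.
            raise FileNotFoundError(f"Dataset '{repo_id}' doesn't exist on the Hub") from err
        raise
```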
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/4536/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4536/timeline
closed
false
4,536
null
2022-06-28T10:36:10Z
null
true
1,278,365,039
https://api.github.com/repos/huggingface/datasets/issues/4535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4535/events
[]
null
2022-06-27T16:25:09Z
[]
https://github.com/huggingface/datasets/pull/4535
MEMBER
null
false
null
[ "Also, I had a doubt while checking the code related to the indices... \r\n\r\n@lhoestq, there's a value in `config.py` named `DATASET_INDICES_FILENAME` which has the arrow extension (which I assume it should be `indices.faiss`, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an `ArrowDataset` in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/config.py#L183\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/arrow_dataset.py#L1079-L1092\r\n\r\nSo should I also remove that?\r\n\r\nP.S. I also edited the following code comment which I found misleading as it's not actually storing the indices.\r\n\r\nhttps://github.com/huggingface/datasets/blob/8ddc4bbeb1e2bd307b21f5d21f884649aa2bf640/src/datasets/arrow_dataset.py#L1122", "_The documentation is not available anymore as the PR was closed or merged._", "> @lhoestq, there's a value in config.py named DATASET_INDICES_FILENAME which has the arrow extension (which I assume it should be indices.faiss, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an ArrowDataset in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nThe arrow file is used to store an indices mapping (when you shuffle the dataset for example) - not for a faiss index ;)", "Ok cool thanks a lot for the explanation @lhoestq I was not sure about that :+1: I'll also add it there as you suggested!", "CI failures are unrelated to this PR and fixed on master, merging" ]
Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays`
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4535/reactions" }
PR_kwDODunzps46BnXq
{ "diff_url": "https://github.com/huggingface/datasets/pull/4535.diff", "html_url": "https://github.com/huggingface/datasets/pull/4535", "merged_at": "2022-06-27T16:14:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/4535.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4535" }
2022-06-21T12:18:49Z
https://api.github.com/repos/huggingface/datasets/issues/4535/comments
Currently, even though the `batch_size` used when adding vectors to the FAISS index can be tweaked in `FaissIndex.add_vectors()`, the function `ArrowDataset.add_faiss_index` has neither a `batch_size` parameter to propagate to the nested `FaissIndex.add_vectors` call nor `*args, **kwargs`. This PR adds the `batch_size` parameter to both `ArrowDataset.add_faiss_index` and `ArrowDataset.add_faiss_index_from_external_arrays`. This is useful for tweaking the `batch_size` according to the VM specifications.
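A hypothetical usage example once the parameter is exposed; the dataset, column name, and the `embed` helper below are illustrative assumptions:

```python
import numpy as np
from datasets import load_dataset

def embed(text: str) -> np.ndarray:
    # Stand-in for a real sentence encoder; returns a fixed-size float32 vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(128, dtype=np.float32)

ds = load_dataset("crime_and_punish", split="train[:1000]")
ds = ds.map(lambda ex: {"embeddings": embed(ex["line"])})

# New in this PR: vectors are pushed to the FAISS index in chunks of `batch_size`.
ds.add_faiss_index(column="embeddings", batch_size=512)
scores, examples = ds.get_nearest_examples("embeddings", embed("a query"), k=5)
```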
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
https://api.github.com/repos/huggingface/datasets/issues/4535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4535/timeline
closed
false
4,535
null
2022-06-27T16:14:36Z
null
true
1,277,897,197
https://api.github.com/repos/huggingface/datasets/issues/4534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4534/events
[]
null
2022-06-23T14:33:54Z
[]
https://github.com/huggingface/datasets/pull/4534
NONE
null
false
null
[ "Hey @lhoestq, \r\nSorry for opening a PR, I was following the guide [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)! Thanks for the review anyway, I will follow the instructions you sent 😃 ", "Thanks, we will update the guide ;)" ]
Add `tldr_news` dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4534/reactions" }
PR_kwDODunzps46AFK_
{ "diff_url": "https://github.com/huggingface/datasets/pull/4534.diff", "html_url": "https://github.com/huggingface/datasets/pull/4534", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4534.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4534" }
2022-06-21T05:02:43Z
https://api.github.com/repos/huggingface/datasets/issues/4534/comments
This PR aims at adding support for a news dataset: `tldr news`. This dataset is based on the daily [tldr tech newsletter](https://tldr.tech/newsletter) and contains a `headline` as well as a `content` for every piece of news contained in a newsletter.
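Assuming the dataset ends up published on the Hub under the author's namespace (as the review comments suggest moving it there), loading it would look roughly like this; the Hub id and split name are assumptions based on the author's handle and the dataset name:

```python
from datasets import load_dataset

ds = load_dataset("JulesBelveze/tldr_news", split="train")
print(ds[0]["headline"])
print(ds[0]["content"][:100])
```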
{ "avatar_url": "https://avatars.githubusercontent.com/u/32683010?v=4", "events_url": "https://api.github.com/users/JulesBelveze/events{/privacy}", "followers_url": "https://api.github.com/users/JulesBelveze/followers", "following_url": "https://api.github.com/users/JulesBelveze/following{/other_user}", "gists_url": "https://api.github.com/users/JulesBelveze/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JulesBelveze", "id": 32683010, "login": "JulesBelveze", "node_id": "MDQ6VXNlcjMyNjgzMDEw", "organizations_url": "https://api.github.com/users/JulesBelveze/orgs", "received_events_url": "https://api.github.com/users/JulesBelveze/received_events", "repos_url": "https://api.github.com/users/JulesBelveze/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JulesBelveze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesBelveze/subscriptions", "type": "User", "url": "https://api.github.com/users/JulesBelveze" }
https://api.github.com/repos/huggingface/datasets/issues/4534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4534/timeline
closed
false
4,534
null
2022-06-21T14:21:11Z
null
true