The dataset has the following columns; each preview record further below lists its values in this column order, and every record is one issue or pull request from the huggingface/datasets repository.

| Column | Type | Values / lengths |
| --- | --- | --- |
| state | string (categorical) | 2 values |
| created_at | string | length 20 |
| active_lock_reason | null | - |
| url | string | length 61 |
| assignee | dict | - |
| reactions | dict | - |
| draft | bool | 2 classes |
| labels_url | string | length 75 |
| user | dict | - |
| html_url | string | length 49-51 |
| assignees | list | - |
| locked | bool | 1 class |
| updated_at | string | length 20 |
| closed_at | string | length 20 |
| milestone | dict | - |
| comments | sequence | - |
| state_reason | string (categorical) | 3 values |
| labels | list | - |
| title | string | length 1-290 |
| author_association | string (categorical) | 3 values |
| timeline_url | string | length 70 |
| body | string | length 0-228k |
| repository_url | string (categorical) | 1 value |
| pull_request | dict | - |
| id | int64 | 773M-2.11B |
| comments_url | string | length 70 |
| node_id | string | length 18-32 |
| performed_via_github_app | null | - |
| number | int64 | 1.62k-6.64k |
| events_url | string | length 68 |
| is_pull_request | bool | 2 classes |
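Data in this shape is easier to explore programmatically than through the flattened preview below. The following is a minimal sketch using the `datasets` library; the dataset id `your-namespace/github-issues` and the `train` split name are placeholder assumptions, not a confirmed location of this dump.

```python
from datasets import load_dataset

# Hypothetical dataset id: replace it with wherever a dump with this schema
# is actually hosted on the Hugging Face Hub.
ds = load_dataset("your-namespace/github-issues", split="train")

# The features mirror the columns in the table above.
print(ds.features["state"])
print(ds.features["is_pull_request"])

# Split the records into plain issues and pull requests.
issues = ds.filter(lambda example: not example["is_pull_request"])
pulls = ds.filter(lambda example: example["is_pull_request"])
print(len(issues), "issues /", len(pulls), "pull requests")

# Look at one issue: its title and how many comments it gathered.
first = issues[0]
print(first["title"])
print(len(first["comments"]), "comments")
```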

Record 1: issue #1924
closed
2021-02-22T15:22:30Z
null
https://api.github.com/repos/huggingface/datasets/issues/1924
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1924/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1924/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4", "events_url": "https://api.github.com/users/PierreColombo/events{/privacy}", "followers_url": "https://api.github.com/users/PierreColombo/followers", "following_url": "https://api.github.com/users/PierreColombo/following{/other_user}", "gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PierreColombo", "id": 22492839, "login": "PierreColombo", "node_id": "MDQ6VXNlcjIyNDkyODM5", "organizations_url": "https://api.github.com/users/PierreColombo/orgs", "received_events_url": "https://api.github.com/users/PierreColombo/received_events", "repos_url": "https://api.github.com/users/PierreColombo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions", "type": "User", "url": "https://api.github.com/users/PierreColombo" }
https://github.com/huggingface/datasets/issues/1924
[]
false
2022-10-05T13:07:11Z
2022-10-05T13:07:11Z
null
[ "Hi !\r\nI guess you can add a dataset without the fields that must be kept anonymous, and then update those when the anonymity period is over.\r\nYou can also make the PR from an anonymous org.\r\nPinging @yjernite just to make sure it's ok", "Hello,\r\nI would prefer to do the reverse: adding a link to an anonymous paper without the people names/institution in the PR. Would it be conceivable ?\r\nCheers\r\n", "Sure, I think it's ok on our side", "Yup, sounds good!" ]
completed
[]
Anonymous Dataset Addition (i.e Anonymous PR?)
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1924/timeline
Hello, Thanks a lot for your librairy. We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonimity, with a link to the paper ? Cheers @eusip
https://api.github.com/repos/huggingface/datasets
null
813,599,733
https://api.github.com/repos/huggingface/datasets/issues/1924/comments
MDU6SXNzdWU4MTM1OTk3MzM=
null
1,924
https://api.github.com/repos/huggingface/datasets/issues/1924/events
false

Record 2: pull request #1923
closed
2021-02-22T10:27:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/1923
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1923/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1923/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1923
[]
false
2021-02-22T11:22:44Z
2021-02-22T11:22:43Z
null
[]
null
[]
Fix save_to_disk with relative path
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1923/timeline
As noticed in #1919 and #1920 the target directory was not created using `makedirs` so saving to it raises `FileNotFoundError`. For absolute paths it works but not for the good reason. This is because the target path was the same as the temporary path where in-memory data are written as an intermediary step. I added the `makedirs` call using `fs.makedirs` in order to support remote filesystems. I also fixed the issue with the target path being the temporary path. I added a test case for relative paths as well for save_to_disk. Thanks to @M-Salti for reporting and investigating
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1923.diff", "html_url": "https://github.com/huggingface/datasets/pull/1923", "merged_at": "2021-02-22T11:22:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/1923.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1923" }
813,363,472
https://api.github.com/repos/huggingface/datasets/issues/1923/comments
MDExOlB1bGxSZXF1ZXN0NTc3NTI0MTU0
null
1,923
https://api.github.com/repos/huggingface/datasets/issues/1923/events
true

Record 3: issue #1922
open
2021-02-22T05:39:39Z
null
https://api.github.com/repos/huggingface/datasets/issues/1922
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1922/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1922/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4", "events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}", "followers_url": "https://api.github.com/users/JieyuZhao/followers", "following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}", "gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JieyuZhao", "id": 22306304, "login": "JieyuZhao", "node_id": "MDQ6VXNlcjIyMzA2MzA0", "organizations_url": "https://api.github.com/users/JieyuZhao/orgs", "received_events_url": "https://api.github.com/users/JieyuZhao/received_events", "repos_url": "https://api.github.com/users/JieyuZhao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions", "type": "User", "url": "https://api.github.com/users/JieyuZhao" }
https://github.com/huggingface/datasets/issues/1922
[]
false
2021-02-22T10:35:59Z
null
null
[ "Hi @JieyuZhao !\r\n\r\nYou can edit the dataset card of wino_bias to update the URL via a Pull Request. This would be really appreciated :)\r\n\r\nThe dataset card is the README.md file you can find at https://github.com/huggingface/datasets/tree/master/datasets/wino_bias\r\nAlso the homepage url is also mentioned in the wino_bias.py so feel free to update it there as well.\r\n\r\nYou can create a Pull Request directly from the github interface by editing the files you want and submit a PR, or from a local clone of the repository.\r\n\r\nThanks for noticing !" ]
null
[]
How to update the "wino_bias" dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1922/timeline
Hi all, Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that? Thanks!
https://api.github.com/repos/huggingface/datasets
null
813,140,806
https://api.github.com/repos/huggingface/datasets/issues/1922/comments
MDU6SXNzdWU4MTMxNDA4MDY=
null
1,922
https://api.github.com/repos/huggingface/datasets/issues/1922/events
false

Record 4: pull request #1921
closed
2021-02-20T22:04:01Z
null
https://api.github.com/repos/huggingface/datasets/issues/1921
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1921/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1921/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/justin-yan", "id": 7731709, "login": "justin-yan", "node_id": "MDQ6VXNlcjc3MzE3MDk=", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "repos_url": "https://api.github.com/users/justin-yan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "type": "User", "url": "https://api.github.com/users/justin-yan" }
https://github.com/huggingface/datasets/pull/1921
[]
false
2021-02-22T09:44:10Z
2021-02-22T09:44:10Z
null
[ "@lhoestq - apologies for the multiple PRs, my previous one (#1905) got mangled due to some merge conflicts that I had trouble resolving so I just cherry-picked my changes onto a fresh branch here." ]
null
[]
Standardizing datasets dtypes
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1921/timeline
This PR follows up on discussion in #1900 to have an explicit set of basic dtypes for datasets. This moves away from str(pyarrow.DataType) as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes. I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here, with `float32` and `float64` acting as the official datasets dtypes, which resolves the tension between `double` being the pyarrow dtype and `float64` being the pyarrow type factory function.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1921.diff", "html_url": "https://github.com/huggingface/datasets/pull/1921", "merged_at": "2021-02-22T09:44:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/1921.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1921" }
812,716,042
https://api.github.com/repos/huggingface/datasets/issues/1921/comments
MDExOlB1bGxSZXF1ZXN0NTc3MDEzMDM4
null
1,921
https://api.github.com/repos/huggingface/datasets/issues/1921/events
true

Record 5: pull request #1920
closed
2021-02-20T14:22:39Z
null
https://api.github.com/repos/huggingface/datasets/issues/1920
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1920/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1920/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/M-Salti", "id": 9285264, "login": "M-Salti", "node_id": "MDQ6VXNlcjkyODUyNjQ=", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "repos_url": "https://api.github.com/users/M-Salti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "type": "User", "url": "https://api.github.com/users/M-Salti" }
https://github.com/huggingface/datasets/pull/1920
[]
false
2021-02-22T10:30:11Z
2021-02-22T10:30:11Z
null
[ "So I was curious why the issue reported at #1919 wasn't caught in [this test](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/tests/test_arrow_dataset.py#L209), so I did some digging.\r\nI tried to save to a temporary directory (just like in the test), like this:\r\n```python\r\nwith tempfile.TemporaryDirectory() as requested_tempdir:\r\n squad.save_to_disk(requested_tempdir) # no error\r\n```\r\nand it executes succesfuly without problems.\r\nSo why does it work, but this doesn't?\r\n```python\r\nsquad.save_to_disk(\"./squad\") # error\r\n```\r\nIt's because `save_to_disk` also creates a temporary directory (let's call it `tempdir`), and since `tempdir` and `requested_tempdir` share the same parents, the `Path.joinpath` method [(here)](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/src/datasets/arrow_dataset.py#L469) will keep `requested_tempdir` as it is and the *train* directory will be created under `requested_tempdir` and hence no errors will arise.\r\n\r\nBut in the second case (where we are saving to a local dir), the *train* directory is created under *squad* which in turn is created under `tempdir`, not under `.` (current dir).\r\n\r\nSo, all of this probably doesn't help solving the issue but it might help creating a better test, and it also makes me wonder why are we saving to a temporary dir in `save_to_disk` anyway? I mean, won't it be removed with all its contents upon execution completion? what's the point then? ", "CLosing in favor of #1923" ]
null
[]
Fix save_to_disk issue
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1920/timeline
Fixes #1919
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1920.diff", "html_url": "https://github.com/huggingface/datasets/pull/1920", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1920.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1920" }
812,628,220
https://api.github.com/repos/huggingface/datasets/issues/1920/comments
MDExOlB1bGxSZXF1ZXN0NTc2OTQ5NzI2
null
1,920
https://api.github.com/repos/huggingface/datasets/issues/1920/events
true

Record 6: issue #1919
closed
2021-02-20T14:18:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/1919
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1919/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1919/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/M-Salti", "id": 9285264, "login": "M-Salti", "node_id": "MDQ6VXNlcjkyODUyNjQ=", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "repos_url": "https://api.github.com/users/M-Salti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "type": "User", "url": "https://api.github.com/users/M-Salti" }
https://github.com/huggingface/datasets/issues/1919
[]
false
2021-03-03T17:40:27Z
2021-03-03T17:40:27Z
null
[ "Hi thanks for reporting and for proposing a fix :)\r\n\r\nI just merged a fix, feel free to try it from the master branch !", "Closing since this has been fixed by #1923" ]
completed
[]
Failure to save with save_to_disk
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1919/timeline
When I try to save a dataset locally using the `save_to_disk` method I get the error: ```bash FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow' ``` To replicate: 1. Install `datasets` from master 2. Run this code: ```python from datasets import load_dataset squad = load_dataset("squad") # or any other dataset squad.save_to_disk("squad") # error here ``` The problem is that the method is not creating a directory with the name `dataset_path` for saving the dataset in (i.e. it's not creating the *train* and *validation* directories in this case). After creating the directory the problem resolves. I'll open a PR soon doing that and linking this issue.
https://api.github.com/repos/huggingface/datasets
null
812,626,872
https://api.github.com/repos/huggingface/datasets/issues/1919/comments
MDU6SXNzdWU4MTI2MjY4NzI=
null
1,919
https://api.github.com/repos/huggingface/datasets/issues/1919/events
false

Record 7: pull request #1918
closed
2021-02-20T07:32:17Z
null
https://api.github.com/repos/huggingface/datasets/issues/1918
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1918/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1918/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/M-Salti", "id": 9285264, "login": "M-Salti", "node_id": "MDQ6VXNlcjkyODUyNjQ=", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "repos_url": "https://api.github.com/users/M-Salti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "type": "User", "url": "https://api.github.com/users/M-Salti" }
https://github.com/huggingface/datasets/pull/1918
[]
false
2021-02-22T13:35:06Z
2021-02-22T13:35:06Z
null
[]
null
[]
Fix QA4MRE download URLs
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1918/timeline
The URLs in the `dataset_infos` and `README` are correct, only the ones in the download script needed updating.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1918.diff", "html_url": "https://github.com/huggingface/datasets/pull/1918", "merged_at": "2021-02-22T13:35:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1918.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1918" }
812,541,510
https://api.github.com/repos/huggingface/datasets/issues/1918/comments
MDExOlB1bGxSZXF1ZXN0NTc2ODg2OTQ0
null
1,918
https://api.github.com/repos/huggingface/datasets/issues/1918/events
true

Record 8: issue #1917
closed
2021-02-19T22:13:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/1917
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1917/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1917/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/900951?v=4", "events_url": "https://api.github.com/users/yosiasz/events{/privacy}", "followers_url": "https://api.github.com/users/yosiasz/followers", "following_url": "https://api.github.com/users/yosiasz/following{/other_user}", "gists_url": "https://api.github.com/users/yosiasz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yosiasz", "id": 900951, "login": "yosiasz", "node_id": "MDQ6VXNlcjkwMDk1MQ==", "organizations_url": "https://api.github.com/users/yosiasz/orgs", "received_events_url": "https://api.github.com/users/yosiasz/received_events", "repos_url": "https://api.github.com/users/yosiasz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yosiasz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yosiasz/subscriptions", "type": "User", "url": "https://api.github.com/users/yosiasz" }
https://github.com/huggingface/datasets/issues/1917
[]
false
2021-02-19T22:41:11Z
2021-02-19T22:40:28Z
null
[ "upgraded to php 3.9.2 and it works!" ]
completed
[]
UnicodeDecodeError: windows 10 machine
NONE
https://api.github.com/repos/huggingface/datasets/issues/1917/timeline
Windows 10 Php 3.6.8 when running ``` import datasets oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am") print(oscar_am["train"][0]) ``` I get the following error ``` file "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to <undefined> ```
https://api.github.com/repos/huggingface/datasets
null
812,390,178
https://api.github.com/repos/huggingface/datasets/issues/1917/comments
MDU6SXNzdWU4MTIzOTAxNzg=
null
1,917
https://api.github.com/repos/huggingface/datasets/issues/1917/events
false

Record 9: pull request #1916
closed
2021-02-19T19:51:25Z
null
https://api.github.com/repos/huggingface/datasets/issues/1916
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1916/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1916/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/1916
[]
false
2021-02-22T14:56:56Z
2021-02-22T13:32:49Z
null
[ "Hmmm this one broke master. I'm fixing it.\r\n\r\nMaybe because your branch was outdated ?", "Sorry @lhoestq, I forgot to update the imports... :/", "It's fine, the CI should have caught this tbh. Not sure why it did't fail" ]
null
[]
Remove unused py_utils objects
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1916/timeline
Remove unused/unnecessary py_utils functions/classes.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1916.diff", "html_url": "https://github.com/huggingface/datasets/pull/1916", "merged_at": "2021-02-22T13:32:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/1916.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1916" }
812,291,984
https://api.github.com/repos/huggingface/datasets/issues/1916/comments
MDExOlB1bGxSZXF1ZXN0NTc2NjgwNjY5
null
1,916
https://api.github.com/repos/huggingface/datasets/issues/1916/events
true

Record 10: issue #1915
closed
2021-02-19T18:11:32Z
null
https://api.github.com/repos/huggingface/datasets/issues/1915
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1915/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1915/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/18504534?v=4", "events_url": "https://api.github.com/users/nitarakad/events{/privacy}", "followers_url": "https://api.github.com/users/nitarakad/followers", "following_url": "https://api.github.com/users/nitarakad/following{/other_user}", "gists_url": "https://api.github.com/users/nitarakad/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nitarakad", "id": 18504534, "login": "nitarakad", "node_id": "MDQ6VXNlcjE4NTA0NTM0", "organizations_url": "https://api.github.com/users/nitarakad/orgs", "received_events_url": "https://api.github.com/users/nitarakad/received_events", "repos_url": "https://api.github.com/users/nitarakad/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nitarakad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nitarakad/subscriptions", "type": "User", "url": "https://api.github.com/users/nitarakad" }
https://github.com/huggingface/datasets/issues/1915
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2021-03-03T17:40:48Z
2021-03-03T17:40:48Z
null
[ "Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.\r\nI'm working on a fix", "I just merged a fix :)\r\n\r\nWe'll do a patch release soon. In the meantime feel free to try it from the master branch\r\nThanks again for reporting !", "Closing since this has been fixed by #1925" ]
completed
[]
Unable to download `wiki_dpr`
NONE
https://api.github.com/repos/huggingface/datasets/issues/1915/timeline
I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran: `curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")` However, I got the following error: `datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}` I tried adding in flags `with_embeddings=False` and `with_index=False`: `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False, embeddings_name="multiset", index_name="no_index")` But I got the following error: `raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_5’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_15’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_30’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_36’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_18’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_41’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_13’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_48’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_10’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_23’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_14’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_34’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_43’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_40’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_47’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_3’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_24’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_7’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_33’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_46’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_42’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_27’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_29’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_26’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_22’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_4’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_20’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_39’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_6’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_16’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_8’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_35’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_49’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_17’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_25’, 
‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_0’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_38’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_12’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_44’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_1’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_32’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_19’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_31’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_37’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_9’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_11’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_21’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_28’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_45’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_2’}` Is there anything else I need to set to download the dataset? **UPDATE**: just running `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)` gives me the same error.
https://api.github.com/repos/huggingface/datasets
null
812,229,654
https://api.github.com/repos/huggingface/datasets/issues/1915/comments
MDU6SXNzdWU4MTIyMjk2NTQ=
null
1,915
https://api.github.com/repos/huggingface/datasets/issues/1915/events
false

Record 11: pull request #1914
closed
2021-02-19T16:12:34Z
null
https://api.github.com/repos/huggingface/datasets/issues/1914
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1914/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1914/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/1914
[]
false
2021-02-21T19:48:03Z
2021-02-21T19:48:03Z
null
[]
null
[]
Fix logging imports and make all datasets use library logger
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1914/timeline
Fix library relative logging imports and make all datasets use library logger.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1914.diff", "html_url": "https://github.com/huggingface/datasets/pull/1914", "merged_at": "2021-02-21T19:48:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/1914.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1914" }
812,149,201
https://api.github.com/repos/huggingface/datasets/issues/1914/comments
MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz
null
1,914
https://api.github.com/repos/huggingface/datasets/issues/1914/events
true

Record 12: pull request #1913
closed
2021-02-19T15:43:45Z
null
https://api.github.com/repos/huggingface/datasets/issues/1913
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1913/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1913/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1913
[]
false
2021-02-19T18:36:12Z
2021-02-19T18:36:11Z
null
[ "Just so I understand how it can be used in practice, do you have an example showing how to load a text dataset with this option?", "Sure ! Here is an example:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"text\", keep_linebreaks=True, data_files=...)\r\n```\r\n\r\nI'll update the documentation to explain this", "Perfect!" ]
null
[]
Add keep_linebreaks parameter to text loader
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1913/timeline
As asked in #870 and https://github.com/huggingface/transformers/issues/10269 there should be a parameter to keep the linebreaks when loading a text dataset. cc @sgugger @jncasey
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1913.diff", "html_url": "https://github.com/huggingface/datasets/pull/1913", "merged_at": "2021-02-19T18:36:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/1913.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1913" }
812,127,307
https://api.github.com/repos/huggingface/datasets/issues/1913/comments
MDExOlB1bGxSZXF1ZXN0NTc2NTQ0NjQw
null
1,913
https://api.github.com/repos/huggingface/datasets/issues/1913/events
true

Record 13: pull request #1912
closed
2021-02-19T13:42:34Z
null
https://api.github.com/repos/huggingface/datasets/issues/1912
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 4, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/1912/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1912/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1912
[]
false
2021-02-24T13:44:53Z
2021-02-24T13:44:53Z
null
[ "So much better - thank you for doing that, @lhoestq!", "Also fixed the `uncorpus` urls for wmt19 ru-en and zh-en for https://github.com/huggingface/datasets/issues/1893", "Thanks!\r\nCan this be merged sooner? \r\nI manually update it and it works well." ]
null
[]
Update: WMT - use mirror links
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1912/timeline
As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts. Now downloading the wmt datasets is blazing fast :) cc @stas00 @patrickvonplaten
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1912.diff", "html_url": "https://github.com/huggingface/datasets/pull/1912", "merged_at": "2021-02-24T13:44:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/1912.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1912" }
812,034,140
https://api.github.com/repos/huggingface/datasets/issues/1912/comments
MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx
null
1,912
https://api.github.com/repos/huggingface/datasets/issues/1912/events
true

Record 14: issue #1911
open
2021-02-19T13:09:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/1911
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1911/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1911/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4", "events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}", "followers_url": "https://api.github.com/users/ayubSubhaniya/followers", "following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}", "gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ayubSubhaniya", "id": 20911334, "login": "ayubSubhaniya", "node_id": "MDQ6VXNlcjIwOTExMzM0", "organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs", "received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events", "repos_url": "https://api.github.com/users/ayubSubhaniya/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions", "type": "User", "url": "https://api.github.com/users/ayubSubhaniya" }
https://github.com/huggingface/datasets/issues/1911
[]
false
2021-02-23T07:34:44Z
null
null
[ "@thomwolf @lhoestq can you guys please take a look and recommend some solution.", "am suspicious of this thing? what's the purpose of this? pickling and unplickling\r\n`self = pickle.loads(pickle.dumps(self))`\r\n\r\n```\r\n def save_to_disk(self, dataset_path: str, fs=None):\r\n \"\"\"\r\n Saves a dataset to a dataset directory, or in a filesystem using either :class:`datasets.filesystem.S3FileSystem` or any implementation of ``fsspec.spec.AbstractFileSystem``.\r\n\r\n Args:\r\n dataset_path (``str``): path (e.g. ``dataset/train``) or remote uri (e.g. ``s3://my-bucket/dataset/train``) of the dataset directory where the dataset will be saved to\r\n fs (Optional[:class:`datasets.filesystem.S3FileSystem`,``fsspec.spec.AbstractFileSystem``], `optional`, defaults ``None``): instance of :class:`datasets.filesystem.S3FileSystem` or ``fsspec.spec.AbstractFileSystem`` used to download the files from remote filesystem.\r\n \"\"\"\r\n assert (\r\n not self.list_indexes()\r\n ), \"please remove all the indexes using `dataset.drop_index` before saving a dataset\"\r\n self = pickle.loads(pickle.dumps(self))\r\n ```", "It's been 24 hours and sadly it's still running. With not a single byte written", "Tried finding the root cause but was unsuccessful.\r\nI am using lazy tokenization with `dataset.set_transform()`, it works like a charm with almost same performance as pre-compute.", "Hi ! This very probably comes from the hack you used.\r\n\r\nThe pickling line was added an a sanity check because save_to_disk uses the same assumptions as pickling for a dataset object. The main assumption is that memory mapped pyarrow tables must be reloadable from the disk. In your case it's not possible since you altered the pyarrow table.\r\nI would suggest you to rebuild a valid Dataset object from your new pyarrow table. To do so you must first save your new table to a file, and then make a new Dataset object from that arrow file.\r\n\r\nYou can save the raw arrow table (without all the `datasets.Datasets` metadata) by calling `map` with `cache_file_name=\"path/to/outut.arrow\"` and `function=None`. Having `function=None` makes the `map` write your dataset on disk with no data transformation.\r\n\r\nOnce you have your new arrow file, load it with `datasets.Dataset.from_file` to have a brand new Dataset object :)\r\n\r\nIn the future we'll have a better support for the fast filtering method from pyarrow so you don't have to do this very unpractical workaround. Since it breaks somes assumptions regarding the core behavior of Dataset objects, this is very discouraged.", "Thanks, @lhoestq for your response. Will try your solution and let you know." ]
null
[]
Saving processed dataset running infinitely
NONE
https://api.github.com/repos/huggingface/datasets/issues/1911/timeline
I have a text dataset of size 220M. For pre-processing, I need to tokenize this and filter rows with the large sequence. My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes. filter() function was way to slow, so I used a hack to use pyarrow filter table function, which is damm fast. Mentioned [here](https://github.com/huggingface/datasets/issues/1796) ```dataset._data = dataset._data.filter(...)``` It took 1 hr for the filter. Then i use `save_to_disk()` on processed dataset and it is running forever. I have been waiting since 8 hrs, it has not written a single byte. Infact it has actually read from disk more than 100GB, screenshot below shows the stats using `iotop`. Second process is the one. <img width="1672" alt="Screenshot 2021-02-19 at 6 36 53 PM" src="https://user-images.githubusercontent.com/20911334/108508197-7325d780-72e1-11eb-8369-7c057d137d81.png"> I am not able to figure out, whether this is some issue with dataset library or that it is due to my hack for filter() function.
https://api.github.com/repos/huggingface/datasets
null
812,009,956
https://api.github.com/repos/huggingface/datasets/issues/1911/comments
MDU6SXNzdWU4MTIwMDk5NTY=
null
1,911
https://api.github.com/repos/huggingface/datasets/issues/1911/events
false

Record 15: pull request #1910
closed
2021-02-19T05:12:30Z
null
https://api.github.com/repos/huggingface/datasets/issues/1910
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1910/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4", "events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}", "followers_url": "https://api.github.com/users/ZihanWangKi/followers", "following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}", "gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZihanWangKi", "id": 21319243, "login": "ZihanWangKi", "node_id": "MDQ6VXNlcjIxMzE5MjQz", "organizations_url": "https://api.github.com/users/ZihanWangKi/orgs", "received_events_url": "https://api.github.com/users/ZihanWangKi/received_events", "repos_url": "https://api.github.com/users/ZihanWangKi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions", "type": "User", "url": "https://api.github.com/users/ZihanWangKi" }
https://github.com/huggingface/datasets/pull/1910
[]
false
2021-03-04T22:02:47Z
2021-03-04T22:02:47Z
null
[ "It looks like this PR now includes changes to many other files than the ones for CoNLLpp.\r\n\r\nTo fix that feel free to create another branch and another PR.\r\n\r\nThis was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch." ]
null
[]
Adding CoNLLpp dataset.
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1910/timeline
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1910.diff", "html_url": "https://github.com/huggingface/datasets/pull/1910", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1910.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1910" }
811,697,108
https://api.github.com/repos/huggingface/datasets/issues/1910/comments
MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3
null
1,910
https://api.github.com/repos/huggingface/datasets/issues/1910/events
true

Record 16: issue #1907
closed
2021-02-18T22:25:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/1907
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/francisco-perez-sorrosal", "id": 918006, "login": "francisco-perez-sorrosal", "node_id": "MDQ6VXNlcjkxODAwNg==", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "type": "User", "url": "https://api.github.com/users/francisco-perez-sorrosal" }
https://github.com/huggingface/datasets/issues/1907
[]
false
2021-02-22T23:22:05Z
2021-02-22T23:22:04Z
null
[ "Hi ! :)\r\n\r\nThis looks like the same issue as https://github.com/huggingface/datasets/issues/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe error says that the checksum of the downloaded file doesn't match because google drive returns a text file with the \"Quota Exceeded\" error instead of the actual data file.", "Thanks @lhoestq! Yes, it seems back to normal after a couple of days." ]
completed
[]
DBPedia14 Dataset Checksum bug?
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1907/timeline
Hi there!!! I've been using successfully the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase in the last couple of weeks, but in the last couple of days now I get this error: ``` Traceback (most recent call last): File "./conditional_classification/basic_pipeline.py", line 178, in <module> main() File "./conditional_classification/basic_pipeline.py", line 128, in main corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class, File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data datasets = load_dataset(self.name, split=dataset_split) File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset builder_instance.download_and_prepare( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare self._download_and_prepare( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare verify_checksums( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k'] ``` I've seen this has happened before in other datasets as reported in #537. I've tried clearing my cache and call again `load_dataset` but still is not working. My same codebase is successfully downloading and using other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days. Can you please check if there's a problem with the checksums? Or this is related to any other stuff? I've seen that the path in the cache for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead maybe of `dbpedia_14`. Was this maybe a bug introduced recently? Thanks!
https://api.github.com/repos/huggingface/datasets
null
811,520,569
https://api.github.com/repos/huggingface/datasets/issues/1907/comments
MDU6SXNzdWU4MTE1MjA1Njk=
null
1,907
https://api.github.com/repos/huggingface/datasets/issues/1907/events
false

Record 17: issue #1906
open
2021-02-18T19:46:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/1906
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1906/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1906/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/justin-yan", "id": 7731709, "login": "justin-yan", "node_id": "MDQ6VXNlcjc3MzE3MDk=", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "repos_url": "https://api.github.com/users/justin-yan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "type": "User", "url": "https://api.github.com/users/justin-yan" }
https://github.com/huggingface/datasets/issues/1906
[]
false
2021-02-23T14:38:50Z
null
null
[ "We already have a ClassLabel type that does this kind of mapping between the label ids (integers) and actual label values (strings).\r\n\r\nI wonder if actually we should use the DictionaryType from Arrow and the Categorical type from pandas for the `datasets` ClassLabel feature type.\r\nCurrently ClassLabel corresponds to `pa.int64()` in pyarrow and `dtype('int64')` in pandas (so the label names are lost during conversions).\r\n\r\nWhat do you think ?", "Now that I've heard you explain ClassLabel, that makes a lot of sense! While DictionaryType for Arrow (I think) can have arbitrarily typed keys, so it won't cover all potential cases, pandas' Category is *probably* the most common use for that pyarrow type, and ClassLabel should match that perfectly?\r\n\r\nOther thoughts:\r\n\r\n- changing the resulting patype on ClassLabel might be backward-incompatible? I'm not totally sure if users of the `datasets` library tend to directly access the `patype` attribute (I don't think we really do, but we haven't been using it for very long yet).\r\n- would ClassLabel's dtype change to `dict[int64, string]`? It seems like in practice a ClassLabel (when not explicitly specified) would be constructed from the DictionaryType branch of `generate_from_arrow_type`, so it's not totally clear to me that anyone ever actually accesses/uses that dtype?\r\n- I don't quite know how `.int2str` and `.str2int` are used in practice - would those be kept? Perhaps the implementation might actually be substantially smaller if we can just delegate to pyarrow's dict methods?\r\n\r\nAnother idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L932 , and then don't touch anything else.\r\n\r\nIn practice, I don't think this would be backward-incompatible in a way anyone would care about since the current behavior just throws an exception, and this way, we could support *reading* a pandas Categorical into a `Dataset` as a ClassLabel. I *think* from there, while it would require some custom glue it wouldn't be too hard to convert the ClassLabel into a pandas Category if we want to go back - I think this would improve on the current behavior without risking changing the behavior of ClassLabel in a backward-incompat way.\r\n\r\nThoughts? I'm not sure if this is overly cautious. Whichever approach you think is better, I'd be happy to take it on!\r\n", "I think we can first keep the int64 precision but with an arrow Dictionary for ClassLabel, and focus on the connection with arrow and pandas.\r\n\r\nIn this scope, I really like the idea of checking for the dictionary type:\r\n\r\n> Another idea that just occurred to me: add a branch in here to generate a ClassLabel if the dict key is int64 and the values are string: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L932 , and then don't touch anything else.\r\n\r\nThis looks like a great start.\r\n\r\nThen as you said we'd have to add the conversion from classlabel to the correct arrow dictionary type. Arrow is already able to convert from arrow Dictionary to pandas Categorical so it should be enough.\r\n\r\nI can see two things that we must take case of to make this change backward compatible:\r\n- first we must still be able to load an arrow file with arrow int64 dtype and `datasets` ClassLabel type without crashing. 
This can be fixed by casting the arrow int64 array to an arrow Dictionary array on-the-fly when loading the table in the ArrowReader.\r\n- then we still have to return integers when accessing examples from a ClassLabel column. Currently it would return the strings values since it's based on the pandas behavior for converting from pandas to python/numpy. To do so we just have to adapt the python/numpy extractors in formatting.py (it takes care of converting an arrow table to a dictionary of python objects by doing arrow table -> pandas dataframe -> python dictionary)\r\n\r\nAny help on this matter is very much welcome :)" ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
Feature Request: Support for Pandas `Categorical`
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1906/timeline
``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table ``` I'm curious if https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L796 could be built out in a way similar to `Sequence`? e.g. a `Map` class (or whatever name the maintainers might prefer) that can accept: ``` index_type = generate_from_arrow_type(pa_type.index_type) value_type = generate_from_arrow_type(pa_type.value_type) ``` and then additional code points to modify: - FeatureType: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L694 - A branch to handle Map in get_nested_type: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L719 - I don't quite understand what `encode_nested_example` does but perhaps a branch there? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L755 - Similarly, I don't quite understand why `Sequence` is used this way in `generate_from_dict`, but perhaps a branch here? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L775 I couldn't find other usages of `Sequence` outside of defining specific datasets, so I'm not sure if that's a comprehensive set of touchpoints.
https://api.github.com/repos/huggingface/datasets
null
811,405,274
https://api.github.com/repos/huggingface/datasets/issues/1906/comments
MDU6SXNzdWU4MTE0MDUyNzQ=
null
1,906
https://api.github.com/repos/huggingface/datasets/issues/1906/events
false
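The #1906 thread above reasons about mapping Arrow's `DictionaryType` and pandas' `Categorical` onto the `ClassLabel` feature. Below is a minimal sketch of that mapping using only public pyarrow, pandas, and `datasets` APIs; the final `ClassLabel(names=...)` step is an assumption about how the proposed branch could behave, not the library's current behavior (which, as the issue shows, raises `NotImplementedError`).

```python
import pandas as pd
import pyarrow as pa
from datasets import ClassLabel

# A pandas Categorical column is dictionary-encoded when converted to Arrow.
df = pd.DataFrame({"label": pd.Series(["a", "b", "c", "a"], dtype="category")})
table = pa.Table.from_pandas(df)
label_type = table.schema.field("label").type
print(label_type, pa.types.is_dictionary(label_type))  # dictionary<...> True

# Sketch of the proposed branch: when the indices are integers and the values are
# strings, build a ClassLabel from the dictionary values (assumed behavior only).
chunk = table.column("label").chunk(0)   # a pyarrow DictionaryArray
names = chunk.dictionary.to_pylist()     # ['a', 'b', 'c']
ids = chunk.indices.to_pylist()          # [0, 1, 2, 0]
feature = ClassLabel(names=names)
print(feature.int2str(ids))              # ['a', 'b', 'c', 'a']

# Arrow already converts the dictionary column back to a pandas Categorical.
print(table.to_pandas()["label"].dtype)  # category
```

The key observation is that `pa.Table.from_pandas` already dictionary-encodes categorical columns, so a feature type only needs to read the dictionary values off the Arrow column.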
closed
2021-02-18T19:15:31Z
null
https://api.github.com/repos/huggingface/datasets/issues/1905
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1905/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1905/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/justin-yan", "id": 7731709, "login": "justin-yan", "node_id": "MDQ6VXNlcjc3MzE3MDk=", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "repos_url": "https://api.github.com/users/justin-yan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "type": "User", "url": "https://api.github.com/users/justin-yan" }
https://github.com/huggingface/datasets/pull/1905
[]
false
2021-02-20T22:01:30Z
2021-02-20T22:01:30Z
null
[ "Also - I took a stab at updating the docs, but I'm not sure how to actually check the outputs to see if it's formatted properly." ]
null
[]
Standardizing datasets.dtypes
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1905/timeline
This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here). This moves away from `str(pyarrow.DataType)` as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes. I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1905.diff", "html_url": "https://github.com/huggingface/datasets/pull/1905", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1905.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1905" }
811,384,174
https://api.github.com/repos/huggingface/datasets/issues/1905/comments
MDExOlB1bGxSZXF1ZXN0NTc1OTIxMDk1
null
1,905
https://api.github.com/repos/huggingface/datasets/issues/1905/events
true
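PR #1905 above replaces `str(pyarrow.DataType)` round-tripping with an explicit list of supported `Value` dtypes. The sketch below shows the general shape of such an explicit mapping; the entries and the function name are illustrative only and are not the table merged in the PR.

```python
import pyarrow as pa

# Illustrative dtype-string -> pyarrow type table (not the PR's actual list).
_SUPPORTED_VALUE_DTYPES = {
    "bool": pa.bool_(),
    "int8": pa.int8(),
    "int32": pa.int32(),
    "int64": pa.int64(),
    "float32": pa.float32(),
    "float64": pa.float64(),
    "float": pa.float32(),    # alias kept for backward compatibility
    "double": pa.float64(),   # alias kept for backward compatibility
    "string": pa.string(),
}

def value_dtype_to_arrow(dtype: str) -> pa.DataType:
    """Map a Value dtype string to a pyarrow type, failing loudly on unknown names."""
    if dtype not in _SUPPORTED_VALUE_DTYPES:
        raise ValueError(f"{dtype!r} is not a supported Value dtype")
    return _SUPPORTED_VALUE_DTYPES[dtype]

print(value_dtype_to_arrow("double"))  # double (i.e. pa.float64())
```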
closed
2021-02-18T16:30:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/1904
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1904/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1904
[]
false
2021-02-18T17:10:03Z
2021-02-18T17:10:01Z
null
[ "Thanks!" ]
null
[]
Fix to_pandas for boolean ArrayXD
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1904/timeline
As noticed in #1887 the conversion of a dataset with a boolean ArrayXD feature types fails because of the underlying ListArray conversion to numpy requires `zero_copy_only=False`. zero copy is available for all primitive types except booleans see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22 cc @SBrandeis
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1904.diff", "html_url": "https://github.com/huggingface/datasets/pull/1904", "merged_at": "2021-02-18T17:10:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/1904.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1904" }
811,260,904
https://api.github.com/repos/huggingface/datasets/issues/1904/comments
MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0
null
1,904
https://api.github.com/repos/huggingface/datasets/issues/1904/events
true
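For context on the fix in #1904 above, here is a self-contained illustration of the pyarrow behavior it refers to: boolean arrays are bit-packed, so `to_numpy()` needs `zero_copy_only=False`, whereas primitive numeric arrays without nulls convert zero-copy.

```python
import pyarrow as pa

ints = pa.array([1, 2, 3])
bools = pa.array([True, False, True])

print(ints.to_numpy())                       # zero-copy works for primitive numerics
try:
    bools.to_numpy()                         # default zero_copy_only=True
except pa.ArrowInvalid as err:
    print("boolean arrays need a copy:", err)
print(bools.to_numpy(zero_copy_only=False))  # [ True False  True]
```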
closed
2021-02-18T14:23:12Z
null
https://api.github.com/repos/huggingface/datasets/issues/1903
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1903/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1903/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4", "events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}", "followers_url": "https://api.github.com/users/vrindaprabhu/followers", "following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}", "gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vrindaprabhu", "id": 16264631, "login": "vrindaprabhu", "node_id": "MDQ6VXNlcjE2MjY0NjMx", "organizations_url": "https://api.github.com/users/vrindaprabhu/orgs", "received_events_url": "https://api.github.com/users/vrindaprabhu/received_events", "repos_url": "https://api.github.com/users/vrindaprabhu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions", "type": "User", "url": "https://api.github.com/users/vrindaprabhu" }
https://github.com/huggingface/datasets/pull/1903
[]
false
2021-03-01T09:39:12Z
2021-03-01T09:39:12Z
null
[ "@patrickvonplaten could you please review and help me close this PR?", "@lhoestq Thank you so much for your comments and for patiently reviewing the code. Have _hopefully_ included all the suggested changes. Let me know if any more changes are required.\r\n\r\nSorry the code had lots of silly errors from my side!:' Will be more careful from next time! :)\r\n\r\n\r\n" ]
null
[]
Initial commit for the addition of TIMIT dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1903/timeline
Below points needs to be addressed: - Creation of dummy dataset is failing - Need to check on the data representation - License is not creative commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania Also the links (_except the download_) point to the ami corpus! ;-) @patrickvonplaten Requesting your comments, will be happy to address them!
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1903.diff", "html_url": "https://github.com/huggingface/datasets/pull/1903", "merged_at": "2021-03-01T09:39:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/1903.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1903" }
811,145,531
https://api.github.com/repos/huggingface/datasets/issues/1903/comments
MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2
null
1,903
https://api.github.com/repos/huggingface/datasets/issues/1903/events
true
closed
2021-02-18T09:42:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/1902
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1902/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1902/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1902
[]
false
2021-02-18T09:55:41Z
2021-02-18T09:55:41Z
null
[]
null
[]
Fix setimes_2 wmt urls
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1902/timeline
Continuation of #1901 Some other urls were missing https
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1902.diff", "html_url": "https://github.com/huggingface/datasets/pull/1902", "merged_at": "2021-02-18T09:55:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/1902.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1902" }
810,931,171
https://api.github.com/repos/huggingface/datasets/issues/1902/comments
MDExOlB1bGxSZXF1ZXN0NTc1NTQwMDM1
null
1,902
https://api.github.com/repos/huggingface/datasets/issues/1902/events
true
closed
2021-02-18T07:39:41Z
null
https://api.github.com/repos/huggingface/datasets/issues/1901
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1901/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/3883941?v=4", "events_url": "https://api.github.com/users/YangWang92/events{/privacy}", "followers_url": "https://api.github.com/users/YangWang92/followers", "following_url": "https://api.github.com/users/YangWang92/following{/other_user}", "gists_url": "https://api.github.com/users/YangWang92/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YangWang92", "id": 3883941, "login": "YangWang92", "node_id": "MDQ6VXNlcjM4ODM5NDE=", "organizations_url": "https://api.github.com/users/YangWang92/orgs", "received_events_url": "https://api.github.com/users/YangWang92/received_events", "repos_url": "https://api.github.com/users/YangWang92/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YangWang92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YangWang92/subscriptions", "type": "User", "url": "https://api.github.com/users/YangWang92" }
https://github.com/huggingface/datasets/pull/1901
[]
false
2021-02-18T15:07:20Z
2021-02-18T09:39:21Z
null
[]
null
[]
Fix OPUS dataset download errors
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1901/timeline
Replace http to https. https://github.com/huggingface/datasets/issues/854 https://discuss.huggingface.co/t/cannot-download-wmt16/2081
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1901.diff", "html_url": "https://github.com/huggingface/datasets/pull/1901", "merged_at": "2021-02-18T09:39:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/1901.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1901" }
810,845,605
https://api.github.com/repos/huggingface/datasets/issues/1901/comments
MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy
null
1,901
https://api.github.com/repos/huggingface/datasets/issues/1901/events
true
closed
2021-02-17T20:26:04Z
null
https://api.github.com/repos/huggingface/datasets/issues/1900
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1900/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1900/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/justin-yan", "id": 7731709, "login": "justin-yan", "node_id": "MDQ6VXNlcjc3MzE3MDk=", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "repos_url": "https://api.github.com/users/justin-yan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "type": "User", "url": "https://api.github.com/users/justin-yan" }
https://github.com/huggingface/datasets/pull/1900
[]
false
2021-02-19T18:27:11Z
2021-02-19T18:27:11Z
null
[ "OK! Thank you for the review - I will follow up with a separate PR for the comments here (https://github.com/huggingface/datasets/pull/1900#discussion_r578319725)!" ]
null
[]
Issue #1895: Bugfix for string_to_arrow timestamp[ns] support
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1900/timeline
Should resolve https://github.com/huggingface/datasets/issues/1895 The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType. While adding unit-testing, I noticed that support for the double/float types also don't invert correctly, so I added them, which I believe would hypothetically make this section of `Value` redundant: ``` def __post_init__(self): if self.dtype == "double": # fix inferred type self.dtype = "float64" if self.dtype == "float": # fix inferred type self.dtype = "float32" ``` However, since I think Value.dtype is part of the public interface, removing that would result in a backward-incompatible change, so I didn't muck with that. The rest of the PR consists of docstrings that I added while developing locally so I could keep track of which functions were supposed to be inverses of each other, and thought I'd include them initially in case you want to keep them around, but I'm happy to delete or remove any of them at your request!
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1900.diff", "html_url": "https://github.com/huggingface/datasets/pull/1900", "merged_at": "2021-02-19T18:27:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/1900.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1900" }
810,512,488
https://api.github.com/repos/huggingface/datasets/issues/1900/comments
MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3
null
1,900
https://api.github.com/repos/huggingface/datasets/issues/1900/events
true
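#1900 above adds parsing so that the `str(pa_type)` form of timestamps (e.g. `"timestamp[ns]"`) can be turned back into a pyarrow `TimestampType`. Below is a hedged sketch of one way to write that inverse; the regex and function name are illustrative, not the code that was merged.

```python
import re
import pyarrow as pa

# str(pa.timestamp("ns"))          == "timestamp[ns]"
# str(pa.timestamp("s", tz="UTC")) == "timestamp[s, tz=UTC]"
_TIMESTAMP_RE = re.compile(r"^timestamp\[(s|ms|us|ns)(?:, tz=(.+))?\]$")

def timestamp_string_to_arrow(dtype: str) -> pa.DataType:
    """Invert str(pa.timestamp(...)) -- illustrative sketch only."""
    match = _TIMESTAMP_RE.match(dtype)
    if match is None:
        raise ValueError(f"{dtype!r} does not look like a timestamp dtype")
    unit, tz = match.groups()
    return pa.timestamp(unit, tz=tz)

assert timestamp_string_to_arrow("timestamp[ns]") == pa.timestamp("ns")
assert timestamp_string_to_arrow("timestamp[s, tz=UTC]") == pa.timestamp("s", tz="UTC")
```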
closed
2021-02-17T15:53:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/1899
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1899
[]
false
2021-02-17T17:20:49Z
2021-02-17T17:20:49Z
null
[]
null
[]
Fix: ALT - fix duplicated examples in alt-parallel
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1899/timeline
As noticed in #1898 by @10-zin the examples of the `alt-parallel` configurations have all the same values for the `translation` field. This was due to a bad copy of a python dict. This PR fixes that.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1899.diff", "html_url": "https://github.com/huggingface/datasets/pull/1899", "merged_at": "2021-02-17T17:20:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/1899.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1899" }
810,308,332
https://api.github.com/repos/huggingface/datasets/issues/1899/comments
MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4
null
1,899
https://api.github.com/repos/huggingface/datasets/issues/1899/events
true
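The root cause described in #1899 above ("a bad copy of a python dict") is the classic aliasing mistake where every generated example ends up pointing at the same mutable dict. A toy reproduction, unrelated to the actual ALT loader code:

```python
import copy

translation = {}
aliased, copied = [], []

for i, text in enumerate(["sentence one", "sentence two", "sentence three"]):
    translation["en"] = text
    aliased.append({"id": i, "translation": translation})                # same dict every time
    copied.append({"id": i, "translation": copy.deepcopy(translation)})  # independent snapshot

print([ex["translation"]["en"] for ex in aliased])  # ['sentence three'] * 3 -> the bug
print([ex["translation"]["en"] for ex in copied])   # three distinct sentences -> the fix
```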
closed
2021-02-17T12:51:42Z
null
https://api.github.com/repos/huggingface/datasets/issues/1898
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4", "events_url": "https://api.github.com/users/10-zin/events{/privacy}", "followers_url": "https://api.github.com/users/10-zin/followers", "following_url": "https://api.github.com/users/10-zin/following{/other_user}", "gists_url": "https://api.github.com/users/10-zin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/10-zin", "id": 33179372, "login": "10-zin", "node_id": "MDQ6VXNlcjMzMTc5Mzcy", "organizations_url": "https://api.github.com/users/10-zin/orgs", "received_events_url": "https://api.github.com/users/10-zin/received_events", "repos_url": "https://api.github.com/users/10-zin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/10-zin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/10-zin/subscriptions", "type": "User", "url": "https://api.github.com/users/10-zin" }
https://github.com/huggingface/datasets/issues/1898
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2021-02-19T06:18:46Z
2021-02-19T06:18:46Z
null
[ "Thanks for reporting. This looks like a very bad issue. I'm looking into it", "I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch", "Thanks!!! works perfectly in the bleading edge master version", "Closed by #1899" ]
completed
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
ALT dataset has repeating instances in all splits
NONE
https://api.github.com/repos/huggingface/datasets/issues/1898/timeline
The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/ Seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized, and has all splits. Would be great if this could be fixed :) Added a snapshot of the contents from the `explore-dataset` feature, for quick reference. ![image](https://user-images.githubusercontent.com/33179372/108206321-442a2d00-714c-11eb-882f-b4b6e708ef9c.png)
https://api.github.com/repos/huggingface/datasets
null
810,157,251
https://api.github.com/repos/huggingface/datasets/issues/1898/comments
MDU6SXNzdWU4MTAxNTcyNTE=
null
1,898
https://api.github.com/repos/huggingface/datasets/issues/1898/events
false
closed
2021-02-17T11:48:24Z
null
https://api.github.com/repos/huggingface/datasets/issues/1897
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1897
[]
false
2021-02-17T13:15:16Z
2021-02-17T13:15:15Z
null
[]
null
[]
Fix PandasArrayExtensionArray conversion to native type
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1897/timeline
To make the conversion to csv work in #1887 , we need PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types. However previously pandas.core.internals.ExtensionBlock.to_native_types would fail with a PandasExtensionArray because 1. the PandasExtensionArray.isna method was wrong 2. the conversion of a PandasExtensionArray to a numpy array with dtype=object was returning a multidimensional array while pandas expects a 1D array in this case (more info [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray)). I fixed these two issues and now the conversion to native types works, and so does the export to csv. cc @SBrandeis
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1897.diff", "html_url": "https://github.com/huggingface/datasets/pull/1897", "merged_at": "2021-02-17T13:15:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/1897.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1897" }
810,113,263
https://api.github.com/repos/huggingface/datasets/issues/1897/comments
MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy
null
1,897
https://api.github.com/repos/huggingface/datasets/issues/1897/events
true
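The two fixes listed in #1897 above hinge on the pandas ExtensionArray contract: `isna()` must return a 1D boolean mask, and a `dtype=object` conversion must yield a 1D object array with one nested value per row rather than a multidimensional array. A plain-NumPy sketch of that expectation; the shapes and variable names are made up and do not mirror the internals of `PandasArrayExtensionArray`.

```python
import numpy as np

# Pretend the extension array wraps three 2x2 matrices.
data = np.arange(12).reshape(3, 2, 2)

# Not what pandas wants: keeps the multidimensional shape.
wrong = np.asarray(data, dtype=object)
print(wrong.shape)  # (3, 2, 2)

# What pandas wants: a 1D object array with one nested array per row...
per_row = np.empty(len(data), dtype=object)
for i, row in enumerate(data):
    per_row[i] = row
print(per_row.shape)  # (3,)

# ...and a 1D boolean mask for isna().
mask = np.array([row is None for row in per_row], dtype=bool)
print(mask)  # [False False False]
```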
closed
2021-02-16T20:38:04Z
null
https://api.github.com/repos/huggingface/datasets/issues/1895
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1895/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/justin-yan", "id": 7731709, "login": "justin-yan", "node_id": "MDQ6VXNlcjc3MzE3MDk=", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "repos_url": "https://api.github.com/users/justin-yan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "type": "User", "url": "https://api.github.com/users/justin-yan" }
https://github.com/huggingface/datasets/issues/1895
[]
false
2021-02-19T18:27:11Z
2021-02-19T18:27:11Z
null
[ "Thanks for reporting !\r\n\r\nYou're right, `string_to_arrow` should be able to take `\"timestamp[ns]\"` as input and return the right pyarrow timestamp type.\r\nFeel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)\r\n\r\nTo give you more context:\r\n\r\nAs you may know we define the features types of a dataset using the `Features` object in combination with feature types like `Value`. For example\r\n```python\r\nfeatures = Features({\r\n \"age\": Value(\"int32\")\r\n})\r\n```\r\nHowever under the hood we are actually using pyarrow to store the data, and so we have a mapping between the feature types of `datasets` and the types of pyarrow.\r\n\r\nFor example, the `Value` feature types are created from a pyarrow type with `Value(str(pa_type))`.\r\nHowever it looks like the conversion back to a pyarrow type doesn't work with `\"timestamp[ns]\"`.\r\nThis is the `string_to_arrow` function you highlighted that does this conversion, so we should fix that.\r\n\r\n", "Thanks for the clarification @lhoestq !\r\n\r\nThis may be a little bit of a stupid question, but I wanted to clarify one more thing before I took a stab at this:\r\n\r\nWhen the features get inferred, I believe they already have a pyarrow schema (https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L234).\r\n\r\nWe then convert it to a string (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) only to convert it back into the arrow type (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L143, and https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L35). Is there a reason for this round-trip?\r\n\r\nI'll open a PR later to add `timestamp` support to `string_to_arrow`, but I'd be curious to understand since it feels like there may be some opportunities to simplify!", "The objective in terms of design is to make it easy to create Features in a pythonic way. So for example we use a string to define a Value type.\r\nThat's why when inferring the Features from an arrow schema we have to find the right string definitions for Value types. I guess we could also have a constructor `Value.from_arrow_type` to avoid recreating the arrow type, but this could create silent errors if the pyarrow type doesn't have a valid mapping with the string definition. The \"round-trip\" is used to enforce that the ground truth is the string definition, not the pyarrow type, and also as a sanity check.\r\n\r\nLet me know if that makes sense ", "OK I think I understand now:\r\n\r\nFeatures are datasets' internal representation of a schema type, distinct from pyarrow's schema.\r\nValue() corresponds to pyarrow's \"primitive\" types (e.g. `int` or `string`, but not things like `list` or `dict`).\r\n`get_nested_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L698) and `generate_from_arrow_type()` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L778) *should* be inverses of each other, and similarly, for the primitive values, `string_to_arrow()` and `Value.__call__` (https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L146) should be inverses of each other?\r\n\r\nThanks for taking the time to answer - I just wanted to make sure I understood before opening a PR so I'm not disrupting anything about how the codebase is expected to work!", "Yes you're totally right :)" ]
completed
[]
Bug Report: timestamp[ns] not recognized
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1895/timeline
Repro: ``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type. ``` The factory function seems to be just "timestamp": https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp It seems like https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method. Alternatively, if I'm using this incorrectly (e.g. is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well! ``` $ pip list # only the relevant libraries/versions datasets 1.2.1 pandas 1.0.3 pyarrow 3.0.0 ```
https://api.github.com/repos/huggingface/datasets
null
809,630,271
https://api.github.com/repos/huggingface/datasets/issues/1895/comments
MDU6SXNzdWU4MDk2MzAyNzE=
null
1,895
https://api.github.com/repos/huggingface/datasets/issues/1895/events
false
open
2021-02-16T20:04:58Z
null
https://api.github.com/repos/huggingface/datasets/issues/1894
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1894/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1894/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
https://github.com/huggingface/datasets/issues/1894
[]
false
2021-02-17T18:52:28Z
null
null
[ "Hi sam !\r\nIndeed we can expect the performances to be very close since both MMapIndexedDataset and the `datasets` implem use memory mapping. With memory mapping what determines the I/O performance is the speed of your hard drive/SSD.\r\n\r\nIn terms of performance we're pretty close to the optimal speed for reading text, even though I found recently that we could still slightly improve speed for big datasets (see [here](https://github.com/huggingface/datasets/issues/1803)).\r\n\r\nIn terms of number of examples and example sizes, the only limit is the available disk space you have.\r\n\r\nI haven't used `psrecord` yet but it seems to be a very interesting tool for benchmarking. Currently for benchmarks we only have github actions to avoid regressions in terms of speed. But it would be cool to have benchmarks with comparisons with other dataset tools ! This would be useful to many people", "Also I would be interested to know what data types `MMapIndexedDataset` supports. Is there some documentation somewhere ?", "no docs haha, it's written to support integer numpy arrays.\r\n\r\nYou can build one in fairseq with, roughly:\r\n```bash\r\n\r\nwget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip\r\nunzip wikitext-103-raw-v1.zip\r\nexport dd=$HOME/fairseq-py/wikitext-103-raw\r\n\r\nexport mm_dir=$HOME/mmap_wikitext2\r\nmkdir -p gpt2_bpe\r\nwget -O gpt2_bpe/encoder.json https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json\r\nwget -O gpt2_bpe/vocab.bpe https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe\r\nwget -O gpt2_bpe/dict.txt https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt\r\nfor SPLIT in train valid; do \\\r\n python -m examples.roberta.multiprocessing_bpe_encoder \\\r\n --encoder-json gpt2_bpe/encoder.json \\\r\n --vocab-bpe gpt2_bpe/vocab.bpe \\\r\n --inputs /scratch/stories_small/${SPLIT}.txt \\\r\n --outputs /scratch/stories_small/${SPLIT}.bpe \\\r\n --keep-empty \\\r\n --workers 60; \\\r\ndone\r\n\r\nmkdir -p $mm_dir\r\nfairseq-preprocess \\\r\n --only-source \\\r\n --srcdict gpt2_bpe/dict.txt \\\r\n --trainpref $dd/wiki.train.bpe \\\r\n --validpref $dd/wiki.valid.bpe \\\r\n --destdir $mm_dir \\\r\n --workers 60 \\\r\n --dataset-impl mmap\r\n```\r\n\r\nI'm noticing in my benchmarking that it's much smaller on disk than arrow (200mb vs 900mb), and that both incur significant cost by increasing the number of data loader workers. \r\nThis somewhat old [post](https://ray-project.github.io/2017/10/15/fast-python-serialization-with-ray-and-arrow.html) suggests there are some gains to be had from using `pyarrow.serialize(array).tobuffer()`. I haven't yet figured out how much of this stuff `pa.Table` does under the hood.\r\n\r\nThe `MMapIndexedDataset` bottlenecks we are working on improving (by using arrow) are:\r\n1) `MMapIndexedDataset`'s index, which stores offsets, basically gets read in its entirety by each dataloading process.\r\n2) we have separate, identical, `MMapIndexedDatasets` on each dataloading worker, so there's redundancy there; we wonder if there is a way that arrow can somehow dedupe these in shared memory.\r\n\r\nIt will take me a few hours to get `MMapIndexedDataset` benchmarks out of `fairseq`/onto a branch in this repo, but I'm happy to invest the time if you're interested in collaborating on some performance hacking." ]
null
[]
benchmarking against MMapIndexedDataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1894/timeline
I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB of tokens). Questions: 1) Is this (basically identical) performance expected? 2) Is there a scenario where this library will outperform `MMapIndexedDataset`? (maybe more examples/larger examples?) 3) Should I be using different benchmarking tools than `psrecord`/how do you guys do benchmarks? Thanks in advance! Sam
https://api.github.com/repos/huggingface/datasets
null
809,609,654
https://api.github.com/repos/huggingface/datasets/issues/1894/comments
MDU6SXNzdWU4MDk2MDk2NTQ=
null
1,894
https://api.github.com/repos/huggingface/datasets/issues/1894/events
false
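For the benchmarking question in #1894 above, here is a minimal harness in the same spirit: time sequential batched reads from an on-disk `datasets` dataset while `psrecord` tracks CPU and memory from the outside. The dataset choice and batch size are arbitrary, and the fairseq `MMapIndexedDataset` counterpart is not shown.

```python
# bench_read.py -- rough read-throughput probe for a memory-mapped datasets Dataset.
import time
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")  # arbitrary corpus
batch_size = 1000

start = time.perf_counter()
n_chars = 0
for i in range(0, len(ds), batch_size):
    batch = ds[i : i + batch_size]                 # dict of column name -> list of values
    n_chars += sum(len(t) for t in batch["text"])
elapsed = time.perf_counter() - start
print(f"{len(ds)} rows / {n_chars} chars in {elapsed:.1f}s")
```

Running it as e.g. `psrecord "python bench_read.py" --interval 1 --plot bench.png` records CPU and resident memory over the run (flag names as documented by psrecord; worth double-checking against the installed version).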
closed
2021-02-16T18:39:58Z
null
https://api.github.com/repos/huggingface/datasets/issues/1893
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1893/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
https://github.com/huggingface/datasets/issues/1893
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2021-03-03T17:42:02Z
2021-03-03T17:42:02Z
null
[ "This was also mentioned in https://github.com/huggingface/datasets/issues/488 \r\n\r\nThe bucket where is data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https://conferences.unite.un.org/uncorpus/en/downloadoverview ?", "Closing since this has been fixed by #1912" ]
completed
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
wmt19 is broken
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1893/timeline
1. Check which lang pairs we have: `--dataset_name wmt19`: Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de'] 2. OK, let's pick `ru-en`: `--dataset_name wmt19 --dataset_config "ru-en"` no cookies: ``` Traceback (most recent call last): File "./run_seq2seq.py", line 661, in <module> main() File "./run_seq2seq.py", line 317, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 572, in download_and_prepare self._download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 628, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt19/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 276, in download_and_extract return self.extract(self.download(url_or_urls)) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 191, in download downloaded_path_or_paths = map_nested( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 233, in map_nested mapped = [ File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 234, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 172, in _single_map_nested return function(data_struct) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 211, in _download return cached_path(url_or_filename, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz ```
https://api.github.com/repos/huggingface/datasets
null
809,556,503
https://api.github.com/repos/huggingface/datasets/issues/1893/comments
MDU6SXNzdWU4MDk1NTY1MDM=
null
1,893
https://api.github.com/repos/huggingface/datasets/issues/1893/events
false
closed
2021-02-16T18:36:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/1892
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1892/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1892/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
https://github.com/huggingface/datasets/issues/1892
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2021-10-26T06:55:42Z
2021-03-25T11:53:23Z
null
[ "Yes that would be awesome. Not only the download speeds are awful, but also some files are missing.\r\nWe list all the URLs in the datasets/wmt19/wmt_utils.py so we can make a script to download them all and host on S3.\r\nAlso I think most of the materials are under the CC BY-NC-SA 3.0 license (must double check) so it should be possible to redistribute the data with no issues.\r\n\r\ncc @patrickvonplaten who knows more about the wmt scripts", "Yeah, the scripts are pretty ugly! A big refactor would make sense here...and I also remember that the datasets were veeery slow to download", "I'm downloading them.\r\nI'm starting with the ones hosted on http://data.statmt.org which are the slowest ones", "@lhoestq better to use our new git-based system than just raw S3, no? (that way we have built-in CDN etc.)", "Closing since the urls were changed to mirror urls in #1912 ", "Hi there! What about mirroring other datasets like [CCAligned](http://www.statmt.org/cc-aligned/) as well? All of them are really slow to download..." ]
completed
[]
request to mirror wmt datasets, as they are really slow to download
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1892/timeline
Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download. Thank you!
https://api.github.com/repos/huggingface/datasets
null
809,554,174
https://api.github.com/repos/huggingface/datasets/issues/1892/comments
MDU6SXNzdWU4MDk1NTQxNzQ=
null
1,892
https://api.github.com/repos/huggingface/datasets/issues/1892/events
false
closed
2021-02-16T18:29:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/1891
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1891/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1891/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
https://github.com/huggingface/datasets/issues/1891
[]
false
2022-10-05T12:48:38Z
2022-10-05T12:48:38Z
null
[ "This is the current error thrown for missing datasets:\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at C:\\Users\\Mario\\Desktop\\projects\\datasets\\missing_dataset\\missing_dataset.py or any data file in the same directory. Couldn't find 'missing_dataset' on the Hugging Face Hub either: FileNotFoundError: Dataset 'missing_dataset' doesn't exist on the Hub. If the repo is private, make sure you are authenticated with `use_auth_token=True` after logging in with `huggingface-cli login`.\r\n```\r\n\r\nSeems much more informative, so I think we can close this issue." ]
completed
[]
suggestion to improve a missing dataset error
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1891/timeline
I was using `--dataset_name wmt19` all was good. Then thought perhaps wmt20 is out, so I tried to use `--dataset_name wmt20`, got 3 different errors (1 repeated twice), none telling me the real issue - that `wmt20` isn't in the `datasets`: ``` True, predict_with_generate=True) Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 323, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 335, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./run_seq2seq.py", line 661, in <module> main() File "./run_seq2seq.py", line 317, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 706, in load_dataset module_path, hash, resolved_file_path = prepare_module( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 343, in prepare_module raise FileNotFoundError( FileNotFoundError: Couldn't find file locally at wmt20/wmt20.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py. The file is also not present on the master branch on github. ``` Suggestion: if it is not in a local path, check that there is an actual `https://github.com/huggingface/datasets/tree/master/datasets/wmt20` first and assert "dataset `wmt20` doesn't exist in datasets", rather than trying to find a load script - since the whole repo is not there. The error occured when running: ``` cd examples/seq2seq export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python ./run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_val_samples 500 --dataset_name wmt20 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " ``` Thanks.
https://api.github.com/repos/huggingface/datasets
null
809,550,001
https://api.github.com/repos/huggingface/datasets/issues/1891/comments
MDU6SXNzdWU4MDk1NTAwMDE=
null
1,891
https://api.github.com/repos/huggingface/datasets/issues/1891/events
false
closed
2021-02-16T15:11:47Z
null
https://api.github.com/repos/huggingface/datasets/issues/1890
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1890/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1890/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1890
[]
false
2021-02-16T15:12:34Z
2021-02-16T15:12:33Z
null
[]
null
[]
Reformat dataset cards section titles
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1890/timeline
Titles are formatted like [Foo](#foo) instead of just Foo
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1890.diff", "html_url": "https://github.com/huggingface/datasets/pull/1890", "merged_at": "2021-02-16T15:12:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/1890.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1890" }
809,395,586
https://api.github.com/repos/huggingface/datasets/issues/1890/comments
MDExOlB1bGxSZXF1ZXN0NTc0MjY0OTMx
null
1,890
https://api.github.com/repos/huggingface/datasets/issues/1890/events
true
closed
2021-02-16T12:38:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/1889
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1889/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1889/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
https://github.com/huggingface/datasets/pull/1889
[]
false
2021-02-18T18:42:37Z
2021-02-18T18:42:34Z
null
[ "Next step is going to add these two in the documentation ^^" ]
null
[]
Implement to_dict and to_pandas for Dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1889/timeline
With options to return a generator or the full dataset
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1889.diff", "html_url": "https://github.com/huggingface/datasets/pull/1889", "merged_at": "2021-02-18T18:42:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/1889.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1889" }
809,276,015
https://api.github.com/repos/huggingface/datasets/issues/1889/comments
MDExOlB1bGxSZXF1ZXN0NTc0MTY1NDAz
null
1,889
https://api.github.com/repos/huggingface/datasets/issues/1889/events
true
closed
2021-02-16T11:45:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/1888
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1888
[]
false
2021-03-30T14:01:03Z
2021-02-16T11:58:57Z
null
[ "Close #1872" ]
null
[]
Docs for adding new column on formatted dataset
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1888/timeline
As mentioned in #1872 we should add in the documentation how the format gets updated when new columns are added Close #1872
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1888.diff", "html_url": "https://github.com/huggingface/datasets/pull/1888", "merged_at": "2021-02-16T11:58:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/1888.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1888" }
809,241,123
https://api.github.com/repos/huggingface/datasets/issues/1888/comments
MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4
null
1,888
https://api.github.com/repos/huggingface/datasets/issues/1888/events
true
closed
2021-02-16T11:27:29Z
null
https://api.github.com/repos/huggingface/datasets/issues/1887
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1887/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
https://github.com/huggingface/datasets/pull/1887
[]
false
2021-02-19T09:41:59Z
2021-02-19T09:41:59Z
null
[ "@lhoestq I stumbled upon an interesting failure when adding tests for CSV serialization of `ArrayXD` features (see the failing unit tests in the CI)\r\n\r\nIt's due to the fact that booleans cannot be converted from arrow format to numpy without copy: https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy", "Good catch ! I must be able to fix that one by allowing copies for this kind of arrays.\r\nThis is the kind of surprise you get sometimes when playing with arrow x)", "Raising this error for booleans was introduced in https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22 without much explanations unfortunately.\r\nSo \"no copy\" only works for primitive types - except booleans.\r\nThis is confirmed in the source code at https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/array.pxi#L621\r\n\r\nI'm opening a PR to allow copies for booleans...", "I just merged the fix for boolean ArrayXD, feel free to merge from master to see if it fixes the ci :)", "@lhoestq unfirtunately, arrays of strings (or any other non-primitive type) require a copy too\r\n\r\nA list of primitive types can be found here: https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/types.pxi#L821\r\n\r\npyarrow provides a `is_primitive` function to check whether a type is primitive , I used it to set `zero_copy_only`\r\n\r\nAlso, `PandasArrayExtensionArray.isna` was using `numpy.isnan` which fails for arrays of strings. I replaced it with `pandas.isna`. Let me know what you think! :) " ]
null
[]
Implement to_csv for Dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1887/timeline
cc @thomwolf `to_csv` supports passing either a file path or a *binary* file object The writing is batched to avoid loading the whole table in memory
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1887.diff", "html_url": "https://github.com/huggingface/datasets/pull/1887", "merged_at": "2021-02-19T09:41:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/1887.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1887" }
809,229,809
https://api.github.com/repos/huggingface/datasets/issues/1887/comments
MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy
null
1,887
https://api.github.com/repos/huggingface/datasets/issues/1887/events
true
closed
2021-02-16T11:16:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/1886
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1886/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1886/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BirgerMoell", "id": 1704131, "login": "BirgerMoell", "node_id": "MDQ6VXNlcjE3MDQxMzE=", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "type": "User", "url": "https://api.github.com/users/BirgerMoell" }
https://github.com/huggingface/datasets/pull/1886
[]
false
2021-03-09T18:51:31Z
2021-03-09T18:51:31Z
null
[ "Does it make sense to make the domains as the different languages?\r\nA problem is that you need to download the datasets from the browser.\r\nOne idea would be to either contact Mozilla regarding API access to the dataset or make use of a headless browser for downloading the datasets (might be hard since we have to figure out how to host them). An even more creative idea would be to host the dataset inside a torrent and figure out a way to download specific datasets from within that torrent.\r\n\r\nHere is some information about the download authorization. They are hosting the data on S3.\r\n\r\nhttps://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html\r\n\r\nHere is an example of how a download link looks.\r\n\r\nhttps://mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com/cv-corpus-6.1-2020-12-11/nl.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3ND4UAQXB%2F20210217%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20210217T080740Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEGIaDCC6ALh%2FwIK9ovvRdCKSBCs5WaSJNsZ2h0SnhpnWFv4yiAJHJTe%2BY6pBcCqadRMs0RABHeQ2n1QDACJ5V9WOqIHfMfT0AI%2Bfe6iFkTGLgRrJOMYpgV%2FmIBcXCjeb72r4ZvudMA8tprkSxZsEh53bJkIDQx1tXqfpz0yoefM0geD3461suEGhHnLIyiwffrUpRg%2BkNZN9%2FLZZXpF5F2pogieKKV533Jetkd1xlWOR%2Bem9R2bENu2RV563XX3JvbWxSYN9IHkVT1xwd4ZiOpUtX7%2F2RoluJUKw%2BUPpyml3J%2FOPPGdr7CyPLjqNxdq9ceRi8lRybty64XvNYZGt45VNTQ3pkTTz4VpUCJAGkgxq95Ve%2BOwW%2Fsc8JtblTFKrH11vej62NB7C0n7JPPS4SLKXHKW%2B7ZbybcNf3BnsAVouPdsGTMslcgkD81b9trnjyXJdOZkzdHUf2KcWVXVceEsZnMhcCZQ1cJpI7qXPEk8QrKCQcNByPLHmPIEdHpj9IrIBKDkl2qO7VX7CCB65WDt2eZRltOcNHXWVFXFktMdQOQztI1j0XSZz2iOX4jPKKaqz193VEytlAqmehNi8pePOnxkP9Z1SP7d3I6rayuBF3phmpHxw499tY3ECYYgoCnJ6QSFa3KxMjFmEpQlmjxuwEMHd4CDL2FJYGcCiIxbCcL1r8ZE3%2BbGdcu7PRsVCHX3Huh%2FqGIaF4h40FgteN6teyKCHKOebs4EGMipb9xmEMZ9ZbVopz4bkhLdMTrjKon9w624Xem0MTPqN7XY%2BB6lRgrW8rd4%3D&X-Amz-Signature=28eabdfce72a472a70b0f9e1e2c37fe1471b5ec8ed60614fbe900bfa97ae1ac8&X-Amz-SignedHeaders=host\r\n\r\nIt could be that we simply need to make a http-request with the right parameters and we can download the datasets.", "> Wow, this looks great already! It's really a difficult dataset so thanks a lot for opening a PR.\r\n> I think the tagging tool is not too important for now and we can take a look at that later!\r\n> \r\n> At the moment, it would be very good to correctly generate some dummy data for all the possible languages. I think the structure of the `.tsv` file as you've noted in the PR is the one we want to use as the structure for `features = datasets.Features(`\r\n> \r\n> The splits `'Train\"`, `\"Test\"`, `\"Validation\"` look great to me! Because this is a special dataset that also has files called `\"Invalidated\"` I think the best option is to also add those as splits, _i.e._ `\"other\"`, `\"invalidated\"`, `\"reported\"`, `\"validated\"` . Those split names can be gives as shown here for example:\r\n> \r\n> https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L124\r\n> \r\n> Also putting @lhoestq in cc here to hear his opinion on the different splits. @lhoestq Common Voicie is a crowd collected dataset where if a collected data sample did not receive enough \"up_votes\" from the community -> then it is (If I understood it correctly) marked as invalid -> hence the file `\"invalidated.tsv\"`. 
I think this is still useful data, so I would include it what do you think?\r\n> \r\n> @BirgerMoell let me know if you have any more questions :-)\r\n\r\nI think reporting is a separate feature. People can help annotate the data and then they can report things while annotating.\r\nhttps://commonvoice.mozilla.org/sv-SE/listen\r\n\r\nHere is the interface that shows reporting and the thumbs up and down which gives upvotes and downvotes.\r\n<img src=\"https://i.imgur.com/utWjszt.png\" height=\"800px\">\r\n", "I added splits and features. I'm not sure how you want me to generate dummy data for all the languages?", "Hey @BirgerMoell,\r\n\r\nI tweaked your dataset file a bit to have a first working version. To test this dataset downloading script, you can do the following:\r\n\r\n- 1) Download the Common Voice Georgian dataset from https://commonvoice.mozilla.org/en/datasets (It's pretty small which is why I chose it)\r\n- 2) Run the following command using this branch: \r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"./../datasets/datasets/common_voice\", \"Georgian\", data_dir=\"./cv-corpus-6.1-2020-12-11/ka/\", split=\"train\")\r\n```\r\n\r\nNote that I'm loading a local version of the dataset script (`\"./../datasets/datasets/common_voice/\"` points to the folder in your branch) and that I also insert the downloaded data with the `data_dir` arg.\r\n\r\n-> You'll see that the data is correctly loaded and that `ds` contains all the information we need.\r\n\r\nNow there are a lot of different datasets on Common Voice, so it probably takes too much time to test all of those, but maybe you can test whether the current script works as well *e.g.* for Swedish, 3,4 other languages.\r\n\r\nIt would be very nice if we can use the exact same structure for all languages, meaning that we don't have to change the `datasets.Features(...)` structure depending on the language, but can use the exact same one for every language.\r\n\r\nIf everything works as expected we can then go over to cleaning the script and seeing how to add dummy data tests for it." ]
null
[]
Common voice
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1886/timeline
Started filling out information about the dataset and a dataset card. To do Create tagging file Update the common_voice.py file with more information
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1886.diff", "html_url": "https://github.com/huggingface/datasets/pull/1886", "merged_at": "2021-03-09T18:51:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/1886.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1886" }
809,221,885
https://api.github.com/repos/huggingface/datasets/issues/1886/comments
MDExOlB1bGxSZXF1ZXN0NTc0MTE5ODcz
null
1,886
https://api.github.com/repos/huggingface/datasets/issues/1886/events
true
closed
2021-02-15T23:46:39Z
null
https://api.github.com/repos/huggingface/datasets/issues/1885
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1885/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1885/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
https://github.com/huggingface/datasets/pull/1885
[]
false
2021-02-16T16:22:19Z
2021-02-16T11:44:12Z
null
[]
null
[]
add missing info on how to add large files
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1885/timeline
Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR is attempting to tell others how to do the same if they need to. @lhoestq
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1885.diff", "html_url": "https://github.com/huggingface/datasets/pull/1885", "merged_at": "2021-02-16T11:44:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/1885.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1885" }
808,881,501
https://api.github.com/repos/huggingface/datasets/issues/1885/comments
MDExOlB1bGxSZXF1ZXN0NTczODQyNzcz
null
1,885
https://api.github.com/repos/huggingface/datasets/issues/1885/events
true
closed
2021-02-15T18:55:25Z
null
https://api.github.com/repos/huggingface/datasets/issues/1884
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1884/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://github.com/huggingface/datasets/pull/1884
[]
false
2021-07-30T11:01:18Z
2021-07-30T11:01:18Z
null
[]
null
[]
dtype fix when using numpy arrays
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1884/timeline
As discussed in #625 this fix lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1884.diff", "html_url": "https://github.com/huggingface/datasets/pull/1884", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1884.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1884" }
808,755,894
https://api.github.com/repos/huggingface/datasets/issues/1884/comments
MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5
null
1,884
https://api.github.com/repos/huggingface/datasets/issues/1884/events
true
closed
2021-02-15T18:44:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/1883
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1883/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
https://github.com/huggingface/datasets/pull/1883
[]
false
2021-02-24T14:54:49Z
2021-02-24T14:53:26Z
null
[ "@lhoestq I am not sure how to test `dictionary_encode_column` (in-place version was not tested before)", "I can take a look at dictionary_encode_column tomorrow.\r\nAlthough it's likely that it doesn't work then. It was added at the beginning of the lib and never tested nor used afaik.", "Now let's update the documentation to use the new methods x)" ]
null
[]
Add not-in-place implementations for several dataset transforms
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1883/timeline
Should we deprecate in-place versions of such methods?
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1883.diff", "html_url": "https://github.com/huggingface/datasets/pull/1883", "merged_at": "2021-02-24T14:53:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1883.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1883" }
808,750,623
https://api.github.com/repos/huggingface/datasets/issues/1883/comments
MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz
null
1,883
https://api.github.com/repos/huggingface/datasets/issues/1883/events
true
open
2021-02-15T17:36:24Z
null
https://api.github.com/repos/huggingface/datasets/issues/1882
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1882/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1882/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/1882
[]
false
2022-07-06T15:19:47Z
null
null
[ "@lhoestq I have refactorized the logic. Instead of the previous hierarchy call (local temp file opening -> remote call -> use again temp local file logic but from within the remote caller scope), now it is flattened. Schematically:\r\n```python\r\nwith src.open() as src_file, dst.open() as dst_file:\r\n src_file.fetch(dst_file)\r\n```\r\n\r\nI have created `RemotePath` (analogue to Path) with method `.open()` that returns `FtpFile`/`HttpFile` (analogue to file-like).\r\n\r\nNow I am going to implement `RemotePath.exists()` method (analogue to the Path's method) to check if remote resource is accessible, using `Ftp/Http.head()`.", "Quick update on this one:\r\nwe discussed offline with @albertvillanova on this PR and I think using `fsspec` can help a lot, since it already implements many parts of the abstraction we need to have nice download tools for both http and ftp (and others !)" ]
null
[]
Create Remote Manager
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1882/timeline
Refactoring to separate the concern of remote (HTTP/FTP requests) management.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1882.diff", "html_url": "https://github.com/huggingface/datasets/pull/1882", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1882.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1882" }
808,716,576
https://api.github.com/repos/huggingface/datasets/issues/1882/comments
MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw
null
1,882
https://api.github.com/repos/huggingface/datasets/issues/1882/events
true
closed
2021-02-15T14:20:15Z
null
https://api.github.com/repos/huggingface/datasets/issues/1881
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1881/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1881/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4", "events_url": "https://api.github.com/users/pminervini/events{/privacy}", "followers_url": "https://api.github.com/users/pminervini/followers", "following_url": "https://api.github.com/users/pminervini/following{/other_user}", "gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pminervini", "id": 227357, "login": "pminervini", "node_id": "MDQ6VXNlcjIyNzM1Nw==", "organizations_url": "https://api.github.com/users/pminervini/orgs", "received_events_url": "https://api.github.com/users/pminervini/received_events", "repos_url": "https://api.github.com/users/pminervini/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pminervini/subscriptions", "type": "User", "url": "https://api.github.com/users/pminervini" }
https://github.com/huggingface/datasets/pull/1881
[]
false
2021-02-15T15:09:49Z
2021-02-15T15:09:48Z
null
[]
null
[]
`list_datasets()` returns a list of strings, not objects
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1881/timeline
Here and there in the docs there is still stuff like this: ```python >>> datasets_list = list_datasets() >>> print(', '.join(dataset.id for dataset in datasets_list)) ``` However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1881.diff", "html_url": "https://github.com/huggingface/datasets/pull/1881", "merged_at": "2021-02-15T15:09:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/1881.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1881" }
808,578,200
https://api.github.com/repos/huggingface/datasets/issues/1881/comments
MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw
null
1,881
https://api.github.com/repos/huggingface/datasets/issues/1881/events
true
closed
2021-02-15T14:00:18Z
null
https://api.github.com/repos/huggingface/datasets/issues/1880
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1880/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1880/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1880
[]
false
2021-02-15T14:18:19Z
2021-02-15T14:18:18Z
null
[]
null
[]
Update multi_woz_v22 checksums
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1880/timeline
As noticed in #1876 the checksums of this dataset are outdated. I updated them in this PR
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1880.diff", "html_url": "https://github.com/huggingface/datasets/pull/1880", "merged_at": "2021-02-15T14:18:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/1880.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1880" }
808,563,439
https://api.github.com/repos/huggingface/datasets/issues/1880/comments
MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0
null
1,880
https://api.github.com/repos/huggingface/datasets/issues/1880/events
true
closed
2021-02-15T13:29:40Z
null
https://api.github.com/repos/huggingface/datasets/issues/1879
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1879/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1879/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/1879
[]
false
2021-02-19T18:35:14Z
2021-02-19T18:35:14Z
null
[ "Hi @lhoestq. If you agree to merge this, I will start separating the logic for NestedDataStructure.map ;)" ]
null
[]
Replace flatten_nested
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1879/timeline
Replace `flatten_nested` with `NestedDataStructure.flatten`. This is a first step towards having all NestedDataStructure logic as a separated concern, independent of the caller/user of the data structure. Eventually, all checks (whether the underlying data is list, dict, etc.) will be only inside this class. I have also generalized the flattening, and now it handles multiple levels of nesting.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1879.diff", "html_url": "https://github.com/huggingface/datasets/pull/1879", "merged_at": "2021-02-19T18:35:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/1879.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1879" }
808,541,442
https://api.github.com/repos/huggingface/datasets/issues/1879/comments
MDExOlB1bGxSZXF1ZXN0NTczNTY1NDAx
null
1,879
https://api.github.com/repos/huggingface/datasets/issues/1879/events
true
closed
2021-02-15T13:10:42Z
null
https://api.github.com/repos/huggingface/datasets/issues/1878
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1878/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1878/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anton-l", "id": 26864830, "login": "anton-l", "node_id": "MDQ6VXNlcjI2ODY0ODMw", "organizations_url": "https://api.github.com/users/anton-l/orgs", "received_events_url": "https://api.github.com/users/anton-l/received_events", "repos_url": "https://api.github.com/users/anton-l/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "type": "User", "url": "https://api.github.com/users/anton-l" }
https://github.com/huggingface/datasets/pull/1878
[]
false
2021-02-15T19:39:41Z
2021-02-15T14:18:09Z
null
[ "Hey @anton-l,\r\n\r\nThanks a lot for the very clean integration!\r\n\r\n1) I think we should now start having \"automatic-speech-recognition\" as a label in the dataset tagger (@yjernite is it easy to add?). But we can surely add this dataset with the tag you've added and then later change the label to `asr` \r\n\r\n2) That's perfect! Yeah good question - we're currently thinking about a better design with @lhoestq \r\n\r\n3) Again tagging @yjernite & @lhoestq here - guess we should add this license though!", "Thanks @anton-l for adding this one :)\r\nAbout the points you mentioned:\r\n1. Sure as soon as we've updated the tag sets in https://github.com/huggingface/datasets-tagging/blob/main/task_set.json, we can update the tags in this dataset card and also in the other audio dataset card.\r\n2. For now we just try to have them as small as possible but we may switch to S3/LFS at one point indeed\r\n3. If it's not part of the license set at https://github.com/huggingface/datasets-tagging/blob/main/license_set.json we can add it to this license set\r\n\r\nFor now it's ok to have the other-* tags but we'll update them very soon", "Let's merge this one and then we'll update the tags for the audio datasets. We'll probably also add something like this:\r\n```\r\ntype:\r\n- text\r\n- audio\r\n```\r\n\r\nThank you so much for adding this one, good job !" ]
null
[]
Add LJ Speech dataset
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1878/timeline
This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/) As requested by #1841 The ASR format is based on #1767 There are a couple of quirks that should be addressed: - I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by paperswithcode). Since the number of speech datasets is about to grow, maybe these categories should be added to the main list? - Similarly to #1767 this dataset uses only a single dummy sample to reduce the zip size (`wav`s are quite heavy). Is there a plan to allow LFS or S3 usage for dummy data in the repo? - The dataset is distributed under the Public Domain license, which is not used anywhere else in the repo, AFAIK. Do you think Public Domain is worth adding to the tagger app as well? Pinging @patrickvonplaten to review
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1878.diff", "html_url": "https://github.com/huggingface/datasets/pull/1878", "merged_at": "2021-02-15T14:18:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/1878.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1878" }
808,526,883
https://api.github.com/repos/huggingface/datasets/issues/1878/comments
MDExOlB1bGxSZXF1ZXN0NTczNTUyODk3
null
1,878
https://api.github.com/repos/huggingface/datasets/issues/1878/events
true
closed
2021-02-15T11:39:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/1877
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1877/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1877/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/issues/1877
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2021-03-26T16:51:58Z
2021-03-26T16:51:58Z
null
[ "I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy/pickle. Then have another wrapper that takes the concatenation of InMemoryTable/MemoryMappedTable objects.\r\n\r\nWhat's important here is that concatenating two tables into one doesn't double the memory used (`total_allocated_bytes()` stays the same).", "Hi @lhoestq @albertvillanova,\r\n\r\nI checked the linked issues and PR, this seems like a great idea. Would you mind elaborating on the in-memory and memory-mapped datasets? \r\nBased on my understanding, it is something like this, please correct me if I am wrong:\r\n1. For in-memory datasets, we don't have any dataset files so the entire dataset is pickled to the cache during loading, and then whenever required it is unpickled .\r\n2. For on-disk/memory-mapped datasets, we have the data files provided, so they can be re-loaded from the paths, and only the file-paths are stored while pickling.\r\n\r\nIf this is correct, will the feature also handle pickling/unpickling of a concatenated dataset? Will this be cached?\r\n\r\nThis also leads me to ask whether datasets are chunked during pickling? \r\n\r\nThanks,\r\nGunjan", "Hi ! Yes you're totally right about your two points :)\r\n\r\nAnd in the case of a concatenated dataset, then we should reload each sub-table depending on whether it's in-memory or memory mapped. That means the dataset will be made of several blocks in order to keep track of what's from memory and what's memory mapped. This allows to pickle/unpickle concatenated datasets", "Hi @lhoestq\r\n\r\nThanks, that sounds nice. Can you explain where the issue of the double memory may arise? Also, why is the existing `concatenate_datasets` not sufficient for this purpose?", "Hi @lhoestq,\r\n\r\nWill the `add_item` feature also help with lazy writing (or no caching) during `map`/`filter`?", "> Can you explain where the issue of the double memory may arise?\r\n\r\nWe have to keep each block (in-memory vs memory mapped) separated in order to be able to reload them with pickle.\r\nOn the other hand we also need to have the full table from mixed in-memory and memory mapped data in order to iterate or extract data conveniently. That means that each block is accessible twice: once in the full table, and once in the separated blocks. But since pyarrow tables concatenation doesn't double the memory, then building the full table doesn't cost memory which is what we want :)\r\n\r\n> Also, why is the existing concatenate_datasets not sufficient for this purpose?\r\n\r\nThe existing `concatenate_datasets` doesn't support having both in-memory and memory mapped data together (there's no fancy block separation logic). It works for datasets fully in-memory or fully memory mapped but not a mix of the two.\r\n\r\n> Will the add_item feature also help with lazy writing (or no caching) during map/filter?\r\n\r\nIt will enable the implementation of the fast, masked filter from this discussion: https://github.com/huggingface/datasets/issues/1949\r\nHowever I don't think this will affect map." ]
completed
[]
Allow concatenation of both in-memory and on-disk datasets
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1877/timeline
This is a prerequisite for the addition of the `add_item` feature (see #1870). Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files). This assumption is used for pickling for example: - in-memory dataset can just be pickled/unpickled in-memory - on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling Maybe let's have a design that allows a Dataset to have a Table that can be rebuilt from heterogenous sources like in-memory tables or on-disk tables ? This could also be further extended in the future One idea would be to define a list of sources and each source implements a way to reload its corresponding pyarrow Table. Then the dataset would be the concatenation of all these tables. Depending on the source type, the serialization using pickle would be different. In-memory data would be copied while on-disk data would simply be replaced by the path to these data. If you have some ideas you would like to share about the design/API feel free to do so :) cc @albertvillanova
https://api.github.com/repos/huggingface/datasets
null
808,462,272
https://api.github.com/repos/huggingface/datasets/issues/1877/comments
MDU6SXNzdWU4MDg0NjIyNzI=
null
1,877
https://api.github.com/repos/huggingface/datasets/issues/1877/events
false
closed
2021-02-14T19:14:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/1876
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1876/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1876/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5945326?v=4", "events_url": "https://api.github.com/users/Vincent950129/events{/privacy}", "followers_url": "https://api.github.com/users/Vincent950129/followers", "following_url": "https://api.github.com/users/Vincent950129/following{/other_user}", "gists_url": "https://api.github.com/users/Vincent950129/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Vincent950129", "id": 5945326, "login": "Vincent950129", "node_id": "MDQ6VXNlcjU5NDUzMjY=", "organizations_url": "https://api.github.com/users/Vincent950129/orgs", "received_events_url": "https://api.github.com/users/Vincent950129/received_events", "repos_url": "https://api.github.com/users/Vincent950129/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Vincent950129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vincent950129/subscriptions", "type": "User", "url": "https://api.github.com/users/Vincent950129" }
https://github.com/huggingface/datasets/issues/1876
[]
false
2021-08-04T18:08:00Z
2021-08-04T18:08:00Z
null
[ "Thanks for reporting !\r\nThis is due to the changes made in the data files in the multiwoz repo: https://github.com/budzianowski/multiwoz/pull/59\r\nI'm opening a PR to update the checksums of the data files.", "I just merged the fix. It will be available in the new release of `datasets` later today.\r\nYou'll be able to get the new version with\r\n```\r\npip install --upgrade datasets\r\n```", "Hi, I still meet the error when loading the datasets after upgradeing datasets.\r\n\r\nraise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']", "This must be related to https://github.com/budzianowski/multiwoz/pull/72\r\nThose files have changed, let me update the checksums for this dataset.\r\n\r\nFor now you can use `ignore_verifications=True` in `load_dataset` to skip the checksum verification." ]
completed
[]
load_dataset("multi_woz_v22") NonMatchingChecksumError
NONE
https://api.github.com/repos/huggingface/datasets/issues/1876/timeline
Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError. To reproduce: `dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')` This will give the following error: ``` raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_003.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_004.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_005.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_006.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_007.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_008.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_009.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_010.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_012.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_013.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_014.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_015.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_016.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_017.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_002.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_002.json'] ```
https://api.github.com/repos/huggingface/datasets
null
808,025,859
https://api.github.com/repos/huggingface/datasets/issues/1876/comments
MDU6SXNzdWU4MDgwMjU4NTk=
null
1,876
https://api.github.com/repos/huggingface/datasets/issues/1876/events
false
closed
2021-02-14T04:38:35Z
null
https://api.github.com/repos/huggingface/datasets/issues/1875
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1875/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1875/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6061911?v=4", "events_url": "https://api.github.com/users/ddhruvkr/events{/privacy}", "followers_url": "https://api.github.com/users/ddhruvkr/followers", "following_url": "https://api.github.com/users/ddhruvkr/following{/other_user}", "gists_url": "https://api.github.com/users/ddhruvkr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ddhruvkr", "id": 6061911, "login": "ddhruvkr", "node_id": "MDQ6VXNlcjYwNjE5MTE=", "organizations_url": "https://api.github.com/users/ddhruvkr/orgs", "received_events_url": "https://api.github.com/users/ddhruvkr/received_events", "repos_url": "https://api.github.com/users/ddhruvkr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ddhruvkr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ddhruvkr/subscriptions", "type": "User", "url": "https://api.github.com/users/ddhruvkr" }
https://github.com/huggingface/datasets/pull/1875
[]
false
2021-02-17T15:56:27Z
2021-02-17T15:56:27Z
null
[]
null
[]
Adding sari metric
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1875/timeline
Adding SARI metric that is used in evaluation of text simplification. This is required as part of the GEM benchmark.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1875.diff", "html_url": "https://github.com/huggingface/datasets/pull/1875", "merged_at": "2021-02-17T15:56:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1875.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1875" }
807,887,267
https://api.github.com/repos/huggingface/datasets/issues/1875/comments
MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0
null
1,875
https://api.github.com/repos/huggingface/datasets/issues/1875/events
true
closed
2021-02-13T17:02:04Z
null
https://api.github.com/repos/huggingface/datasets/issues/1874
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucadiliello", "id": 23355969, "login": "lucadiliello", "node_id": "MDQ6VXNlcjIzMzU1OTY5", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "repos_url": "https://api.github.com/users/lucadiliello/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "type": "User", "url": "https://api.github.com/users/lucadiliello" }
https://github.com/huggingface/datasets/pull/1874
[]
false
2021-03-04T10:38:22Z
2021-03-04T10:38:22Z
null
[ "is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.", "I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos", "I've resolved some requirements, but I cannot create dummy data. The dataset works as follows: for each language pair `<lang1>-<lang2>` 3 files are downloaded:\r\n- dataset for `<lang1>`\r\n- dataset for `<lang2>`\r\n- alignments between `<lang1>` and `<lang2>`\r\n\r\nSuppose we work with the `bg-cs` language pair. Then, the dataset will download three `gzip` files which should be decompressed. I do not understand the relation between the folders created by the script to create dummy data and the original data provided by the download manager.", "Hi ! Indeed the data files structure of this dataset looks very specific.\r\nThe command `datasets-cli dummy_data ./datasets/europarl_bilingual` shows some instructions for each split but let me add more details.\r\n\r\nFirst things to know is that the dummy data files need to be uncompressed data, so for example for the file `bg.zip` you should actually have one folder with all the xml files in it instead. In the same way, `bg-cs.xml.gz` must be replaced by an actual uncompressed xml file.\r\n\r\nLet's take the bg-cs config as an example. To make the dummy data you need to:\r\n- go to `./datasets/europarl_bilingual/dummy/bg-cs/8.0.0` and create a folder named `dummy_data`. Then go inside this folder\r\n- create a text file named `bg-cs.xml.gz` containing xml content (so without .gz compression). The xml content must have the same structure as the original `bg-cs.zml` but only include 1 `linkGrp` entry. You can pick one entry from the original `bg-cs.xml` file. Let's say this entry is about this file: `ep-06-01-16-003.xml`\r\n- create a folder named `bg.zip` and inside this folder add one file Europarl/raw/bg/ep-06-01-16-003.xml. You can pick the xml file from the original `bg.zip` archive.\r\n- create a folder named `cs.zip` and inside this folder add one file Europarl/raw/cs/ep-06-01-16-003.xml. You can pick the xml file from the original `cs.zip` archive.\r\n- zip the `dummy_data` into `dummy_data.zip`\r\n\r\nAt this point you have dummy data files to generate 1 example which is what we want to be able to test the dataset script `europarl_bilingual.py` with pytest. \r\n\r\nIn particular this will make this test pass:\r\n```\r\npytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_europarl_bilingual\r\n```\r\n\r\nIdeally it would be awesome to have dummy data for all the different configs so if we manage to make a script that generates all of it automatically that would be perfect. However since the structure is not trivial, another option would be to only have the dummy data for only 1 or 2 configs, like what we do for [bible_para](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py) for example. In `bible_para` only a few configurations are tested. As you can see there is only 6 configs in the `BUILDER_CONFIGS` attribute. \r\nAll the other configs can still be used, here is what is said inside the dataset card of bible_para:\r\n```\r\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\r\nYou can find the valid pairs in Homepage section of Dataset Description: http://opus.nlpl.eu/bible-uedin.php\r\nE.g.\r\n\r\n`dataset = load_dataset(\"bible_para\", lang1=\"fi\", lang2=\"hi\")`\r\n```\r\nIn this case the configuration \"fi-hi\" is simply created on the fly, instead of being picked from the `BUILDER_CONFIGS` list.\r\n\r\nI hope this helps, let me know if you have questions or if I can help", "I already created the scripts to create reduced versions of the data. What I didn't understand was how to put files in the dummy_data folder because, as you noticed, some file decompress to a nested tree structure. I will now try again with your suggestions!", "Is there something else I should do? If not can this be integrated?", "Thanks a lot !!\r\nSince the set of all the dummy data files is quite big I only kept a few of them. If we had kept them all the size of the `datasets` repo would have increased too much :/\r\nSo I did the same as for `bible_para`: only keep a few configurations in BUILDER_CONFIGS and have all the other pairs loadable with the lang1 and lang2 parameters like this:\r\n\r\n`dataset = load_dataset(\"europarl_bilingual\", lang1=\"fi\", lang2=\"fr\")`" ]
null
[]
Adding Europarl Bilingual dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1874/timeline
Implementation of Europarl bilingual dataset from described [here](https://opus.nlpl.eu/Europarl.php). This dataset allows to use every language pair detailed in the original dataset. The loading script manages also the small errors contained in the original dataset (in very rare cases (1 over 10M) there are some keys that references to inexistent sentences). I chose to follow the the style of a similar dataset available in this repository: `multi_para_crawl`.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1874.diff", "html_url": "https://github.com/huggingface/datasets/pull/1874", "merged_at": "2021-03-04T10:38:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/1874.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1874" }
807,786,094
https://api.github.com/repos/huggingface/datasets/issues/1874/comments
MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy
null
1,874
https://api.github.com/repos/huggingface/datasets/issues/1874/events
true
closed
2021-02-13T13:34:27Z
null
https://api.github.com/repos/huggingface/datasets/issues/1873
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1873/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1873/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cstorm125", "id": 15519308, "login": "cstorm125", "node_id": "MDQ6VXNlcjE1NTE5MzA4", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "repos_url": "https://api.github.com/users/cstorm125/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "type": "User", "url": "https://api.github.com/users/cstorm125" }
https://github.com/huggingface/datasets/pull/1873
[]
false
2021-02-16T14:21:58Z
2021-02-16T14:21:58Z
null
[]
null
[]
add iapp_wiki_qa_squad
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1873/timeline
`iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/191/192 articles.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1873.diff", "html_url": "https://github.com/huggingface/datasets/pull/1873", "merged_at": "2021-02-16T14:21:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/1873.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1873" }
807,750,745
https://api.github.com/repos/huggingface/datasets/issues/1873/comments
MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy
null
1,873
https://api.github.com/repos/huggingface/datasets/issues/1873/events
true
closed
2021-02-13T09:14:35Z
null
https://api.github.com/repos/huggingface/datasets/issues/1872
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1872/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1872/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/villmow", "id": 2743060, "login": "villmow", "node_id": "MDQ6VXNlcjI3NDMwNjA=", "organizations_url": "https://api.github.com/users/villmow/orgs", "received_events_url": "https://api.github.com/users/villmow/received_events", "repos_url": "https://api.github.com/users/villmow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "type": "User", "url": "https://api.github.com/users/villmow" }
https://github.com/huggingface/datasets/issues/1872
[]
false
2021-03-30T14:01:45Z
2021-03-30T14:01:45Z
null
[ "Hi ! Indeed if you add a column to a formatted dataset, then the new dataset gets a new formatting in which:\r\n```\r\nnew formatted columns = (all columns - previously unformatted columns)\r\n```\r\nTherefore the new column is going to be formatted using the `torch` formatting.\r\n\r\nIf you want your new column to be unformatted you can re-run this line:\r\n```python\r\ndata.set_format(\"torch\", columns=[\"some_integer_column1\", \"some_integer_column2\"], output_all_columns=True)\r\n```", "Hi, thanks that solved my problem. Maybe mention that in the documentation. ", "Ok cool :) \r\nAlso I just did a PR to mention this behavior in the documentation", "Closed by #1888" ]
completed
[]
Adding a new column to the dataset after set_format was called
NONE
https://api.github.com/repos/huggingface/datasets/issues/1872/timeline
Hi, thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if its a problem on my side. I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True)`. This converts the integer columns into tensors, but keeps the lists of strings as they are. I then call `map` to add a new column to my dataset, which is a **list of strings**. Once I iterate through my dataset, I get an error that the new column can't be converted into a tensor (which is probably caused by `set_format`). Below some pseudo code: ```python def augment_func(sample: Dict) -> Dict: # do something return { "some_integer_column1" : augmented_data["some_integer_column1"], # <-- tensor "some_integer_column2" : augmented_data["some_integer_column2"], # <-- tensor "NEW_COLUMN": targets, # <-- list of strings } data = datasets.load_dataset(__file__, data_dir="...", split="train") data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True) augmented_dataset = data.map(augment_func, batched=False) for sample in augmented_dataset: print(sample) # fails ``` and the exception: ```python Traceback (most recent call last): File "dataset.py", line 487, in <module> main() File "dataset.py", line 471, in main for sample in augmented_dataset: File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 697, in __iter__ yield self._getitem( File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1069, in _getitem outputs = self._convert_outputs( File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 890, in _convert_outputs v = map_nested(command, v, **map_nested_kwargs) File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp> return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp> return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 851, in command return torch.tensor(x, **format_kwargs) TypeError: new(): invalid data type 'str' ``` Thanks!
https://api.github.com/repos/huggingface/datasets
null
807,711,935
https://api.github.com/repos/huggingface/datasets/issues/1872/comments
MDU6SXNzdWU4MDc3MTE5MzU=
null
1,872
https://api.github.com/repos/huggingface/datasets/issues/1872/events
false
closed
2021-02-13T07:31:23Z
null
https://api.github.com/repos/huggingface/datasets/issues/1871
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1871/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/frankier", "id": 299380, "login": "frankier", "node_id": "MDQ6VXNlcjI5OTM4MA==", "organizations_url": "https://api.github.com/users/frankier/orgs", "received_events_url": "https://api.github.com/users/frankier/received_events", "repos_url": "https://api.github.com/users/frankier/repos", "site_admin": false, "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "type": "User", "url": "https://api.github.com/users/frankier" }
https://github.com/huggingface/datasets/pull/1871
[]
false
2021-03-08T10:12:45Z
2021-03-08T10:12:45Z
null
[ "Thanks for the changes :)\r\nmerging" ]
null
[]
Add newspop dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1871/timeline
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1871.diff", "html_url": "https://github.com/huggingface/datasets/pull/1871", "merged_at": "2021-03-08T10:12:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/1871.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1871" }
807,697,671
https://api.github.com/repos/huggingface/datasets/issues/1871/comments
MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz
null
1,871
https://api.github.com/repos/huggingface/datasets/issues/1871/events
true
closed
2021-02-12T15:03:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/1870
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1870/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1870/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/1870
[]
false
2021-04-23T10:01:31Z
2021-04-23T10:01:31Z
{ "closed_at": "2021-05-31T16:20:53Z", "closed_issues": 3, "created_at": "2021-04-09T13:16:31Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-05-14T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/3", "id": 6644287, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels", "node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==", "number": 3, "open_issues": 0, "state": "closed", "title": "1.7", "updated_at": "2021-05-31T16:20:53Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/3" }
[ "Thanks @lhoestq for your remarks. Yes, I agree there are still many issues to be tackled... This PR is just a starting point, so that we can discuss how Dataset should be generalized.", "Sure ! I opened an issue #1877 so we can discuss this specific aspect :)", "I am going to implement this consolidation step in #2151.", "Sounds good !", "I retake this PR once the consolidation step is already implemented by #2151." ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Implement Dataset add_item
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1870/timeline
Implement `Dataset.add_item`. Close #1854.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1870.diff", "html_url": "https://github.com/huggingface/datasets/pull/1870", "merged_at": "2021-04-23T10:01:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/1870.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1870" }
807,306,564
https://api.github.com/repos/huggingface/datasets/issues/1870/comments
MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4
null
1,870
https://api.github.com/repos/huggingface/datasets/issues/1870/events
true
closed
2021-02-12T11:28:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/1869
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1869/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1869/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1869
[]
false
2021-02-12T16:13:09Z
2021-02-12T16:13:08Z
null
[]
null
[]
Remove outdated commands in favor of huggingface-cli
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1869/timeline
Removing the old user commands since `huggingface_hub` is going to be used instead. cc @julien-c
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1869.diff", "html_url": "https://github.com/huggingface/datasets/pull/1869", "merged_at": "2021-02-12T16:13:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/1869.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1869" }
807,159,835
https://api.github.com/repos/huggingface/datasets/issues/1869/comments
MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy
null
1,869
https://api.github.com/repos/huggingface/datasets/issues/1869/events
true
closed
2021-02-12T10:55:35Z
null
https://api.github.com/repos/huggingface/datasets/issues/1868
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1868/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1868/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1868
[]
false
2021-02-12T11:03:07Z
2021-02-12T11:03:06Z
null
[]
null
[]
Update oscar sizes
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1868/timeline
This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1868.diff", "html_url": "https://github.com/huggingface/datasets/pull/1868", "merged_at": "2021-02-12T11:03:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1868.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1868" }
807,138,159
https://api.github.com/repos/huggingface/datasets/issues/1868/comments
MDExOlB1bGxSZXF1ZXN0NTcyNDM2MjA0
null
1,868
https://api.github.com/repos/huggingface/datasets/issues/1868/events
true
closed
2021-02-12T10:38:31Z
null
https://api.github.com/repos/huggingface/datasets/issues/1867
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1867/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1867/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avacaondata", "id": 35173563, "login": "avacaondata", "node_id": "MDQ6VXNlcjM1MTczNTYz", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "repos_url": "https://api.github.com/users/avacaondata/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "type": "User", "url": "https://api.github.com/users/avacaondata" }
https://github.com/huggingface/datasets/issues/1867
[]
false
2021-03-01T14:04:24Z
2021-02-24T12:00:43Z
null
[ "Hi @alejandrocros it looks like an incompatibility with the current Trainer @sgugger \r\nIndeed currently the Trainer of `transformers` doesn't support a dataset with a transform\r\n\r\nIt looks like it comes from this line: https://github.com/huggingface/transformers/blob/f51188cbe74195c14c5b3e2e8f10c2f435f9751a/src/transformers/trainer.py#L442\r\n\r\nThis line sets the format to not return certain unused columns. But this has two issues:\r\n1. it forgets to also set the format_kwargs (this causes the error you got):\r\n```python\r\ndataset.set_format(\"torch\", columns=columns, format_kwargs=dataset.format[\"format_kwargs\"])\r\n```\r\n2. the Trainer wants to keep only the fields that are used as input for a model. However for a dataset with a transform, the output fields are often different from the columns fields. For example from a column \"text\" in the dataset, the strings can be transformed on-the-fly into \"input_ids\". If you want your dataset to only output certain fields and not other you must change your transform function.\r\n", "FYI that option can be removed with `remove_unused_columns = False` in your `TrainingArguments`, so there is a workaround @alexvaca0 while the fix in `Trainer` is underway.\r\n\r\n@lhoestq I think I will just use the line you suggested and if someone is using the columns that are removed in their transform they will need to change `remove_unused_columns` to `False`. We might switch the default of that argument in the next version if that proves too bug-proof.", "I've tried your solutions @sgugger @lhoestq and the good news is that it throws no error. However, TPU training is taking forever, in 1 hour it has only trained 1 batch of 8192 elements, which doesn't make much sense... Is it possible that \"on the fly\" tokenization of batches is slowing down TPU training to that extent?", "I'm pretty sure this is because of padding but @sgugger might know better", "I don't know what the value of `padding` is in your lines of code pasted above so I can't say for sure. The first batch will be very slow on TPU since it compiles everything, so that's normal (1 hour is long but 8192 elements is also large). Then if your batches are not of the same lengths, it will recompile everything at each step instead of using the same graph, which will be very slow, so you should double check you are using padding to make everything the exact same shape. ", "I have tried now on a GPU and it goes smooth! Amazing feature .set_transform() instead of .map()! Now I can pre-train my model without the hard disk limitation. Thanks for your work all HuggingFace team!! :clap: ", "In the end, to make it work I turned to A-100 gpus instead of TPUS, among other changes. Set_transform doesn't work as expected and slows down training very much even in GPUs, and applying map destroys the disk, as it multiplies by 100 the size of the data passed to it (due to inefficient implementation converting strings to int64 floats I guess). For that reason, I chose to use datasets to load the data as text, and then edit the Collator from Transformers to tokenize every batch it receives before processing it. That way, I'm being able to train fast, without memory breaks, without the disk being unnecessarily filled, while making use of GPUs almost all the time I'm paying for them (the map function over the whole dataset took ~15hrs, in which you're not training at all). \r\nI hope this info helps others that are looking for training a language model from scratch cheaply, I'm going to close the issue as the optimal solution I found after many experiments to the problem posted in it is explained above. ", "Great comment @alexvaca0 . I think that we could re-open the issue as a reformulation of why it takes so much space to save the arrow. Saving a 1% of oscar corpus takes more thank 600 GB (it breaks when it pass 600GB because it is the free memory that I have at this moment) when the full dataset is 1,3 TB. I have a 1TB M.2 NVMe disk that I can not train on because the saved .arrow files goes crazily big. If you can share your Collator I will be grateful. " ]
completed
[]
ERROR WHEN USING SET_TRANSFORM()
NONE
https://api.github.com/repos/huggingface/datasets/issues/1867/timeline
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797 However, when I try to use Trainer from transformers with such dataset, it throws an error: ``` TypeError: __init__() missing 1 required positional argument: 'transform' [INFO|trainer.py:357] 2021-02-12 10:18:09,893 >> The following columns in the training set don't have a corresponding argument in `AlbertForMaskedLM.forward` and have been ignored: text. Exception in device=TPU:0: __init__() missing 1 required positional argument: 'transform' Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 368, in _mp_fn main() File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 332, in main data_collator=data_collator, File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 286, in __init__ self._remove_unused_columns(self.train_dataset, description="training") File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 359, in _remove_unused_columns dataset.set_format(type=dataset.format["type"], columns=columns) File "/home/alejandro_vaca/datasets/src/datasets/fingerprint.py", line 312, in wrapper out = func(self, *args, **kwargs) File "/home/alejandro_vaca/datasets/src/datasets/arrow_dataset.py", line 818, in set_format _ = get_formatter(type, **format_kwargs) File "/home/alejandro_vaca/datasets/src/datasets/formatting/__init__.py", line 112, in get_formatter return _FORMAT_TYPES[format_type](**format_kwargs) TypeError: __init__() missing 1 required positional argument: 'transform' ``` The code I'm using: ```{python} def tokenize_function(examples): # Remove empty lines examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()] return tokenizer(examples["text"], padding=padding, truncation=True, max_length=data_args.max_seq_length) datasets.set_transform(tokenize_function) data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=data_args.mlm_probability) # Initialize our Trainer trainer = Trainer( model=model, args=training_args, train_dataset=datasets["train"] if training_args.do_train else None, eval_dataset=datasets["val"] if training_args.do_eval else None, tokenizer=tokenizer, data_collator=data_collator, ) ``` I've installed from source, master branch.
https://api.github.com/repos/huggingface/datasets
null
807,127,181
https://api.github.com/repos/huggingface/datasets/issues/1867/comments
MDU6SXNzdWU4MDcxMjcxODE=
null
1,867
https://api.github.com/repos/huggingface/datasets/issues/1867/events
false
closed
2021-02-12T07:30:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/1866
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1866/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/frankier", "id": 299380, "login": "frankier", "node_id": "MDQ6VXNlcjI5OTM4MA==", "organizations_url": "https://api.github.com/users/frankier/orgs", "received_events_url": "https://api.github.com/users/frankier/received_events", "repos_url": "https://api.github.com/users/frankier/repos", "site_admin": false, "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "type": "User", "url": "https://api.github.com/users/frankier" }
https://github.com/huggingface/datasets/pull/1866
[]
false
2021-02-17T14:22:36Z
2021-02-17T14:22:36Z
null
[ "Thanks for the feedback. All accepted and metadata regenerated." ]
null
[]
Add dataset for Financial PhraseBank
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1866/timeline
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1866.diff", "html_url": "https://github.com/huggingface/datasets/pull/1866", "merged_at": "2021-02-17T14:22:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/1866.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1866" }
807,017,816
https://api.github.com/repos/huggingface/datasets/issues/1866/comments
MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1
null
1,866
https://api.github.com/repos/huggingface/datasets/issues/1866/events
true
closed
2021-02-11T13:26:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/1865
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1865/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1865/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "events_url": "https://api.github.com/users/Valahaar/events{/privacy}", "followers_url": "https://api.github.com/users/Valahaar/followers", "following_url": "https://api.github.com/users/Valahaar/following{/other_user}", "gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Valahaar", "id": 19476123, "login": "Valahaar", "node_id": "MDQ6VXNlcjE5NDc2MTIz", "organizations_url": "https://api.github.com/users/Valahaar/orgs", "received_events_url": "https://api.github.com/users/Valahaar/received_events", "repos_url": "https://api.github.com/users/Valahaar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions", "type": "User", "url": "https://api.github.com/users/Valahaar" }
https://github.com/huggingface/datasets/pull/1865
[]
false
2021-02-19T12:38:09Z
2021-02-12T16:59:44Z
null
[ "Hi !\r\nAbout the problems you mentioned:\r\n- Saving the infos is only done for the configurations inside the BUILDER_CONFIGS. Otherwise you would need to run the scripts on ALL language pairs, which is not what we want.\r\n- Moreover when you're on your branch, please specify the path to your local version of the dataset script, like \"./datasets/open_subtitles\". Otherwise the dataset is loaded from the master branch on github.\r\nHope that clarifies things a bit\r\n\r\nAnd of course feel free to add methods or classmethods to your builder.\r\n", "Great! Thank you :)\r\nI'll close the issue as well." ]
null
[]
Updated OPUS Open Subtitles Dataset with metadata information
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1865/timeline
Close #1844 Problems: - I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be? - Possibly related to the above, I tried doing `pip uninstall datasets && pip install -e ".[dev]"` after the changes, and loading the dataset via `load_dataset("open_subtitles", lang1='hi', lang2='it')` to check if the update worked, but the loaded dataset did not contain the metadata fields (neither in the features nor doing `next(iter(dataset['train']))`). What step(s) did I miss? Questions: - Is it ok to have a `classmethod` in there? I have not seen any in the few other datasets I have checked. I could make it a local method of the `_generate_examples` method, but I'd rather not duplicate the logic...
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1865.diff", "html_url": "https://github.com/huggingface/datasets/pull/1865", "merged_at": "2021-02-12T16:59:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/1865.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1865" }
806,388,290
https://api.github.com/repos/huggingface/datasets/issues/1865/comments
MDExOlB1bGxSZXF1ZXN0NTcxODE2ODI2
null
1,865
https://api.github.com/repos/huggingface/datasets/issues/1865/events
true
closed
2021-02-11T08:18:38Z
null
https://api.github.com/repos/huggingface/datasets/issues/1864
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1864/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1864/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
https://github.com/huggingface/datasets/issues/1864
[]
false
2021-02-11T08:19:51Z
2021-02-11T08:19:51Z
null
[ "Nevermind, this one is already available on the hub under the name `'wino_bias'`: https://huggingface.co/datasets/wino_bias" ]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
Add Winogender Schemas
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1864/timeline
## Adding a Dataset - **Name:** Winogender Schemas - **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems. - **Paper:** https://arxiv.org/abs/1804.09301 - **Data:** https://github.com/rudinger/winogender-schemas (see data directory) - **Motivation:** Testing gender bias in automated coreference resolution systems, improve coreference resolution in general. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
806,172,843
https://api.github.com/repos/huggingface/datasets/issues/1864/comments
MDU6SXNzdWU4MDYxNzI4NDM=
null
1,864
https://api.github.com/repos/huggingface/datasets/issues/1864/events
false
open
2021-02-11T08:16:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/1863
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1863/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1863/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
https://github.com/huggingface/datasets/issues/1863
[]
false
2021-03-07T07:27:13Z
null
null
[ "Hi @NielsRogge I would like to work on this dataset.\r\n\r\nThanks!", "Hi @udapy, are you working on this?" ]
null
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
Add WikiCREM
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1863/timeline
## Adding a Dataset - **Name:** WikiCREM - **Description:** A large unsupervised corpus for coreference resolution. - **Paper:** https://arxiv.org/abs/1905.06290 - **Github repo:**: https://github.com/vid-koci/bert-commonsense - **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3 - **Motivation:** Coreference resolution, common sense reasoning Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
806,171,311
https://api.github.com/repos/huggingface/datasets/issues/1863/comments
MDU6SXNzdWU4MDYxNzEzMTE=
null
1,863
https://api.github.com/repos/huggingface/datasets/issues/1863/events
false
closed
2021-02-10T17:32:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/1862
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1862/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1862/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1862
[]
false
2021-02-10T18:17:48Z
2021-02-10T18:17:47Z
null
[]
null
[]
Fix writing GPU Faiss index
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1862/timeline
As reported in by @corticalstack there is currently an error when we try to save a faiss index on GPU. I fixed that by checking the index `getDevice()` method before calling `index_gpu_to_cpu` Close #1859
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1862.diff", "html_url": "https://github.com/huggingface/datasets/pull/1862", "merged_at": "2021-02-10T18:17:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/1862.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1862" }
805,722,293
https://api.github.com/repos/huggingface/datasets/issues/1862/comments
MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx
null
1,862
https://api.github.com/repos/huggingface/datasets/issues/1862/events
true
closed
2021-02-10T15:44:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/1861
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1861/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1861/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1861
[]
false
2021-02-10T16:15:00Z
2021-02-10T16:14:59Z
null
[]
null
[]
Fix Limit url
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1861/timeline
The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https://github.com/ilmgut/limit_dataset This PR uses the previous commit sha to download the file instead, as suggested by @Paethon Close #1836
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1861.diff", "html_url": "https://github.com/huggingface/datasets/pull/1861", "merged_at": "2021-02-10T16:14:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/1861.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1861" }
805,631,215
https://api.github.com/repos/huggingface/datasets/issues/1861/comments
MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1
null
1,861
https://api.github.com/repos/huggingface/datasets/issues/1861/events
true
closed
2021-02-10T13:24:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/1860
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1860/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1860/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1860
[]
false
2021-02-12T19:13:30Z
2021-02-12T19:13:29Z
null
[ "I just added the steps to share a dataset on the datasets hub. It's highly inspired by the steps to share a model in the `transformers` doc.\r\n\r\nMoreover once the new huggingface_hub is released we can update the version in the setup.py. We also need to update the command to create a dataset repo in the documentation\r\n\r\nI added a few more tests with the \"lhoestq/test\" dataset I added on the hub and it works fine :) ", "Here is the PR adding support for datasets repos in `huggingface_hub`: https://github.com/huggingface/huggingface_hub/pull/14" ]
null
[]
Add loading from the Datasets Hub + add relative paths in download manager
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1860/timeline
With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data. For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files. You can load it using ```python from datasets import load_dataset d = load_dataset("lhoestq/custom_squad") ``` To be able to use the data files that live right next to the dataset script on the repo in the hub, I added relative paths support for the DownloadManager. For example in the repo mentioned above, there are two json files that can be downloaded via ```python _URLS = { "train": "train-v1.1.json", "dev": "dev-v1.1.json", } downloaded_files = dl_manager.download_and_extract(_URLS) ``` To make it work, I set the `base_path` of the DownloadManager to be the parent path of the dataset script (which comes from either a local path or a remote url). I also had to add the auth header of the requests to huggingface.co for private datasets repos. The token is fetched from [huggingface_hub](https://github.com/huggingface/huggingface_hub).
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1860.diff", "html_url": "https://github.com/huggingface/datasets/pull/1860", "merged_at": "2021-02-12T19:13:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/1860.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1860" }
805,510,037
https://api.github.com/repos/huggingface/datasets/issues/1860/comments
MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz
null
1,860
https://api.github.com/repos/huggingface/datasets/issues/1860/events
true
closed
2021-02-10T12:41:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/1859
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1859/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1859/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/3995321?v=4", "events_url": "https://api.github.com/users/corticalstack/events{/privacy}", "followers_url": "https://api.github.com/users/corticalstack/followers", "following_url": "https://api.github.com/users/corticalstack/following{/other_user}", "gists_url": "https://api.github.com/users/corticalstack/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/corticalstack", "id": 3995321, "login": "corticalstack", "node_id": "MDQ6VXNlcjM5OTUzMjE=", "organizations_url": "https://api.github.com/users/corticalstack/orgs", "received_events_url": "https://api.github.com/users/corticalstack/received_events", "repos_url": "https://api.github.com/users/corticalstack/repos", "site_admin": false, "starred_url": "https://api.github.com/users/corticalstack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/corticalstack/subscriptions", "type": "User", "url": "https://api.github.com/users/corticalstack" }
https://github.com/huggingface/datasets/issues/1859
[]
false
2021-02-10T18:32:12Z
2021-02-10T18:17:47Z
null
[ "Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index in on GPU.\r\n\r\nI'm opening a PR", "I fixed this issue. It should work fine now.\r\nFeel free to try it out by installing `datasets` from source.\r\nOtherwise you can wait for the next release of `datasets` (in a few days)", "Thanks for such a quick fix and merge to master, pip installed git master, tested all OK" ]
completed
[]
Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU)
NONE
https://api.github.com/repos/huggingface/datasets/issues/1859/timeline
Error serializing faiss index. Error as follows: `Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index` Note: `torch.cuda.is_available()` reports: ``` Cuda is available cuda:0 ``` Adding index, device=0 for GPU. `dataset.add_faiss_index(column='embeddings', index_name='idx_embeddings', device=0)` However, during a quick debug, self.faiss_index has no attr "device" when checked in` search.py, method save`, so fails to transform gpu index to cpu index. If I add index without device, index is saved OK. ``` def save(self, file: str): """Serialize the FaissIndex on disk""" import faiss # noqa: F811 if ( hasattr(self.faiss_index, "device") and self.faiss_index.device is not None and self.faiss_index.device > -1 ): index = faiss.index_gpu_to_cpu(self.faiss_index) else: index = self.faiss_index faiss.write_index(index, file) ```
https://api.github.com/repos/huggingface/datasets
null
805,479,025
https://api.github.com/repos/huggingface/datasets/issues/1859/comments
MDU6SXNzdWU4MDU0NzkwMjU=
null
1,859
https://api.github.com/repos/huggingface/datasets/issues/1859/events
false
closed
2021-02-10T12:39:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/1858
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1858/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1858/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1858
[]
false
2021-02-10T15:52:30Z
2021-02-10T15:52:29Z
null
[]
null
[]
Clean config getenvs
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1858/timeline
Following #1848: remove double getenv calls and fix one issue with rarfile. cc @albertvillanova
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1858.diff", "html_url": "https://github.com/huggingface/datasets/pull/1858", "merged_at": "2021-02-10T15:52:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/1858.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1858" }
805,477,774
https://api.github.com/repos/huggingface/datasets/issues/1858/comments
MDExOlB1bGxSZXF1ZXN0NTcxMDcxNzIx
null
1,858
https://api.github.com/repos/huggingface/datasets/issues/1858/events
true
closed
2021-02-10T10:39:01Z
null
https://api.github.com/repos/huggingface/datasets/issues/1857
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1857/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1857/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4", "events_url": "https://api.github.com/users/mwrzalik/events{/privacy}", "followers_url": "https://api.github.com/users/mwrzalik/followers", "following_url": "https://api.github.com/users/mwrzalik/following{/other_user}", "gists_url": "https://api.github.com/users/mwrzalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mwrzalik", "id": 1376337, "login": "mwrzalik", "node_id": "MDQ6VXNlcjEzNzYzMzc=", "organizations_url": "https://api.github.com/users/mwrzalik/orgs", "received_events_url": "https://api.github.com/users/mwrzalik/received_events", "repos_url": "https://api.github.com/users/mwrzalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mwrzalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mwrzalik/subscriptions", "type": "User", "url": "https://api.github.com/users/mwrzalik" }
https://github.com/huggingface/datasets/issues/1857
[]
false
2021-08-03T05:06:13Z
2021-08-03T05:06:13Z
null
[ "Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models.\r\nYou can find an example here:\r\nhttps://huggingface.co/datasets/lhoestq/custom_squad/tree/main\r\n\r\nWe'll update the CLI in the coming days and do a new release :)\r\n\r\nAlso cc @julien-c maybe we can make improve the error message ?" ]
completed
[]
Unable to upload "community provided" dataset - 400 Client Error
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1857/timeline
Hi, I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens: ``` $ datasets-cli login $ datasets-cli upload_dataset my_dataset About to upload file /path/to/my_dataset/dataset_infos.json to S3 under filename my_dataset/dataset_infos.json and namespace username About to upload file /path/to/my_dataset/my_dataset.py to S3 under filename my_dataset/my_dataset.py and namespace username Proceed? [Y/n] Y Uploading... This might take a while if files are large 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/presign huggingface.co migrated to a new model hosting system. You need to upgrade to transformers v3.5+ to upload new models. More info at https://discuss.hugginface.co or https://twitter.com/julien_c. Thank you! ``` I'm using the latest releases of datasets and transformers.
https://api.github.com/repos/huggingface/datasets
null
805,391,107
https://api.github.com/repos/huggingface/datasets/issues/1857/comments
MDU6SXNzdWU4MDUzOTExMDc=
null
1,857
https://api.github.com/repos/huggingface/datasets/issues/1857/events
false
closed
2021-02-10T10:00:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/1856
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1856/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1856/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19946372?v=4", "events_url": "https://api.github.com/users/yanxi0830/events{/privacy}", "followers_url": "https://api.github.com/users/yanxi0830/followers", "following_url": "https://api.github.com/users/yanxi0830/following{/other_user}", "gists_url": "https://api.github.com/users/yanxi0830/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yanxi0830", "id": 19946372, "login": "yanxi0830", "node_id": "MDQ6VXNlcjE5OTQ2Mzcy", "organizations_url": "https://api.github.com/users/yanxi0830/orgs", "received_events_url": "https://api.github.com/users/yanxi0830/received_events", "repos_url": "https://api.github.com/users/yanxi0830/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yanxi0830/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanxi0830/subscriptions", "type": "User", "url": "https://api.github.com/users/yanxi0830" }
https://github.com/huggingface/datasets/issues/1856
[]
false
2022-03-15T13:55:24Z
2022-03-15T13:55:23Z
null
[ "Hi ! This issue may be related to #996 \r\nThis comes probably from the Quota Exceeded error from Google Drive.\r\nCan you try again tomorrow and see if you still have the error ?\r\n\r\nOn my side I didn't get any error today with `load_dataset(\"amazon_polarity\")`", "+1 encountering this issue as well", "@lhoestq Hi! I encounter the same error when loading `yelp_review_full`.\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_yp = load_dataset(\"yelp_review_full\")\r\n```\r\n\r\nWhen you say the \"Quota Exceeded from Google drive\". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive?", "+1 Also encountering this issue", "> When you say the \"Quota Exceeded from Google drive\". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive?\r\n\r\nEach file on Google Drive can be downloaded only a certain amount of times per day because of a quota. The quota is reset every day. So if too many people download the dataset the same day, then the quota is likely to exceed.\r\nThat's a really bad limitations of Google Drive and we should definitely find another host for these dataset than Google Drive.\r\nFor now I would suggest to wait and try again later..\r\n\r\nSo far the issue happened with CNN DailyMail, Amazon Polarity and Yelp Reviews. \r\nAre you experiencing the issue with other datasets ? @calebchiam @dtch1997 ", "@lhoestq Gotcha, that is quite problematic...for what it's worth, I've had no issues with the other datasets I tried, such as `yelp_reviews_full` and `amazon_reviews_multi`.", "Same issue today with \"big_patent\", though the symptoms are slightly different.\r\n\r\nWhen running\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nload_dataset(\"big_patent\", split=\"validation\")\r\n```\r\n\r\nI get the following\r\n`FileNotFoundError: Local file \\huggingface\\datasets\\downloads\\6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5\\bigPatentData\\train.tar.gz doesn't exist`\r\n\r\nI had to look into `6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5` (which is a file instead of a folder) and got the following:\r\n\r\n`<!DOCTYPE html><html><head><title>Google Drive - Quota exceeded</title><meta http-equiv=\"content-type\" content=\"text/html; charset=utf-8\"/><link href=&#47;static&#47;doclist&#47;client&#47;css&#47;4033072956&#45;untrustedcontent.css rel=\"stylesheet\" nonce=\"JV0t61Smks2TEKdFCGAUFA\"><link rel=\"icon\" href=\"//ssl.gstatic.com/images/branding/product/1x/drive_2020q4_32dp.png\"/><style nonce=\"JV0t61Smks2TEKdFCGAUFA\">#gbar,#guser{font-size:13px;padding-top:0px !important;}#gbar{height:22px}#guser{padding-bottom:7px !important;text-align:right}.gbh,.gbd{border-top:1px solid #c9d7f1;font-size:1px}.gbh{height:0;position:absolute;top:24px;width:100%}@media all{.gb1{height:22px;margin-right:.5em;vertical-align:top}#gbar{float:left}}a.gb1,a.gb4{text-decoration:underline !important}a.gb1,a.gb4{color:#00c !important}.gbi .gb4{color:#dd8e27 !important}.gbf .gb4{color:#900 !important}\r\n</style><script nonce=\"iNUHigT+ENVQ3UZrLkFtRw\"></script></head><body><div id=gbar><nobr><a target=_blank class=gb1 href=\"https://www.google.fr/webhp?tab=ow\">Search</a> <a target=_blank class=gb1 href=\"http://www.google.fr/imghp?hl=en&tab=oi\">Images</a> <a target=_blank class=gb1 href=\"https://maps.google.fr/maps?hl=en&tab=ol\">Maps</a> <a target=_blank class=gb1 href=\"https://play.google.com/?hl=en&tab=o8\">Play</a> <a target=_blank class=gb1 
href=\"https://www.youtube.com/?gl=FR&tab=o1\">YouTube</a> <a target=_blank class=gb1 href=\"https://news.google.com/?tab=on\">News</a> <a target=_blank class=gb1 href=\"https://mail.google.com/mail/?tab=om\">Gmail</a> <b class=gb1>Drive</b> <a target=_blank class=gb1 style=\"text-decoration:none\" href=\"https://www.google.fr/intl/en/about/products?tab=oh\"><u>More</u> &raquo;</a></nobr></div><div id=guser width=100%><nobr><span id=gbn class=gbi></span><span id=gbf class=gbf></span><span id=gbe></span><a target=\"_self\" href=\"/settings?hl=en_US\" class=gb4>Settings</a> | <a target=_blank href=\"//support.google.com/drive/?p=web_home&hl=en_US\" class=gb4>Help</a> | <a target=_top id=gb_70 href=\"https://accounts.google.com/ServiceLogin?hl=en&passive=true&continue=https://drive.google.com/uc%3Fexport%3Ddownload%26id%3D1J3mucMFTWrgAYa3LuBZoLRR3CzzYD3fa&service=writely&ec=GAZAMQ\" class=gb4>Sign in</a></nobr></div><div class=gbh style=left:0></div><div class=gbh style=right:0></div><div class=\"uc-main\"><div id=\"uc-text\"><p class=\"uc-error-caption\">Sorry, you can&#39;t view or download this file at this time.</p><p class=\"uc-error-subcaption\">Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.</p></div></div><div class=\"uc-footer\"><hr class=\"uc-footer-divider\">&copy; 2021 Google - <a class=\"goog-link\" href=\"//support.google.com/drive/?p=web_home\">Help</a> - <a class=\"goog-link\" href=\"//support.google.com/drive/bin/answer.py?hl=en_US&amp;answer=2450387\">Privacy & Terms</a></div></body></html>`", "A similar issue arises when trying to stream the dataset\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> iter_dset = load_dataset(\"amazon_polarity\", split=\"test\", streaming=True)\r\n>>> iter(iter_dset).__next__()\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n~\\lib\\tarfile.py in nti(s)\r\n 186 s = nts(s, \"ascii\", \"strict\")\r\n--> 187 n = int(s.strip() or \"0\", 8)\r\n 188 except ValueError:\r\n\r\nValueError: invalid literal for int() with base 8: 'e nonce='\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nInvalidHeaderError Traceback (most recent call last)\r\n~\\lib\\tarfile.py in next(self)\r\n 2288 try:\r\n-> 2289 tarinfo = self.tarinfo.fromtarfile(self)\r\n 2290 except EOFHeaderError as e:\r\n\r\n~\\lib\\tarfile.py in fromtarfile(cls, tarfile)\r\n 1094 buf = tarfile.fileobj.read(BLOCKSIZE)\r\n-> 1095 obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)\r\n 1096 obj.offset = tarfile.fileobj.tell() - BLOCKSIZE\r\n\r\n~\\lib\\tarfile.py in frombuf(cls, buf, encoding, errors)\r\n 1036\r\n-> 1037 chksum = nti(buf[148:156])\r\n 1038 if chksum not in calc_chksums(buf):\r\n\r\n~\\lib\\tarfile.py in nti(s)\r\n 188 except ValueError:\r\n--> 189 raise InvalidHeaderError(\"invalid header\")\r\n 190 return n\r\n\r\nInvalidHeaderError: invalid header\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nReadError Traceback (most recent call last)\r\n<ipython-input-5-6b9058341b2b> in <module>\r\n----> 1 iter(iter_dset).__next__()\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 363\r\n 364 def 
__iter__(self):\r\n--> 365 for key, example in self._iter():\r\n 366 if self.features:\r\n 367 # we encode the example for ClassLabel feature types for example\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in _iter(self)\r\n 360 else:\r\n 361 ex_iterable = self._ex_iterable\r\n--> 362 yield from ex_iterable\r\n 363\r\n 364 def __iter__(self):\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 77\r\n 78 def __iter__(self):\r\n---> 79 yield from self.generate_examples_fn(**self.kwargs)\r\n 80\r\n 81 def shuffle_data_sources(self, seed: Optional[int]) -> \"ExamplesIterable\":\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\amazon_polarity\\56923eeb72030cb6c4ea30c8a4e1162c26b25973475ac1f44340f0ec0f2936f4\\amazon_polarity.py in _generate_examples(self, filepath, files)\r\n 114 def _generate_examples(self, filepath, files):\r\n 115 \"\"\"Yields examples.\"\"\"\r\n--> 116 for path, f in files:\r\n 117 if path == filepath:\r\n 118 lines = (line.decode(\"utf-8\") for line in f)\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in __iter__(self)\r\n 616\r\n 617 def __iter__(self):\r\n--> 618 yield from self.generator(*self.args, **self.kwargs)\r\n 619\r\n 620\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in _iter_from_urlpath(cls, urlpath, use_auth_token)\r\n 644 ) -> Generator[Tuple, None, None]:\r\n 645 with xopen(urlpath, \"rb\", use_auth_token=use_auth_token) as f:\r\n--> 646 yield from cls._iter_from_fileobj(f)\r\n 647\r\n 648 @classmethod\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in _iter_from_fileobj(cls, f)\r\n 624 @classmethod\r\n 625 def _iter_from_fileobj(cls, f) -> Generator[Tuple, None, None]:\r\n--> 626 stream = tarfile.open(fileobj=f, mode=\"r|*\")\r\n 627 for tarinfo in stream:\r\n 628 file_path = tarinfo.name\r\n\r\n~\\lib\\tarfile.py in open(cls, name, mode, fileobj, bufsize, **kwargs)\r\n 1603 stream = _Stream(name, filemode, comptype, fileobj, bufsize)\r\n 1604 try:\r\n-> 1605 t = cls(name, filemode, stream, **kwargs)\r\n 1606 except:\r\n 1607 stream.close()\r\n\r\n~\\lib\\tarfile.py in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel, copybufsize)\r\n 1484 if self.mode == \"r\":\r\n 1485 self.firstmember = None\r\n-> 1486 self.firstmember = self.next()\r\n 1487\r\n 1488 if self.mode == \"a\":\r\n\r\n~\\lib\\tarfile.py in next(self)\r\n 2299 continue\r\n 2300 elif self.offset == 0:\r\n-> 2301 raise ReadError(str(e))\r\n 2302 except EmptyHeaderError:\r\n 2303 if self.offset == 0:\r\n\r\nReadError: invalid header\r\n\r\n```", "This error still happens, but for a different reason now: Google Drive returns a warning instead of the dataset.", "Met the same issue +1", "Hi ! Thanks for reporting. Google Drive changed the way to bypass the warning message recently.\r\n\r\nThe latest release `1.18.4` fixes this for datasets loaded in a regular way.\r\n\r\nWe opened a PR to fix this recently for streaming mode at #3843 - we'll do a new release once the fix is merged :)", "Fixed by:\r\n- #3787 \r\n- #3843" ]
completed
[]
load_dataset("amazon_polarity") NonMatchingChecksumError
NONE
https://api.github.com/repos/huggingface/datasets/issues/1856/timeline
Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-3-8559a03fe0f8> in <module>() ----> 1 dataset = load_dataset("amazon_polarity") 3 frames /usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download'] ```
https://api.github.com/repos/huggingface/datasets
null
805,360,200
https://api.github.com/repos/huggingface/datasets/issues/1856/comments
MDU6SXNzdWU4MDUzNjAyMDA=
null
1,856
https://api.github.com/repos/huggingface/datasets/issues/1856/events
false
closed
2021-02-10T07:27:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/1855
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1855/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/1855
[]
false
2021-02-10T12:33:09Z
2021-02-10T12:33:09Z
null
[]
null
[]
Minor fix in the docs
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1855/timeline
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1855.diff", "html_url": "https://github.com/huggingface/datasets/pull/1855", "merged_at": "2021-02-10T12:33:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/1855.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1855" }
805,256,579
https://api.github.com/repos/huggingface/datasets/issues/1855/comments
MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3
null
1,855
https://api.github.com/repos/huggingface/datasets/issues/1855/events
true
closed
2021-02-10T06:06:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/1854
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1854/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1854/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
https://github.com/huggingface/datasets/issues/1854
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2021-04-23T10:01:30Z
2021-04-23T10:01:30Z
null
[ "Hi @sshleifer.\r\n\r\nI am not sure of understanding the need of the `add_item` approach...\r\n\r\nBy just reading your \"Desired API\" section, I would say you could (nearly) get it with a 1-column Dataset:\r\n```python\r\ndata = {\"input_ids\": [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]}\r\nds = Dataset.from_dict(data)\r\nassert (ds[\"input_ids\"][0] == np.array([4,4,2])).all()\r\n```", "Hi @sshleifer :) \r\n\r\nWe don't have methods like `Dataset.add_batch` or `Dataset.add_entry/add_item` yet.\r\nBut that's something we'll add pretty soon. Would an API that looks roughly like this help ? Do you have suggestions ?\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset\r\n\r\ntokenized = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])\r\n\r\n# API suggestion (not available yet)\r\nd = Dataset()\r\nfor input_ids in tokenized:\r\n d.add_item({\"input_ids\": input_ids})\r\n\r\nprint(d[0][\"input_ids\"])\r\n# [4, 4, 2]\r\n```\r\n\r\nCurrently you can define a dataset with what @albertvillanova suggest, or via a generator using dataset builders. It's also possible to [concatenate datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets).", "Your API looks perfect @lhoestq, thanks!" ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Feature Request: Dataset.add_item
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1854/timeline
I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.map(binarizer)`. Is this possible at the moment? Is there an example? I'm happy to use raw `pa.Table` but not sure whether it will support uneven length entries. ### Desired API ```python import numpy as np tokenized: List[np.NDArray[np.int64]] = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5]) def build_dataset_from_tokenized(tokenized: List[np.NDArray[int]]) -> Dataset: """FIXME""" dataset = EmptyDataset() for t in tokenized: dataset.append(t) return dataset ds = build_dataset_from_tokenized(tokenized) assert (ds[0] == np.array([4,4,2])).all() ``` ### What I tried grep, google for "add one entry at a time", "datasets.append" ### Current Code This code achieves the same result but doesn't fit into the `add_item` abstraction. ```python dataset = load_dataset('text', data_files={'train': 'train.txt'}) tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_length=4096) def tokenize_function(examples): ids = tokenizer(examples['text'], return_attention_mask=False)['input_ids'] return {'input_ids': [x[1:] for x in ids]} ds = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=['text'], load_from_cache_file=not overwrite_cache) print(ds['train'][0]) => np array ``` Thanks in advance!
https://api.github.com/repos/huggingface/datasets
null
805,204,397
https://api.github.com/repos/huggingface/datasets/issues/1854/comments
MDU6SXNzdWU4MDUyMDQzOTc=
null
1,854
https://api.github.com/repos/huggingface/datasets/issues/1854/events
false
closed
2021-02-09T18:11:12Z
null
https://api.github.com/repos/huggingface/datasets/issues/1853
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1853/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/1853
[]
false
2021-02-10T12:32:34Z
2021-02-10T12:32:34Z
null
[]
null
[]
Configure library root logger at the module level
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1853/timeline
Configure the library root logger at the datasets.logging module level (singleton-like). By doing it this way: - we are sure configuration is done only once: module-level code is only run once - no need for a global variable - no need for a threading lock
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1853.diff", "html_url": "https://github.com/huggingface/datasets/pull/1853", "merged_at": "2021-02-10T12:32:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/1853.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1853" }
804,791,166
https://api.github.com/repos/huggingface/datasets/issues/1853/comments
MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4
null
1,853
https://api.github.com/repos/huggingface/datasets/issues/1853/events
true
closed
2021-02-09T15:02:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/1852
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1852/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1852/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zaidalyafeai", "id": 15667714, "login": "zaidalyafeai", "node_id": "MDQ6VXNlcjE1NjY3NzE0", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "type": "User", "url": "https://api.github.com/users/zaidalyafeai" }
https://github.com/huggingface/datasets/pull/1852
[]
false
2021-02-11T10:18:55Z
2021-02-11T10:18:55Z
null
[]
null
[]
Add Arabic Speech Corpus
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1852/timeline
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1852.diff", "html_url": "https://github.com/huggingface/datasets/pull/1852", "merged_at": "2021-02-11T10:18:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/1852.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1852" }
804,633,033
https://api.github.com/repos/huggingface/datasets/issues/1852/comments
MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1
null
1,852
https://api.github.com/repos/huggingface/datasets/issues/1852/events
true
closed
2021-02-09T12:51:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/1851
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4", "events_url": "https://api.github.com/users/pvl/events{/privacy}", "followers_url": "https://api.github.com/users/pvl/followers", "following_url": "https://api.github.com/users/pvl/following{/other_user}", "gists_url": "https://api.github.com/users/pvl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pvl", "id": 3596, "login": "pvl", "node_id": "MDQ6VXNlcjM1OTY=", "organizations_url": "https://api.github.com/users/pvl/orgs", "received_events_url": "https://api.github.com/users/pvl/received_events", "repos_url": "https://api.github.com/users/pvl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pvl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvl/subscriptions", "type": "User", "url": "https://api.github.com/users/pvl" }
https://github.com/huggingface/datasets/pull/1851
[]
false
2021-02-09T14:21:48Z
2021-02-09T14:21:48Z
null
[]
null
[]
set bert_score version dependency
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1851/timeline
Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1851.diff", "html_url": "https://github.com/huggingface/datasets/pull/1851", "merged_at": "2021-02-09T14:21:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/1851.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1851" }
804,523,174
https://api.github.com/repos/huggingface/datasets/issues/1851/comments
MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5
null
1,851
https://api.github.com/repos/huggingface/datasets/issues/1851/events
true
closed
2021-02-09T10:22:08Z
null
https://api.github.com/repos/huggingface/datasets/issues/1850
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4", "events_url": "https://api.github.com/users/ggdupont/events{/privacy}", "followers_url": "https://api.github.com/users/ggdupont/followers", "following_url": "https://api.github.com/users/ggdupont/following{/other_user}", "gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ggdupont", "id": 5583410, "login": "ggdupont", "node_id": "MDQ6VXNlcjU1ODM0MTA=", "organizations_url": "https://api.github.com/users/ggdupont/orgs", "received_events_url": "https://api.github.com/users/ggdupont/received_events", "repos_url": "https://api.github.com/users/ggdupont/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions", "type": "User", "url": "https://api.github.com/users/ggdupont" }
https://github.com/huggingface/datasets/pull/1850
[]
false
2021-02-09T15:16:26Z
2021-02-09T15:16:26Z
null
[ "Cleaned-up version of previous PR: https://github.com/huggingface/datasets/pull/1129", "@lhoestq FYI", "Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today", "Looks all good now ! Thanks a lot @ggdupont :)\r\nMerging" ]
null
[]
Add cord 19 dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1850/timeline
Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _infos(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class. - [x] Generate the metadata file dataset_infos.json for all configurations - [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card README.md using the template and at least fill the tags - [x] Both tests for the real data and the dummy data pass. ### Extras: - [x] add more metadata - [x] add full text - [x] add pre-computed document embedding
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1850.diff", "html_url": "https://github.com/huggingface/datasets/pull/1850", "merged_at": "2021-02-09T15:16:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/1850.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1850" }
804,412,249
https://api.github.com/repos/huggingface/datasets/issues/1850/comments
MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx
null
1,850
https://api.github.com/repos/huggingface/datasets/issues/1850/events
true
closed
2021-02-09T07:29:41Z
null
https://api.github.com/repos/huggingface/datasets/issues/1849
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1849/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/1849
[]
false
2021-03-15T05:59:37Z
2021-03-15T05:59:37Z
null
[ "@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. Also has lot on info on the speaker and the dialect. Could you please help me? An example of how to arrange it would be super helpful!\r\n\r\n", "Hey @vrindaprabhu - sure I'll help you :-) Could you open a first PR for TIMIT where you copy-paste more or less the `librispeech_asr` script: https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L93 (obviously replacing all the naming and links correctly...) and then you can list all possible outputs in the features dict: https://github.com/huggingface/datasets/blob/28be129db862ec89a87ac9349c64df6b6118aff4/datasets/librispeech_asr/librispeech_asr.py#L104 (words, phonemes should probably be of kind `datasets.Sequence(datasets.Value(\"string\"))` and texts I think should be of type `\"text\": datasets.Value(\"string\")`.\r\n\r\nWhen you've opened a first PR, I think it'll be much easier for us to take a look together :-) ", "I am sorry! I created the PR [#1903](https://github.com/huggingface/datasets/pull/1903#). Requesting your comments! CircleCI tests are failing, will address them along with your comments!" ]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
Add TIMIT
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1849/timeline
## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/ / *Wikipedia*: https://en.wikipedia.org/wiki/TIMIT - **Data:** *https://deepai.org/dataset/timit* - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
804,292,971
https://api.github.com/repos/huggingface/datasets/issues/1849/comments
MDU6SXNzdWU4MDQyOTI5NzE=
null
1,849
https://api.github.com/repos/huggingface/datasets/issues/1849/events
false
closed
2021-02-08T18:43:51Z
null
https://api.github.com/repos/huggingface/datasets/issues/1848
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/1848
[]
false
2021-02-10T12:29:35Z
2021-02-10T12:29:35Z
null
[]
null
[]
Refactoring: Create config module
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1848/timeline
Refactor configuration settings into their own module. This could be seen as a Pythonic singleton-like approach. Eventually, a config instance class might be created.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1848.diff", "html_url": "https://github.com/huggingface/datasets/pull/1848", "merged_at": "2021-02-10T12:29:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/1848.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1848" }
803,826,506
https://api.github.com/repos/huggingface/datasets/issues/1848/comments
MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1
null
1,848
https://api.github.com/repos/huggingface/datasets/issues/1848/events
true
closed
2021-02-08T18:41:15Z
null
https://api.github.com/repos/huggingface/datasets/issues/1847
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/pull/1847
[]
false
2021-02-09T17:53:21Z
2021-02-09T17:53:21Z
null
[ "Feel free to merge once the CI is all green ;)" ]
null
[]
[Metrics] Add word error metric metric
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1847/timeline
This PR adds the word error rate (WER) metric to datasets: https://en.wikipedia.org/wiki/Word_error_rate. WER is the main metric used in ASR (automatic speech recognition). `jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1847.diff", "html_url": "https://github.com/huggingface/datasets/pull/1847", "merged_at": "2021-02-09T17:53:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/1847.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1847" }
803,824,694
https://api.github.com/repos/huggingface/datasets/issues/1847/comments
MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0
null
1,847
https://api.github.com/repos/huggingface/datasets/issues/1847/events
true
closed
2021-02-08T18:14:42Z
null
https://api.github.com/repos/huggingface/datasets/issues/1846
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1846/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/1846
[]
false
2021-02-25T14:10:18Z
2021-02-25T14:10:18Z
null
[ "First I was thinking of the dict, which makes sense for .download, mapping URL to downloaded path. However does this make sense for .extract, mapping the downloaded path to the extracted path? I ask this because the user did not chose the downloaded path, so this is completely unknown for them...", "There could be several situations:\r\n- download a file with no extraction\r\n- download a file and extract it\r\n- download a file, extract it and then inside the output folder extract some more files\r\n- extract a local file (for datasets with data that are manually downloaded for example)\r\n- extract a local file, and then inside the output folder extract some more files\r\n\r\nSo I think it's ok to have `downloaded_paths` as a dict url -> downloaded_path and `extracted_paths` as a dict local_path -> extracted_path.", "OK. I am refactoring this. I have opened #1879, as an intermediate step..." ]
null
[]
Make DownloadManager downloaded/extracted paths accessible
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1846/timeline
Make the file paths downloaded/extracted by DownloadManager accessible. Close #1831. The approach: - I set these paths as DownloadManager attributes: these are DownloadManager's concerns - To access these from DatasetBuilder, I set the DownloadManager instance as a DatasetBuilder attribute: object composition
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1846.diff", "html_url": "https://github.com/huggingface/datasets/pull/1846", "merged_at": "2021-02-25T14:10:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/1846.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1846" }
803,806,380
https://api.github.com/repos/huggingface/datasets/issues/1846/comments
MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy
null
1,846
https://api.github.com/repos/huggingface/datasets/issues/1846/events
true
closed
2021-02-08T16:22:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/1845
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1845/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1845
[]
false
2021-02-09T14:22:38Z
2021-02-09T14:22:37Z
null
[ "Thank you @lhoestq. This logging configuration makes more sense to me.\r\n\r\nOnce propagation is allowed, the end-user can customize logging behavior and add custom handlers to the proper top logger in the hierarchy.\r\n\r\nAnd I also agree with following the best practices and removing any custom handlers:\r\n- it is the end user who has to implement any custom handlers\r\n- indeed, the previous logging problem with TensorFlow was due to the fact that absl did not follow best practices and had implemented a custom handler\r\n\r\nOur errors/warnings will be displayed anyway, even if we do not implement any custom handler. Since Python 3.2, logging has a built-in \"default\" handler (logging.lastResort) with the expected default behavior (sending error/warning messages to sys.stderr), which is used only if the end user has not configured any custom handler." ]
null
[]
Enable logging propagation and remove logging handler
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1845/timeline
We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691 But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826 I also removed the handler that was added since, according to the logging [documentation](https://docs.python.org/3/howto/logging.html#configuring-logging-for-a-library): > It is strongly advised that you do not add any handlers other than NullHandler to your library’s loggers. This is because the configuration of handlers is the prerogative of the application developer who uses your library. The application developer knows their target audience and what handlers are most appropriate for their application: if you add handlers ‘under the hood’, you might well interfere with their ability to carry out unit tests and deliver logs which suit their requirements. It could have been useful if we wanted to have a custom formatter for the logging but I think it's more important to keep the logging as default to not interfere with the users' logging management. Therefore I also removed the two methods `datasets.logging.enable_default_handler` and `datasets.logging.disable_default_handler`. cc @albertvillanova this should let you use capsys/caplog in pytest cc @LysandreJik @sgugger if you want to do the same in `transformers`
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1845.diff", "html_url": "https://github.com/huggingface/datasets/pull/1845", "merged_at": "2021-02-09T14:22:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/1845.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1845" }
803,714,493
https://api.github.com/repos/huggingface/datasets/issues/1845/comments
MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz
null
1,845
https://api.github.com/repos/huggingface/datasets/issues/1845/events
true
closed
2021-02-08T13:55:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/1844
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1844/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "events_url": "https://api.github.com/users/Valahaar/events{/privacy}", "followers_url": "https://api.github.com/users/Valahaar/followers", "following_url": "https://api.github.com/users/Valahaar/following{/other_user}", "gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Valahaar", "id": 19476123, "login": "Valahaar", "node_id": "MDQ6VXNlcjE5NDc2MTIz", "organizations_url": "https://api.github.com/users/Valahaar/orgs", "received_events_url": "https://api.github.com/users/Valahaar/received_events", "repos_url": "https://api.github.com/users/Valahaar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions", "type": "User", "url": "https://api.github.com/users/Valahaar" }
https://github.com/huggingface/datasets/issues/1844
[]
false
2021-02-12T17:38:58Z
2021-02-12T17:38:58Z
null
[ "Hi ! You're right this can can useful.\r\nThis should be easy to add, so feel free to give it a try if you want to contribute :)\r\nI think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L103)", "Hey @lhoestq , absolutely yes! Just one question before I start implementing. The ids found in the zip file have this format: \r\n(the following is line `22497315` of the `ids` file of the `de-en` dump)\r\n\r\n\r\n`de/2017/7006210/7063319.xml.gz en/2017/7006210/7050201.xml.gz 335 339 340` (every space is actually a tab, aside from the space between `339` and `340`)\r\n\r\n\r\nWhere filenames encode the information like this: `lang/year/imdb_id/opensubtitles_id.xml.gz` whereas the numbers correspond to the sentence ids which are linked together (i.e. sentence `335` of the German subtitle corresponds to lines `339` and `340` of the English file)\r\n\r\nThat being said, do you think I should stick to the raw sentence id (and replace the current sequential id) or should I include more detailed metadata (or both things maybe)?\r\n\r\nGoing with raw ID is surely simpler, but including `year`, `imdbId` and `subtitleId` should save space as they're just integers; besides, any operation (like filtering or grouping) will be much easier if users don't have to manually parse the ids every time.\r\nAs for the language-specific sentenceIds, what could be the best option? A list of integers or a comma-separated string?\r\n\r\n**Note:** I did not find any official information about this encoding, but it appears to check out:\r\nhttps://www.imdb.com/title/tt7006210/, https://www.opensubtitles.org/en/subtitles/7063319 and https://www.opensubtitles.org/en/subtitles/7050201 all link to the same episode, so I guess (I hope!) it's correct.\r\n\r\n", "I like the idea of having `year`, `imdbId` and `subtitleId` as columns for filtering for example.\r\nAnd for the `sentenceIds` a list of integers is fine.", "Thanks for improving it @Valahaar :) ", "Something like this? (adapted from [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles/open_subtitles.py#L114))\r\n\r\n```python\r\nresult = (\r\n sentence_counter,\r\n {\r\n \"id\": str(sentence_counter),\r\n \"meta\": {\r\n \"year\": year,\r\n \"imdbId\": imdb_id,\r\n \"subtitleId\": {l1: l1_sub_id, l2: l2_sub_id},\r\n \"sentenceIds\": {l1: [... source_sids ...], l2: [... target_sids ...]},\r\n # or maybe src/tgt? I'd go with the first one for consistency with 'translation'\r\n \"subtitleId\": {\"src\": l1_sub_id, \"tgt\": l2_sub_id},\r\n \"sentenceIds\": {\"src\": [... source_sids ...], \"tgt\": [... target_sids ...]},\r\n },\r\n \"translation\": {l1: x, l2: y},\r\n },\r\n )\r\n```\r\nOr at top level, avoiding nesting into 'meta'?", "Merged in #1865, closing. Thanks :)" ]
completed
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
Update Open Subtitles corpus with original sentence IDs
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1844/timeline
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat allowing for document-level machine translation (and other document-level stuff which could be cool to have); second, it's possible to have parallel sentences in multiple languages, as they share the same ids across bitexts. I think I should tag @abhishekkrthakur as he's the one who added it in the first place. Thanks!
https://api.github.com/repos/huggingface/datasets
null
803,588,125
https://api.github.com/repos/huggingface/datasets/issues/1844/comments
MDU6SXNzdWU4MDM1ODgxMjU=
null
1,844
https://api.github.com/repos/huggingface/datasets/issues/1844/events
false
open
2021-02-08T13:27:45Z
null
https://api.github.com/repos/huggingface/datasets/issues/1843
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1843/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1843/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/1843
[]
false
2021-05-14T14:53:34Z
null
null
[ "Hi @patrickvonplaten I would like to work on this dataset. \r\n\r\nThanks! ", "That's awesome! Actually, I just noticed that this dataset might become a bit too big!\r\n\r\nMuST-C is the main dataset used for IWSLT19 and should probably be added as a standalone dataset. Would you be interested also in adding `datasets/MuST-C` instead?\r\n\r\nDescription: \r\n_MuST-C is a multilingual speech translation corpus whose size and quality facilitates the training of end-to-end systems for speech translation from English into several languages. For each target language, MuST-C comprises several hundred hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations._\r\n\r\nPaper: https://www.aclweb.org/anthology/N19-1202.pdf\r\n\r\nDataset: https://ict.fbk.eu/must-c/ (One needs to fill out a short from to download the data, but it's very easy).\r\n\r\nIt would be awesome if you're interested in adding this datates. I'm very happy to guide you through the PR! I think the easiest way to start would probably be to read [this README on how to add a dataset](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md) and open a PR. Think you can copy & paste some code from:\r\n\r\n- Librispeech_asr: https://github.com/huggingface/datasets/blob/master/datasets/librispeech_asr/librispeech_asr.py\r\n- Flores Translation: https://github.com/huggingface/datasets/blob/master/datasets/flores/flores.py\r\n\r\nThink all the rest can be handled on the PR :-) ", "Hi @patrickvonplaten \r\nI have tried downloading this dataset, but the connection seems to reset all the time. I have tried it via the browser, wget, and using gdown . But it gives me an error message. _\"The server is busy or down, pls try again\"_ (rephrasing the message here)\r\n\r\nI have completed adding 4 datasets in the previous data sprint (including the IWSLT dataset #1676 ) ...so just checking if you are able to download it at your end. Otherwise will write to the dataset authors to update the links. \r\n\r\n\r\n\r\n\r\n", "Let me check tomorrow! Thanks for leaving this message!", "cc @patil-suraj for notification ", "@skyprince999, I think I'm getting the same error you're getting :-/\r\n\r\n```\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nIt would be great if you could write the authors to see whether they can fix it.\r\nAlso cc @lhoestq - do you think we could mirror the dataset? ", "Also there are huge those datasets. Think downloading MuST-C v1.2 amounts to ~ 1000GB... because there are 14 possible configs each around 60-70GB. I think users mostly will only use one of the 14 configs so that they would only need, in theory, will have to download ~60GB which is ok. But I think this functionality doesn't exist yet in `datasets` no? cc @lhoestq ", "> Also cc @lhoestq - do you think we could mirror the dataset?\r\n\r\nYes we can mirror it if the authors are fine with it. 
You can create a dataset repo on huggingface.co (possibly under the relevant org) and add the mirrored data files.\r\n\r\n> I think users mostly will only use one of the 14 configs so that they would only need, in theory, will have to download ~60GB which is ok. But I think this functionality doesn't exist yet in datasets no? cc @lhoestq\r\n\r\nIf there are different download links for each configuration we can make the dataset builder download only the files related to the requested configuration.", "I have written to the dataset authors, highlighting this issue. Waiting for their response. \r\n\r\nUpdate on 25th Feb: \r\nThe authors have replied back, they are updating the download link and will revert back shortly! \r\n\r\n```\r\nfirst of all thanks a lot for being interested in MuST-C and for building the data-loader.\r\n\r\nBefore answering your request, I'd like to clarify that the creation, maintenance, and expansion of MuST-c are not supported by any funded project, so this means that we need to find economic support for all these activities. This also includes permanently moving all the data to AWS or GCP. We are working at this with the goal of facilitating the use of MuST-C, but this is not something that can happen today. We hope to have some news ASAP and you will be among the first to be informed.\r\n\r\nI hope you understand our situation.\r\n```\r\n\r\n", "Awesome, actually @lhoestq let's just ask the authors if we should host the dataset no? They could just use our links then as well for their website - what do you think? Is it fine to use our AWS dataset storage also as external links? ", "Yes definitely. Shall we suggest them to create a dataset repository under their org on huggingface.co ? @julien-c \r\nThe dataset is around 1TB", "Sounds good! \r\n\r\nOrder of magnitude is storage costs ~$20 per TB per month (not including bandwidth). \r\n\r\nHappy to provide this to the community as I feel this is an important dataset. Let us know what the authors want to do!\r\n\r\n", "Great! @skyprince999, do you think you could ping the authors here or link to this thread? I think it could be a cool idea to host the dataset on our side then", "Done. They replied back, and they want to have a call over a meet/ skype. Is that possible ? \r\nBtw @patrickvonplaten you are looped in that email (_pls check you gmail account_) ", "Hello! Any news on this?", "@gegallego there were some concerns regarding dataset usage & attribution by a for-profit company, so couldn't take it forward. Also the download links were unstable. \r\nBut I guess if you want to test the fairseq benchmarks, you can connect with them directly for downloading the dataset. ", "Yes, that dataset is not easy to download... I had to copy it to my Google Drive and use `rsync` to be able to download it.\r\nHowever, we could add the dataset with a manual download, right?", "yes that is possible. I couldn't unfortunately complete this PR, If you would like to add it, please feel free to do it. " ]
null
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
MustC Speech Translation
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1843/timeline
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - all data under "Allowed Training Data" and "Development and Evalutaion Data for TED/How2" - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
803,565,393
https://api.github.com/repos/huggingface/datasets/issues/1843/comments
MDU6SXNzdWU4MDM1NjUzOTM=
null
1,843
https://api.github.com/repos/huggingface/datasets/issues/1843/events
false
closed
2021-02-08T13:25:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/1842
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1842/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1842/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/1842
[]
false
2023-02-28T16:29:22Z
2023-02-28T16:29:22Z
null
[ "Available here: ~https://huggingface.co/datasets/ami~ https://huggingface.co/datasets/edinburghcstr/ami", "@mariosasko actually the \"official\" AMI dataset can be found here: https://huggingface.co/datasets/edinburghcstr/ami -> the old one under `datasets/ami` doesn't work and should be deleted. \r\n\r\nThe new one was tested by fine-tuning a Wav2Vec2 model on it + we uploaded all the processed audio directly into it", "@patrickvonplaten Thanks for correcting me! I've updated the link." ]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
Add AMI Corpus
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1842/timeline
## Adding a Dataset - **Name:** *AMI* - **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/ - **Data:** *http://groups.inf.ed.ac.uk/ami/download/* - Select all cases in 1) and select "Individual Headsets" & "Microphone array" for 2) - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
803,563,149
https://api.github.com/repos/huggingface/datasets/issues/1842/comments
MDU6SXNzdWU4MDM1NjMxNDk=
null
1,842
https://api.github.com/repos/huggingface/datasets/issues/1842/events
false
closed
2021-02-08T13:22:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/1841
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1841/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/1841
[]
false
2021-03-15T05:59:02Z
2021-03-15T05:59:02Z
null
[]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
Add ljspeech
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1841/timeline
## Adding a Dataset - **Name:** *ljspeech* - **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours. The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.)* - **Paper:** *Homepage*: https://keithito.com/LJ-Speech-Dataset/ - **Data:** *https://keithito.com/LJ-Speech-Dataset/* - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/ljspeech If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
803,561,123
https://api.github.com/repos/huggingface/datasets/issues/1841/comments
MDU6SXNzdWU4MDM1NjExMjM=
null
1,841
https://api.github.com/repos/huggingface/datasets/issues/1841/events
false
closed
2021-02-08T13:21:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/1840
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/1840
[]
false
2022-03-20T15:23:40Z
2021-03-15T05:56:21Z
null
[ "I have started working on adding this dataset.", "Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the download link somehow from the XLM tree of the website \r\n2) If this doesn't work we force the user to download the data himself and add a `\"data_dir\"` as an input parameter. E.g. you can take a look at how it is done for [this](https://github.com/huggingface/datasets/blob/66f2a7eece98d2778bd22bb5034cb7c2376032d4/datasets/arxiv_dataset/arxiv_dataset.py#L66) \r\n\r\nAlso the documentation here: https://huggingface.co/docs/datasets/add_dataset.html?highlight=data_dir#downloading-data-files-and-organizing-splits (especially the \"note\") might be helpful.", "Let me know if you have any other questions", "I added a Work in Progress pull request (hope that is ok). I've made a card for the dataset and filled out the common_voice.py file with information about the datset (not completely).\r\n\r\nI didn't manage to get the tagging tool working locally on my machine but will look into that later.\r\n\r\nLeft to do.\r\n\r\n- Tag the dataset\r\n- Add missing information and update common_voice.py\r\n\r\nhttps://github.com/huggingface/datasets/pull/1886", "Awesome! I left a longer comment on the PR :-)", "I saw that this current datasets package holds common voice version 6.1, how to add the new version 7.0 that is already available?", "Will me merged next week - we're working on it :-)", "Common voice still appears to be a 6.1. Is the plan still to upgrade to 7.0?", "We actually already have the code and everything ready to add Common Voice 7.0 to `datasets` but are still waiting for the common voice authors to give us the green light :-) \r\n\r\nAlso gently pinging @phirework and @milupo here", "Common Voice 7.0 is available here now: https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0", "For anyone else stumbling upon this thread, the 8.0 version is also available now: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0" ]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
Add common voice
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1840/timeline
## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/common_voice If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
803,560,039
https://api.github.com/repos/huggingface/datasets/issues/1840/comments
MDU6SXNzdWU4MDM1NjAwMzk=
null
1,840
https://api.github.com/repos/huggingface/datasets/issues/1840/events
false
open
2021-02-08T13:19:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/1839
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1839/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1839/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/1839
[]
false
2021-02-08T13:28:31Z
null
null
[]
null
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
Add Voxforge
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1839/timeline
## Adding a Dataset - **Name:** *voxforge* - **Description:** *VoxForge is a language classification dataset. It consists of user submitted audio clips submitted to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constantly updated, and for the sake of reproducibility, this release contains only recordings submitted prior to 2020-01-01. The samples are splitted between train, validation and testing so that samples from each speaker belongs to exactly one split.* - **Paper:** *Homepage*: http://www.voxforge.org/ - **Data:** *http://www.voxforge.org/home/downloads* - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/voxforge If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
803,559,164
https://api.github.com/repos/huggingface/datasets/issues/1839/comments
MDU6SXNzdWU4MDM1NTkxNjQ=
null
1,839
https://api.github.com/repos/huggingface/datasets/issues/1839/events
false
closed
2021-02-08T13:17:52Z
null
https://api.github.com/repos/huggingface/datasets/issues/1838
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1838/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1838/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/1838
[]
false
2022-10-04T14:34:12Z
2022-10-04T14:34:12Z
null
[ "Hi @patrickvonplaten \r\nI can have a look to this dataset later since I am trying to add the OpenSLR dataset https://github.com/huggingface/datasets/pull/2173\r\nHopefully I have enough space since the compressed file is 21GB. The release 3 is even bigger: 54GB :-0", "Resolved via https://github.com/huggingface/datasets/pull/4309" ]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
Add tedlium
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1838/timeline
## Adding a Dataset - **Name:** *tedlium* - **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.* - **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www.openslr.org/51/ - **Data:** http://www.openslr.org/7/ - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/tedlium If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
803,557,521
https://api.github.com/repos/huggingface/datasets/issues/1838/comments
MDU6SXNzdWU4MDM1NTc1MjE=
null
1,838
https://api.github.com/repos/huggingface/datasets/issues/1838/events
false
closed
2021-02-08T13:15:28Z
null
https://api.github.com/repos/huggingface/datasets/issues/1837
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1837/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/1837
[]
false
2021-12-28T15:05:08Z
2021-12-28T15:05:08Z
null
[ "@patrickvonplaten I'd like to take this, if nobody has already done it. I have added datasets before through the datasets sprint, but I feel rusty on the details, so I'll look at the guide as well as similar audio PRs (#1878 in particular comes to mind). If there is any detail I should be aware of please, let me know! Otherwise, I'll try to write up a PR in the coming days.", "That sounds great @jaketae - let me know if you need any help i.e. feel free to ping me on a first PR :-)" ]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
Add VCTK
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1837/timeline
## Adding a Dataset - **Name:** *VCTK* - **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive.* - **Paper:** Homepage: https://datashare.ed.ac.uk/handle/10283/3443 - **Data:** https://datashare.ed.ac.uk/handle/10283/3443 - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/vctk If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
803,555,650
https://api.github.com/repos/huggingface/datasets/issues/1837/comments
MDU6SXNzdWU4MDM1NTU2NTA=
null
1,837
https://api.github.com/repos/huggingface/datasets/issues/1837/events
false
closed
2021-02-08T12:45:53Z
null
https://api.github.com/repos/huggingface/datasets/issues/1836
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1836/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1836/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "events_url": "https://api.github.com/users/Paethon/events{/privacy}", "followers_url": "https://api.github.com/users/Paethon/followers", "following_url": "https://api.github.com/users/Paethon/following{/other_user}", "gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Paethon", "id": 237550, "login": "Paethon", "node_id": "MDQ6VXNlcjIzNzU1MA==", "organizations_url": "https://api.github.com/users/Paethon/orgs", "received_events_url": "https://api.github.com/users/Paethon/received_events", "repos_url": "https://api.github.com/users/Paethon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Paethon/subscriptions", "type": "User", "url": "https://api.github.com/users/Paethon" }
https://github.com/huggingface/datasets/issues/1836
[]
false
2021-02-10T16:14:58Z
2021-02-10T16:14:58Z
null
[ "Thanks for the heads up ! I'm opening a PR to fix that" ]
completed
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
test.json has been removed from the limit dataset repo (breaks dataset)
NONE
https://api.github.com/repos/huggingface/datasets/issues/1836/timeline
https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51 The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works: `https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd8848f0f11527c77dcf168fefd2b23/data`
https://api.github.com/repos/huggingface/datasets
null
803,531,837
https://api.github.com/repos/huggingface/datasets/issues/1836/comments
MDU6SXNzdWU4MDM1MzE4Mzc=
null
1,836
https://api.github.com/repos/huggingface/datasets/issues/1836/events
false
open
2021-02-08T12:36:38Z
null
https://api.github.com/repos/huggingface/datasets/issues/1835
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1835/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1835/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/1835
[]
false
2024-02-01T10:25:03Z
null
null
[ "@patrickvonplaten not sure whether it is still needed, but willing to tackle this issue", "Hey @patrickvonplaten, I have managed to download the zip on [here]( http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/download.html) and successfully uploaded all the files on a hugging face dataset: \r\n\r\nhttps://huggingface.co/datasets/ksbai123/Chime4\r\n\r\nHowever I am getting this error when trying to use the dataset viewer:\r\n\r\n![Screenshot 2023-12-27 at 18 40 59](https://github.com/huggingface/datasets/assets/35923560/a5a9ed3d-8dbd-41c4-b83a-4e80728b1450)\r\n\r\nCan you take a look and let me know if I have missed any files please", "@patrickvonplaten ?", "Hi @KossaiSbai,\r\n\r\nThanks for your contribution.\r\n\r\nAs the issue is not strictly related to the `datasets` library, but to the specific implementation of the CHiME4 dataset, I have opened an issue in the Discussion tab of the dataset: https://huggingface.co/datasets/ksbai123/Chime4/discussions/2\r\nLet's continue the discussion there!" ]
null
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
Add CHiME4 dataset
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1835/timeline
## Adding a Dataset - **Name:** Chime4 - **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR - **Paper:** Dataset comes from a channel: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results paper: - **Data:** http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/download.html - **Motivation:** So far there are very little datasets for speech in `datasets`. Only `lbirispeech_asr` so far. If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
803,524,790
https://api.github.com/repos/huggingface/datasets/issues/1835/comments
MDU6SXNzdWU4MDM1MjQ3OTA=
null
1,835
https://api.github.com/repos/huggingface/datasets/issues/1835/events
false
closed
2021-02-08T12:26:35Z
null
https://api.github.com/repos/huggingface/datasets/issues/1834
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "events_url": "https://api.github.com/users/Paethon/events{/privacy}", "followers_url": "https://api.github.com/users/Paethon/followers", "following_url": "https://api.github.com/users/Paethon/following{/other_user}", "gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Paethon", "id": 237550, "login": "Paethon", "node_id": "MDQ6VXNlcjIzNzU1MA==", "organizations_url": "https://api.github.com/users/Paethon/orgs", "received_events_url": "https://api.github.com/users/Paethon/received_events", "repos_url": "https://api.github.com/users/Paethon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Paethon/subscriptions", "type": "User", "url": "https://api.github.com/users/Paethon" }
https://github.com/huggingface/datasets/pull/1834
[]
false
2021-02-08T12:42:50Z
2021-02-08T12:42:50Z
null
[ "OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue." ]
null
[]
Fixes base_url of limit dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/1834/timeline
`test.json` is not available in the master branch of the repository anymore. Linking to a specific commit.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1834.diff", "html_url": "https://github.com/huggingface/datasets/pull/1834", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1834.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1834" }
803,517,094
https://api.github.com/repos/huggingface/datasets/issues/1834/comments
MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4
null
1,834
https://api.github.com/repos/huggingface/datasets/issues/1834/events
true
closed
2021-02-08T01:39:49Z
null
https://api.github.com/repos/huggingface/datasets/issues/1833
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1833/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1833/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4", "events_url": "https://api.github.com/users/pjox/events{/privacy}", "followers_url": "https://api.github.com/users/pjox/followers", "following_url": "https://api.github.com/users/pjox/following{/other_user}", "gists_url": "https://api.github.com/users/pjox/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pjox", "id": 635220, "login": "pjox", "node_id": "MDQ6VXNlcjYzNTIyMA==", "organizations_url": "https://api.github.com/users/pjox/orgs", "received_events_url": "https://api.github.com/users/pjox/received_events", "repos_url": "https://api.github.com/users/pjox/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pjox/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pjox/subscriptions", "type": "User", "url": "https://api.github.com/users/pjox" }
https://github.com/huggingface/datasets/pull/1833
[]
false
2021-02-12T14:09:25Z
2021-02-12T14:08:24Z
null
[ "@lhoestq Thanks for the suggestions! I agree with all of them. Should I accept them one by one or can I accept them all at once? When I try to load the whole diff GitHub is complaining and it does no render them well (probably my browser?) 😅 ", "I just merged the tables as suggested 😄 . However I noticed something weird, the train sizes are identical for both the original and deduplicated files ... This is not normal, in general the original files are almost twice as big as the deduplicated ones 🤔 ", "Good catch @pjox ! I just checked and this is because the scripts doesn't handle having several blank lines in a row.\r\nBlank lines introduced by deduplication are currently not ignored so we end up with the same number of examples in the dataset as the original version (but with empty examples...)\r\nI fixed that in this [commit](https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383). I'm re-running the metadata generation for deduplicated configs.", "I got the new sizes today, will update the dataset_infos.json and the dataset card tomorrow", "> I got the new sizes today, will update the dataset_infos.json and the dataset card tomorrow\r\n\r\ngreat, I just wanted to report that I got error message \"NonMatchingSplitsSizesError\" when I tried to load one of the oscar dataset.", "Hi @cahya-wirawan, which configuration of oscar do you have this issue with ?", "Ok I see you're having this issue because I haven't updated the sizes yet ! I'm opening a PR\r\n\r\nI just checked and indeed there's an issue with the `deduplicated` configurations since the commit I mentioned above.\r\nI'm fixing this by using the new sizes I got yesterday :) \r\n", "I just updated the size in the table @pjox it should be good now :) \r\nI also updated the sizes in the dataset_infos.json in https://github.com/huggingface/datasets/pull/1868 (merged)", "Thanks @lhoestq for fixing the issue, it works now", "Thank you so much @lhoestq !" ]
null
[]
Add OSCAR dataset card
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1833/timeline
I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824).
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1833.diff", "html_url": "https://github.com/huggingface/datasets/pull/1833", "merged_at": "2021-02-12T14:08:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/1833.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1833" }
803,120,978
https://api.github.com/repos/huggingface/datasets/issues/1833/comments
MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx
null
1,833
https://api.github.com/repos/huggingface/datasets/issues/1833/events
true
closed
2021-02-07T06:52:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/1832
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1832/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1832/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4", "events_url": "https://api.github.com/users/JimmyJim1/events{/privacy}", "followers_url": "https://api.github.com/users/JimmyJim1/followers", "following_url": "https://api.github.com/users/JimmyJim1/following{/other_user}", "gists_url": "https://api.github.com/users/JimmyJim1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JimmyJim1", "id": 68724553, "login": "JimmyJim1", "node_id": "MDQ6VXNlcjY4NzI0NTUz", "organizations_url": "https://api.github.com/users/JimmyJim1/orgs", "received_events_url": "https://api.github.com/users/JimmyJim1/received_events", "repos_url": "https://api.github.com/users/JimmyJim1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JimmyJim1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JimmyJim1/subscriptions", "type": "User", "url": "https://api.github.com/users/JimmyJim1" }
https://github.com/huggingface/datasets/issues/1832
[]
false
2021-02-08T17:27:29Z
2021-02-08T17:27:29Z
null
[]
completed
[]
Looks like nokogumbo is up-to-date now, so this is no longer needed.
NONE
https://api.github.com/repos/huggingface/datasets/issues/1832/timeline
Looks like nokogumbo is up-to-date now, so this is no longer needed. __Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__
https://api.github.com/repos/huggingface/datasets
null
802,880,897
https://api.github.com/repos/huggingface/datasets/issues/1832/comments
MDU6SXNzdWU4MDI4ODA4OTc=
null
1,832
https://api.github.com/repos/huggingface/datasets/issues/1832/events
false
closed
2021-02-07T05:33:36Z
null
https://api.github.com/repos/huggingface/datasets/issues/1831
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1831/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1831/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4", "events_url": "https://api.github.com/users/svjack/events{/privacy}", "followers_url": "https://api.github.com/users/svjack/followers", "following_url": "https://api.github.com/users/svjack/following{/other_user}", "gists_url": "https://api.github.com/users/svjack/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/svjack", "id": 27874014, "login": "svjack", "node_id": "MDQ6VXNlcjI3ODc0MDE0", "organizations_url": "https://api.github.com/users/svjack/orgs", "received_events_url": "https://api.github.com/users/svjack/received_events", "repos_url": "https://api.github.com/users/svjack/repos", "site_admin": false, "starred_url": "https://api.github.com/users/svjack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/svjack/subscriptions", "type": "User", "url": "https://api.github.com/users/svjack" }
https://github.com/huggingface/datasets/issues/1831
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2021-02-25T14:10:18Z
2021-02-25T14:10:18Z
null
[ "Hi ! The `dl_manager` is a `DownloadManager` object and is responsible for downloading the raw data files.\r\nIt is used by dataset builders in their `_split_generators` method to download the raw data files that are necessary to build the datasets splits.\r\n\r\nThe `Conll2003` class is a dataset builder, and so you can download all the raw data files by calling `_split_generators` with a download manager:\r\n```python\r\nfrom datasets import DownloadManager\r\nfrom datasets.load import import_main_class\r\n\r\nconll2003_builder = import_main_class(...)\r\n\r\ndl_manager = DownloadManager()\r\nsplis_generators = conll2003_builder._split_generators(dl_manager)\r\n```\r\n\r\nThen you can see what files have been downloaded with\r\n```python\r\ndl_manager.get_recorded_sizes_checksums()\r\n```\r\nIt returns a dictionary with the format {url: {num_bytes: int, checksum: str}}\r\n\r\nThen you can get the actual location of the downloaded files with\r\n```python\r\nfrom datasets import cached_path\r\n\r\nlocal_path_to_downloaded_file = cached_path(url)\r\n```\r\n\r\n------------------\r\n\r\nNote that you can also get the urls from the Dataset object:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nconll2003 = load_dataset(\"conll2003\")\r\nprint(conll2003[\"train\"].download_checksums)\r\n```\r\nIt returns the same dictionary with the format {url: {num_bytes: int, checksum: str}}", "I am afraid that there is not a very straightforward way to get that location.\r\n\r\nAnother option, from _split_generators would be to use:\r\n- `dl_manager._download_config.cache_dir` to get the directory where all the raw downloaded files are:\r\n ```python\r\n download_dir = dl_manager._download_config.cache_dir\r\n ```\r\n- the function `datasets.utils.file_utils.hash_url_to_filename` to get the filenames of the raw downloaded files:\r\n ```python\r\n filenames = [hash_url_to_filename(url) for url in urls_to_download.values()]\r\n ```\r\nTherefore the complete path to the raw downloaded files would be the join of both:\r\n```python\r\ndownloaded_paths = [os.path.join(download_dir, filename) for filename in filenames]\r\n```\r\n\r\nMaybe it would be interesting to make these paths accessible more easily. I could work on this. What do you think, @lhoestq ?", "Sure it would be nice to have an easier access to these paths !\r\nThe dataset builder could have a method to return those, what do you think ?\r\nFeel free to work on this @albertvillanova , it would be a nice addition :) \r\n\r\nYour suggestion does work as well @albertvillanova if you complete it by specifying `etag=` to `hash_url_to_filename`.\r\n\r\nThe ETag is obtained by a HEAD request and is used to know if the file on the remote host has changed. Therefore if a file is updated on the remote host, then the hash returned by `hash_url_to_filename` is different.", "Once #1846 will be merged, the paths to the raw downloaded files will be accessible as:\r\n```python\r\nbuilder_instance.dl_manager.downloaded_paths\r\n``` " ]
completed
[]
Some question about raw dataset download info in the project .
NONE
https://api.github.com/repos/huggingface/datasets/issues/1831/timeline
Hi , i review the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py in the _split_generators function is the truly logic of download raw datasets with dl_manager and use Conll2003 cls by use import_main_class in load_dataset function My question is that , with this logic it seems that i can not have the raw dataset download location in variable in downloaded_files in _split_generators. If someone also want use huggingface datasets as raw dataset downloader, how can he retrieve the raw dataset download path from attributes in datasets.dataset_dict.DatasetDict ?
https://api.github.com/repos/huggingface/datasets
null
802,868,854
https://api.github.com/repos/huggingface/datasets/issues/1831/comments
MDU6SXNzdWU4MDI4Njg4NTQ=
null
1,831
https://api.github.com/repos/huggingface/datasets/issues/1831/events
false
open
2021-02-06T21:00:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/1830
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1830/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1830/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7662740?v=4", "events_url": "https://api.github.com/users/wumpusman/events{/privacy}", "followers_url": "https://api.github.com/users/wumpusman/followers", "following_url": "https://api.github.com/users/wumpusman/following{/other_user}", "gists_url": "https://api.github.com/users/wumpusman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wumpusman", "id": 7662740, "login": "wumpusman", "node_id": "MDQ6VXNlcjc2NjI3NDA=", "organizations_url": "https://api.github.com/users/wumpusman/orgs", "received_events_url": "https://api.github.com/users/wumpusman/received_events", "repos_url": "https://api.github.com/users/wumpusman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wumpusman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wumpusman/subscriptions", "type": "User", "url": "https://api.github.com/users/wumpusman" }
https://github.com/huggingface/datasets/issues/1830
[]
false
2021-02-24T21:56:14Z
null
null
[ "Hi @wumpusman \r\n`datasets` has a caching mechanism that allows to cache the results of `.map` so that when you want to re-run it later it doesn't recompute it again.\r\nSo when you do `.map`, what actually happens is:\r\n1. compute the hash used to identify your `map` for the cache\r\n2. apply your function on every batch\r\n\r\nThis can explain the time difference between your different experiments.\r\n\r\nThe hash computation time depends of how complex your function is. For a tokenizer, the hash computation scans the lists of the words in the tokenizer to identify this tokenizer. Usually it takes 2-3 seconds.\r\n\r\nAlso note that you can disable caching though using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```", "Hi @lhoestq ,\r\n\r\nThanks for the reply. It's entirely possible that is the issue. Since it's a side project I won't be looking at it till later this week, but, I'll verify it by disabling caching and hopefully I'll see the same runtime. \r\n\r\nAppreciate the reference,\r\n\r\nMichael", "I believe this is an actual issue, tokenizing a ~4GB txt file went from an hour and a half to ~10 minutes when I switched from my pre-trained tokenizer(on the same dataset) to the default gpt2 tokenizer.\r\nBoth were loaded using:\r\n```\r\nAutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)\r\n```\r\nI trained the tokenizer using ByteLevelBPETokenizer from the Tokenizers library and save it to a tokenizer.json file.\r\n\r\nI have tested the caching ideas above, changing the number of process, the TOKENIZERS_PARALLELISM env variable, keep_in_memory=True and batching with different sizes.\r\n\r\nApologies I can't really upload much code, but wanted to back up the finding and hopefully a fix/the problem can be found.\r\nI will comment back if I find a fix as well.", "Hi @johncookds do you think this can come from one tokenizer being faster than the other one ? Can you try to compare their speed without using `datasets` just to make sure ?", "Hi yes, I'm closing the loop here with some timings below. The issue seems to be at least somewhat/mainly with the tokenizer's themselves. Moreover legacy saves of the trainer tokenizer perform faster but differently than the new tokenizer.json saves(note nothing about the training process/adding of special tokens changed between the top two trained tokenizer tests, only the way it was saved). This is only a 3x slowdown vs like a 10x but I think the slowdown is most likely due to this.\r\n\r\n```\r\ntrained tokenizer - tokenizer.json save (same results for AutoTokenizer legacy_format=False):\r\nTokenizer time(seconds): 0.32767510414123535\r\nTokenized avg. length: 323.01\r\n\r\ntrained tokenizer - AutoTokenizer legacy_format=True:\r\nTokenizer time(seconds): 0.09258866310119629\r\nTokenized avg. length: 301.01\r\n\r\nGPT2 Tokenizer from huggingface\r\nTokenizer time(seconds): 0.1010282039642334\r\nTokenized avg. length: 461.21\r\n```", "@lhoestq ,\r\n\r\nHi, which version of datasets has datasets.set_caching_enabled(False)? I get \r\nmodule 'datasets' has no attribute 'set_caching_enabled'. 
To hopefully get around this, I reran my code on a new set of data, and did so only once.\r\n\r\n@johncookds , thanks for chiming in, it looks this might be an issue of Tokenizer.\r\n\r\n**Tokenizer**: The runtime of GPT2TokenizerFast.from_pretrained(\"gpt2\") on 1000 chars is: **143 ms**\r\n**SlowTokenizer**: The runtime of a locally saved and loaded Tokenizer using the same vocab on 1000 chars is: **4.43 s**\r\n\r\nThat being said, I compared performance on the map function:\r\n\r\nRunning Tokenizer versus using it in the map function for 1000 chars goes from **141 ms** to **356 ms** \r\nRunning SlowTokenizer versus using it in the map function for 1000 chars with a single element goes from **4.43 s** to **9.76 s**\r\n\r\nI'm trying to figure out why the overhead of map would increase the time by double (figured it would be a fixed increase in time)? Though maybe this is expected behavior.\r\n\r\n@lhoestq, do you by chance know how I can redirect this issue to Tokenizer?\r\n\r\nRegards,\r\n\r\nMichael", "Thanks for the experiments @johncookds and @wumpusman ! \r\n\r\n> Hi, which version of datasets has datasets.set_caching_enabled(False)?\r\n\r\nCurrently you have to install `datasets` from source to have this feature, but this will be available in the next release in a few days.\r\n\r\n> I'm trying to figure out why the overhead of map would increase the time by double (figured it would be a fixed increase in time)? Though maybe this is expected behavior.\r\n\r\nCould you also try with double the number of characters ? This should let us have an idea of the fixed cost (hashing) and the dynamic cost (actual tokenization, grows with the size of the input)\r\n\r\n> @lhoestq, do you by chance know how I can redirect this issue to Tokenizer?\r\n\r\nFeel free to post an issue on the `transformers` repo. Also I'm sure there should be related issues so you can also look for someone with the same concerns on the `transformers` repo.", "@lhoestq,\r\n\r\nI just checked that previous run time was actually 3000 chars. I increased it to 6k chars, again, roughly double.\r\n\r\nSlowTokenizer **7.4 s** to **15.7 s**\r\nTokenizer: **276 ms** to **616 ms**\r\n\r\nI'll post this issue on Tokenizer, seems it hasn't quite been raised (albeit I noticed a similar issue that might relate).\r\n\r\nRegards,\r\n\r\nMichael", "Hi, \r\nI'm following up here as I found my exact issue. It was with saving and re-loading the tokenizer. When I trained then processed the data without saving and reloading it, it was 10x-100x faster than when I saved and re-loaded it.\r\nBoth resulted in the exact same tokenized datasets as well. \r\nThere is additionally a bug where the older legacy tokenizer save does not preserve a learned tokenizing behavior if trained from scratch.\r\nUnderstand its not exactly Datasets related but hope it can help someone if they have the same issue.\r\nThanks!" ]
null
[]
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
NONE
https://api.github.com/repos/huggingface/datasets/issues/1830/timeline
This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_unique = set(text.split(" ")) for i in words_unique: original_tokenizer.add_tokens(i) original_tokenizer.save_pretrained(path) tokenizer2 = GPT2Tokenizer.from_pretrained(os.path.join(experiment_path,experiment_name,"tokenizer_squad")) train_set_baby=Dataset.from_dict({"text":[train_set["text"][0][0:50]]}) ```` I then applied the dataset map function on a fairly small set of text: ``` %%time train_set_baby = train_set_baby.map(lambda d:tokenizer2(d["text"]),batched=True) ``` The run time for train_set_baby.map was 6 seconds, and the batch itself was 2.6 seconds **100% 1/1 [00:02<00:00, 2.60s/ba] CPU times: user 5.96 s, sys: 36 ms, total: 5.99 s Wall time: 5.99 s** In comparison using (even after adding additional tokens): ` tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")` ``` %%time train_set_baby = train_set_baby.map(lambda d:tokenizer2(d["text"]),batched=True) ``` The time is **100% 1/1 [00:00<00:00, 34.09ba/s] CPU times: user 68.1 ms, sys: 16 µs, total: 68.1 ms Wall time: 62.9 ms** It seems this might relate to the tokenizer save or load function, however, the issue appears to come up when I apply the loaded tokenizer to the map function. I should also add that playing around with the amount of words I add to the tokenizer before I save it to disk and load it into memory appears to impact the time it takes to run the map function.
https://api.github.com/repos/huggingface/datasets
null
802,790,075
https://api.github.com/repos/huggingface/datasets/issues/1830/comments
MDU6SXNzdWU4MDI3OTAwNzU=
null
1,830
https://api.github.com/repos/huggingface/datasets/issues/1830/events
false
closed
2021-02-06T12:36:25Z
null
https://api.github.com/repos/huggingface/datasets/issues/1829
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1829/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1829/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
https://github.com/huggingface/datasets/pull/1829
[]
false
2021-02-08T13:17:54Z
2021-02-08T13:17:53Z
null
[]
null
[]
Add Tweet Eval Dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1829/timeline
Closes Draft PR #1407. Notes: 1. I have excluded `mapping.txt` from the dataset as it only contained the name mappings, which are already present in the ClassLabels. 2. I have also excluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/mapping.txt). 3. I do not understand @abhishekkrthakur's example generator on #1407. Maybe he was trying to build on code from some other dataset. Requesting @lhoestq to review.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1829.diff", "html_url": "https://github.com/huggingface/datasets/pull/1829", "merged_at": "2021-02-08T13:17:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/1829.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1829" }
802,693,600
https://api.github.com/repos/huggingface/datasets/issues/1829/comments
MDExOlB1bGxSZXF1ZXN0NTY4NzgzNjA5
null
1,829
https://api.github.com/repos/huggingface/datasets/issues/1829/events
true
closed
2021-02-05T20:20:55Z
null
https://api.github.com/repos/huggingface/datasets/issues/1828
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1828/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1828/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
https://github.com/huggingface/datasets/pull/1828
[]
false
2021-02-18T14:17:07Z
2021-02-18T14:17:07Z
null
[ "Hi @gchhablani! Thanks for all the contributions! We definitely want more image datasets, but Face datasets are tricky in general, in this one includes predicting attributes such as Attractiveness, Gender, or Race, which can be pretty problematic.\r\n\r\nWould you be up for starting with only object classification or object detection datasets instead? (Your CIFAR-100 contribution will be super useful for example!)", "Hi @yjernite, You're welcome. I am enjoying adding new datasets :)\r\nBy \"pretty problematic\", are you referring to the ethical issues? I used TFDS's [CelebA](https://github.com/tensorflow/datasets/blob/5ef7861470896acb6f74dacba85036001e4f1b8c/tensorflow_datasets/image/celeba.py#L91) as a reference. Here they mention in a \"Note\" that CelebA \"may contain potential bias\". Can we not do the same? I skipped the note for now, and we can add it. However, if you feel this isn't the right time, then I won't pursue this further. \r\n\r\nBut, can this issue be handled at a later stage? Does this also apply for my Hateful Memes Issue #1810?\r\n\r\nAlso, how can I \r\n1. load a part of the dataset? since `load_dataset(<>,split='train[10:20]')` still loads all the examples.\r\n2. make `datasets_infos.json` for huge datasets which have a single configuration?\r\n\r\nI will ofcourse be looking for other datasets to add regardless. \r\n", "It's definitely a thorny question. The short answer is: Hateful Memes and hate speech detection datasets are different since their use case is specifically to train systems to identify and hopefully remove hateful content, whereas the purpose of a dataset that has an Attractiveness score as output is implicitly to train more models to rate \"Attractiveness\". \r\n\r\nAs far as warning about the \"potential biases\", I do not think it is quite enough, especially because it is hard to guarantee that every potential user will read the documentation (it is also an insufficient warning.)\r\n\r\nNote that we do have higher standards for the dataset cards of hate speech and hateful memes datasets, so if you do choose to add that one yourself we will ask that you summarize the relevant literature in the Social Impact section.\r\n\r\nIf you really need to add this dataset for your own research for the explicit purpose of studying these biases, you can add it as a community provided dataset following https://huggingface.co/docs/datasets/master/share_dataset.html#sharing-a-community-provided-dataset but I'd recommend just skipping it for now.", "So currently you do need to download the whole dataset when using it, we are working on making it easier to stream parts of it from a remote host. You can also use the filesystem integration if local storage is an issue:\r\nhttps://huggingface.co/docs/datasets/master/filesystems.html\r\n", "I don't think we have a great solution for `dataset_infos.json` with a single very large config when storage space is an issue, but it should be solved by the same upcoming feature mentioned above", "Okay, then I won't pursue this one further. I'll keep this branch on my repository just in case the possibility of adding this dataset comes up in the future.\r\n\r\n> So currently you do need to download the whole dataset when using it, we are working on making it easier to stream parts of it from a remote host. 
You can also use the filesystem integration if local storage is an issue:\r\n> https://huggingface.co/docs/datasets/master/filesystems.html\r\n\r\nAfter downloading the whole dataset (around 1.4GB), it still loads all the examples despite using `split='train[:10%]'` or `split='train[10:20]'`. \r\n\r\nEDIT: I think this would happen only when the examples are generated for the first time and saved to the cache. Streaming parts of the data from a remote host sounds amazing! But, would that also allow for streaming examples of the data from the local cache? (without saving all the examples the first time).\r\n\r\nWhat I used:\r\n`d = load_dataset('./datasets/celeb_a',split='train[:10]')`\r\nOutput:\r\n`570 examples [01:33, 6.25 examples/s]` and it keeps going. \r\n\r\nEDIT 2: After a few thousand images, I get the following error:\r\n```python\r\nOSError: [Errno 24] Too many open files: '~/.cache/huggingface/datasets/celeb_a/default/1.1.0/01f9dca66039ab7c40b91b09af47a5fa8c3e49dc8d55df50da55b14116229207.incomplete'\r\n```\r\nI understand this is because of the way I load the images :\r\n```python\r\nImage.open(<path>)\r\n```\r\nWhat could be better alternative? I am only asking in case I face the same issues in the future.", "Just some addition about loading only a subset of the data:\r\nCurrently if even you specify `split='train[:10]'`, it downloads and generate the full dataset, so that you can pick another part afterward if you want to. We may change that in the future and use streaming.\r\n\r\nAnd about your open files issue, you can try to close each image file after reading its content.", "Hi @lhoestq,\r\nThanks for your response.\r\n\r\nI used `gc.collect()` inside the loop and that worked for me. I think since we are using a generator, and if I have something like `train[100000:100002]`, we will need to generate the first 1000001 examples and store. Ofcourse, this feature isn't a necessity right now, I suppose.", "Closing this PR." ]
null
[]
Add CelebA Dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1828/timeline
Trying to add CelebA Dataset. Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`. Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]')` still loads all the examples (doesn't stop at 10).
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1828.diff", "html_url": "https://github.com/huggingface/datasets/pull/1828", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1828.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1828" }
802,449,234
https://api.github.com/repos/huggingface/datasets/issues/1828/comments
MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2
null
1,828
https://api.github.com/repos/huggingface/datasets/issues/1828/events
true
closed
2021-02-05T17:43:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/1827
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1827/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1827/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
https://github.com/huggingface/datasets/issues/1827
[]
false
2021-02-18T13:55:16Z
2021-02-18T13:55:16Z
null
[ "Possible duplicate\r\n\r\n#1776 https://github.com/huggingface/datasets/issues/\r\n\r\nreally looking PR for this feature", "Hi @acul3 \r\n\r\nIssue #1776 talks about doing on-the-fly data pre-processing, which I think is solved in the next release as mentioned in the issue #1825. I also look forward to using this feature, though :)\r\n\r\nI wanted to ask about on-the-fly data loading from the cache (before pre-processing).", "Hi ! Currently when you load a dataset via `load_dataset` for example, then the dataset is memory-mapped from an Arrow file on disk. Therefore there's almost no RAM usage even if your dataset contains TB of data.\r\nUsually at training time only one batch of data at a time is loaded in memory.\r\n\r\nDoes that answer your question or were you thinking about something else ?", "Hi @lhoestq,\r\n\r\nI apologize for the late response. This answers my question. Thanks a lot." ]
completed
[]
Regarding On-the-fly Data Loading
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1827/timeline
Hi, I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset into RAM at any given point. Thanks, Gunjan
https://api.github.com/repos/huggingface/datasets
null
802,353,974
https://api.github.com/repos/huggingface/datasets/issues/1827/comments
MDU6SXNzdWU4MDIzNTM5NzQ=
null
1,827
https://api.github.com/repos/huggingface/datasets/issues/1827/events
false
closed
2021-02-05T11:07:59Z
null
https://api.github.com/repos/huggingface/datasets/issues/1826
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1826/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1826/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/1826
[]
false
2021-02-09T17:39:27Z
2021-02-09T17:39:27Z
null
[]
null
[]
Print error message with filename when malformed CSV
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1826/timeline
Print an error message specifying the filename when a CSV file is malformed. Close #1821
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1826.diff", "html_url": "https://github.com/huggingface/datasets/pull/1826", "merged_at": "2021-02-09T17:39:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1826.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1826" }
802,074,744
https://api.github.com/repos/huggingface/datasets/issues/1826/comments
MDExOlB1bGxSZXF1ZXN0NTY4Mjc4OTI2
null
1,826
https://api.github.com/repos/huggingface/datasets/issues/1826/events
true
closed
2021-02-05T11:06:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/1825
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1825/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/1825/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avacaondata", "id": 35173563, "login": "avacaondata", "node_id": "MDQ6VXNlcjM1MTczNTYz", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "repos_url": "https://api.github.com/users/avacaondata/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "type": "User", "url": "https://api.github.com/users/avacaondata" }
https://github.com/huggingface/datasets/issues/1825
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2021-03-30T14:04:01Z
2021-03-16T09:44:00Z
null
[ "Hi ! Looks related to #861 \r\n\r\nYou are right: tokenizing a dataset using map takes a lot of space since it can store `input_ids` but also `token_type_ids`, `attention_mask` and `special_tokens_mask`. Moreover if your tokenization function returns python integers then by default they'll be stored as int64 which can take a lot of space. Padding can also increase the size of the tokenized dataset.\r\n\r\nTo make things more convenient, we recently added a \"lazy map\" feature that allows to tokenize each batch at training time as you mentioned. For example you'll be able to do\r\n```python\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\n\r\ndef encode(batch):\r\n return tokenizer(batch[\"text\"], padding=\"longest\", truncation=True, max_length=512, return_tensors=\"pt\")\r\n\r\ndataset.set_transform(encode)\r\nprint(dataset.format)\r\n# {'type': 'custom', 'format_kwargs': {'transform': <function __main__.encode(batch)>}, 'columns': ['idx', 'label', 'sentence1', 'sentence2'], 'output_all_columns': False}\r\nprint(dataset[:2])\r\n# {'input_ids': tensor([[ 101, 2572, 3217, ... 102]]), 'token_type_ids': tensor([[0, 0, 0, ... 0]]), 'attention_mask': tensor([[1, 1, 1, ... 1]])}\r\n\r\n```\r\nIn this example the `encode` transform is applied on-the-fly on the \"text\" column.\r\n\r\nThis feature will be available in the next release 2.0 which will happen in a few days.\r\nYou can already play with it by installing `datasets` from source if you want :)\r\n\r\nHope that helps !", "How recently was `set_transform` added? I am actually trying to implement it and getting an error:\r\n\r\n`AttributeError: 'Dataset' object has no attribute 'set_transform'\r\n`\r\n\r\nI'm on v.1.2.1.\r\n\r\nEDIT: Oh, wait I see now it's in the v.2.0. Whoops! This should be really useful.", "Yes indeed it was added a few days ago. The code is available on master\r\nWe'll do a release next week :)\r\n\r\nFeel free to install `datasets` from source to try it out though, I would love to have some feedbacks", "For information: it's now available in `datasets` 1.3.0.\r\nThe 2.0 is reserved for even cooler features ;)", "Hi @alexvaca0 , we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs." ]
completed
[]
Datasets library not suitable for huge text datasets.
NONE
https://api.github.com/repos/huggingface/datasets/issues/1825/timeline
Hi, I'm trying to use the datasets library to load a 187 GB dataset of pure text, with the intention of building a language model. The problem is that the 187 GB grows to several TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really designed for datasets this big, but rather for fine-tuning datasets, as this process alone takes a lot of time, usually on expensive machines (due to the need for TPUs/GPUs) that are not being used for training in the meantime. It would possibly be more efficient in such cases to tokenize each batch at training time (receive batch - tokenize batch - train with batch), so that the machine is used for training the whole time it is up. Moreover, the pyarrow objects created from a 187 GB dataset are huge; we always receive OOM or "No space left on device" errors when only 10-12% of the dataset has been processed, and that part alone occupies 2.1 TB on disk, which is many times the disk usage of the pure text (and this doesn't make sense, as tokenized text should be lighter than pure text). Any suggestions?
https://api.github.com/repos/huggingface/datasets
null
802,073,925
https://api.github.com/repos/huggingface/datasets/issues/1825/comments
MDU6SXNzdWU4MDIwNzM5MjU=
null
1,825
https://api.github.com/repos/huggingface/datasets/issues/1825/events
false
closed
2021-02-05T10:30:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/1824
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/1824/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1824/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/1824
[]
false
2021-05-05T18:24:14Z
2021-02-08T11:30:33Z
null
[ "Hi @lhoestq! When are you planning to release the version with this dataset?\r\n\r\nBTW: What a huge README file :astonished:", "Next week !", "Closing in favor of #1833" ]
null
[]
Add OSCAR dataset card
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/1824/timeline
I started adding the dataset card for OSCAR! For now it's just basic info for all the different configurations in `Dataset Structure`. In particular, the Data Splits section tells how many samples there are for each config. The Data Instances section shows an example for each config, and it also shows the size in MB. Since the Data Instances section is very long, the user has to click to expand the info. I was able to generate it thanks to the tools made by @madlag and @yjernite :D Cc @pjox could you help me with the other sections? (Dataset Description, Dataset Creation, Considerations for Using the Data, Additional Information)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1824.diff", "html_url": "https://github.com/huggingface/datasets/pull/1824", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1824.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1824" }
802,048,281
https://api.github.com/repos/huggingface/datasets/issues/1824/comments
MDExOlB1bGxSZXF1ZXN0NTY4MjU3MTU3
null
1,824
https://api.github.com/repos/huggingface/datasets/issues/1824/events
true
closed
2021-02-05T10:22:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/1823
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1823/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1823/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
https://github.com/huggingface/datasets/pull/1823
[]
false
2021-03-01T11:56:20Z
2021-03-01T10:21:39Z
null
[ "Hi @lhoestq,\r\n\r\nSorry for the late response. What do you mean when you say \"adding names to default config\"? Should I handle \"pid2name\" in the same config as \"default\"?", "Yes I was thinking of having the pid2name field available in the default configuration (and therefore only have one config). What do you think ?", "Hi @lhoestq,\r\n\r\nSorry again, the last couple of weeks were a bit busy for me. I am wondering how do you want me to achieve that. Using a custom BuilderConfig which takes in whether it is the regular data or \"pid2name\"? \"pid2name\" is only useful for \"train_wiki\", \"val_nyt\" and \"val_wiki\". So, based on my understanding, it would look like this:\r\n\r\n```python\r\nwiki_data = load_dataset('few_rel','train_wiki')\r\nid2name = load_dataset('few_rel','pid2name')\r\n```\r\nand this will be handled in the multiple configs.\r\n\r\n\r\nA better alternative could be providing name of the relationship in only \"train_wiki\", \"val_nyt\" and \"val_wiki\" as an extra feature in the dataset, and doing away with \"pid2name\" entirely. I'll only download pid2name if any of those datasets are requested, and then during generation I'll return the list with the dataset under \"names\" feature. How does this sound?\r\n\r\nEDIT:\r\nThere is one issue with the second approach, the entire pid2name is saved with all three datasets - \"train_wiki\", \"val_nyt\" and \"val_wiki\" ([see code below](https://github.com/huggingface/datasets/pull/1823#issuecomment-786402026)). In dummy data, I can address this by manually editing the pid2name to contain only a few id-name pairs, those matching with the examples in the corresponding example file. But this seems to be inefficient for the entire dataset - storing the same file in multiple places.", "Okay, I apologize, I guess I finally understand what is required.\r\n\r\nBasically, using:\r\n\r\n```python\r\nfew_rel = load_dataset('few_rel')\r\n```\r\nshould give all the files. This seems difficult since \"pid2name\" has a different format. Any suggestions on this?", "Yes that's it, sorry if that wasn't clear !", "Hi @lhoestq,\n\nSince pid2name has different features from the rest of the files, how will I add them to the same config?\n\nDo we want to exclude pid2name totally and add \"names\" to every example?", "If I understand correctly each sample in the \"default\" config has one relation, and each relation has corresponding names in pid2name.\r\nWould it be possible to also include the names in the \"default\" configuration for each sample ? The names of one sample can be retrieved using the relation id no ?", "Yes, that can be done. But for some files, the name is already given instead of ID. Only \"train_wiki\", \"val_wiki\", \"val_nyc\" have IDs. For others, I can set the names equal to a list of key.", "I think that's fine as long as we mention this processing explicitly in the dataset card.", "Hi @lhoestq,\r\n\r\nI have added the changes. Please let me know in case of any remaining issues.\r\n\r\nThanks,\r\nGunjan", "Hi @lhoestq,\r\n\r\nThanks for fixing it and approving :)" ]
null
[]
Add FewRel Dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1823/timeline
Hi, This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757. I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key as `"relation"` in the dataset. Additionally, for `pubmed_unsupervised`, I kept `"relation":""` in the dictionary. Please recommend better alternatives, if any. Thanks, Gunjan
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1823.diff", "html_url": "https://github.com/huggingface/datasets/pull/1823", "merged_at": "2021-03-01T10:21:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/1823.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1823" }
802,042,181
https://api.github.com/repos/huggingface/datasets/issues/1823/comments
MDExOlB1bGxSZXF1ZXN0NTY4MjUyMjIx
null
1,823
https://api.github.com/repos/huggingface/datasets/issues/1823/events
true
closed
2021-02-05T09:30:54Z
null
https://api.github.com/repos/huggingface/datasets/issues/1822
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1822/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1822/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/33565881?v=4", "events_url": "https://api.github.com/users/avinsit123/events{/privacy}", "followers_url": "https://api.github.com/users/avinsit123/followers", "following_url": "https://api.github.com/users/avinsit123/following{/other_user}", "gists_url": "https://api.github.com/users/avinsit123/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avinsit123", "id": 33565881, "login": "avinsit123", "node_id": "MDQ6VXNlcjMzNTY1ODgx", "organizations_url": "https://api.github.com/users/avinsit123/orgs", "received_events_url": "https://api.github.com/users/avinsit123/received_events", "repos_url": "https://api.github.com/users/avinsit123/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avinsit123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinsit123/subscriptions", "type": "User", "url": "https://api.github.com/users/avinsit123" }
https://github.com/huggingface/datasets/pull/1822
[]
false
2021-02-15T09:57:39Z
2021-02-15T09:57:39Z
null
[ "Could you also run `make style` to fix the CI check on code formatting ?", "@lhoestq completed and resolved all comments." ]
null
[]
Add Hindi Discourse Analysis Natural Language Inference Dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/1822/timeline
# Dataset Card for Hindi Discourse Analysis Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - HomePage : https://github.com/midas-research/hindi-nli-data - Paper : https://www.aclweb.org/anthology/2020.aacl-main.71 - Point of Contact : https://github.com/midas-research/hindi-nli-data ### Dataset Summary - Dataset for Natural Language Inference in Hindi Language. Hindi Discourse Analysis (HDA) Dataset consists of textual-entailment pairs. - Each row of the Datasets if made up of 4 columns - Premise, Hypothesis, Label and Topic. - Premise and Hypothesis is written in Hindi while Entailment_Label is in English. - Entailment_label is of 2 types - entailed and not-entailed. - Entailed means that hypotheis can be inferred from premise and not-entailed means vice versa - Dataset can be used to train models for Natural Language Inference tasks in Hindi Language. ### Supported Tasks and Leaderboards - Natural Language Inference for Hindi ### Languages - Dataset is in Hindi ## Dataset Structure - Data is structured in TSV format. - train, test and dev files are in seperate files ### Dataset Instances An example of 'train' looks as follows. ``` {'hypothesis': 'यह एक वर्णनात्मक कथन है।', 'label': 1, 'premise': 'जैसे उस का सारा चेहरा अपना हो और आँखें किसी दूसरे की जो चेहरे पर पपोटों के पीछे महसूर कर दी गईं।', 'topic': 1} ``` ### Data Fields - Each row contatins 4 columns - premise, hypothesis, label and topic. ### Data Splits - Train : 31892 - Valid : 9460 - Test : 9970 ## Dataset Creation - We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets in Hindi and pose them as TE problems - In this recasting process, we build template hypotheses for each class in the label taxonomy - Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples. - For more information on the recasting process, refer to paper https://www.aclweb.org/anthology/2020.aacl-main.71 ### Source Data Source Dataset for the recasting process is the BBC Hindi Headlines Dataset(https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1) #### Initial Data Collection and Normalization - Initial Data was collected by members of MIDAS Lab from Hindi Websites. They crowd sourced the data annotation process and selected two random stories from our corpus and had the three annotators work on them independently and classify each sentence based on the discourse mode. 
- Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ - The Discourse is further classified into "Argumentative" , "Descriptive" , "Dialogic" , "Informative" and "Narrative" - 5 Clases. #### Who are the source language producers? Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ ### Annotations #### Annotation process Annotation process has been described in Dataset Creation Section. #### Who are the annotators? Annotation is done automatically by machine and corresponding recasting process. ### Personal and Sensitive Information No Personal and Sensitive Information is mentioned in the Datasets. ## Considerations for Using the Data Pls refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Discussion of Biases No known bias exist in the dataset. Pls refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Other Known Limitations No other known limitations . Size of data may not be enough to train large models ## Additional Information Pls refer to this link: https://github.com/midas-research/hindi-nli-data ### Dataset Curators It is written in the repo : https://github.com/midas-research/hindi-nli-data that - This corpus can be used freely for research purposes. - The paper listed below provide details of the creation and use of the corpus. If you use the corpus, then please cite the paper. - If interested in commercial use of the corpus, send email to midas@iiitd.ac.in. - If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Rather than redistributing the corpus, please direct interested parties to this page - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your data for natural language inference. - if interested in a collaborative research project. ### Licensing Information Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi). Pls contact authors for any information on the dataset. ### Citation Information ``` @inproceedings{uppal-etal-2020-two, title = "Two-Step Classification using Recasted Data for Low Resource Settings", author = "Uppal, Shagun and Gupta, Vivek and Swaminathan, Avinash and Zhang, Haimin and Mahata, Debanjan and Gosangi, Rakesh and Shah, Rajiv Ratn and Stent, Amanda", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.aacl-main.71", pages = "706--719", abstract = "An NLP model{'}s ability to reason should be independent of language. 
Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.", } ``` ### Contributions Thanks to [@avinsit123](https://github.com/avinsit123) for adding this dataset.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/1822.diff", "html_url": "https://github.com/huggingface/datasets/pull/1822", "merged_at": "2021-02-15T09:57:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/1822.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1822" }
802,003,835
https://api.github.com/repos/huggingface/datasets/issues/1822/comments
MDExOlB1bGxSZXF1ZXN0NTY4MjIxMzIz
null
1,822
https://api.github.com/repos/huggingface/datasets/issues/1822/events
true