url
stringlengths 58
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 72
75
| comments_url
stringlengths 67
70
| events_url
stringlengths 65
68
| html_url
stringlengths 46
51
| id
int64 599M
1.79B
| node_id
stringlengths 18
32
| number
int64 1
6.01k
| title
stringlengths 1
290
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
list | comments
sequence | created_at
int64 1,587B
1,689B
| updated_at
int64 1,588B
1,689B
| closed_at
int64 1,587B
1,689B
β | author_association
stringclasses 3
values | active_lock_reason
null | body
stringlengths 0
228k
β | reactions
dict | timeline_url
stringlengths 67
70
| performed_via_github_app
null | state_reason
stringclasses 3
values | draft
float64 0
1
β | pull_request
dict | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/98 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/98/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/98/comments | https://api.github.com/repos/huggingface/datasets/issues/98/events | https://github.com/huggingface/datasets/pull/98 | 617,957,739 | MDExOlB1bGxSZXF1ZXN0NDE3Nzc3NDcy | 98 | Webis tl-dr | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?",
"> Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?\r\n\r\nI'm a bit indifferent - both would be fine for me!",
"@jplu - if creating the dummy_data is too tedious, I can do it as well :-) ",
"There is dummy_data here, no ?",
"Yeah I think naming it webis/tl_dr would be best @jplu if that works for you",
"No problem at all!! On it^^",
"> There is dummy_data here, no ?\r\n\r\nSome paths were wrong - the structure is really confusing and the error messages don't really help either - I have to think about how to make this easier to understand!\r\n\r\nHope it was ok that I fiddled with your PR !",
"> Some paths were wrong - the structure is really confusing and the error message don't really help either - I have to think about how to make this easier to understand!\r\n\r\nOh ok! I haven't noticed that sorry :(\r\n\r\n> Hope it was ok that I fiddled with your PR !\r\n\r\nOf course it was ok :)",
"@julien-c Looks like what you have in mind?\r\n\r\n```python\r\nimport nlp\r\nnlp.load_dataset(\"datasets/webis\", \"tl_dr\")\r\n\r\n#Output: Downloading and preparing dataset webis/tl_dr (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/jplu/.cache/huggingface/datasets/webis/tl_dr/1.0.0...\r\n```",
"Merging this for now. Maybe we can see whether to rename it in a different PR @julien-c ? \r\n",
"Hi, \r\nAuthor here of the webis-tldr corpus. Any plans on integrating this dataset into the hub? I remember we could access it in the previous versions of the library. If there is a particular issue that I can help with, do let me know.\r\n\r\nThanks!",
"Hi @shahbazsyed, this dataset _is_ inside the hub but it's namespaced by the organization name `webis`.\r\n\r\nYou can load it following the steps described in https://huggingface.co/datasets/webis/tl_dr\r\n\r\nHere's a Colab showcasing that it works: https://colab.research.google.com/drive/11IrzRVpnMLJZ8_UFFHLR8FhiajjAHRUU?usp=sharing\r\n\r\nThe reason the code is in S3 and not in this repo is that the dataset is namespaced under the `webis` organization. We don't have a lot of namespaced datasets yet but this should become the main way we add more datasets in the future.\r\nLet us know if that's an issue for you. Thank you!"
] | 1,589,437,338,000 | 1,599,127,221,000 | 1,589,489,656,000 | CONTRIBUTOR | null | Add the Webid TL:DR dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/98/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/98/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/98",
"html_url": "https://github.com/huggingface/datasets/pull/98",
"diff_url": "https://github.com/huggingface/datasets/pull/98.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/98.patch",
"merged_at": "2020-05-14T20:54:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/97 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/97/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/97/comments | https://api.github.com/repos/huggingface/datasets/issues/97/events | https://github.com/huggingface/datasets/pull/97 | 617,809,431 | MDExOlB1bGxSZXF1ZXN0NDE3NjU4MDcy | 97 | [Csv] add tests for csv dataset script | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@thomwolf - can you check and merge if ok? "
] | 1,589,411,171,000 | 1,589,412,196,000 | 1,589,412,195,000 | MEMBER | null | Adds dummy data tests for csv. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/97/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/97/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/97",
"html_url": "https://github.com/huggingface/datasets/pull/97",
"diff_url": "https://github.com/huggingface/datasets/pull/97.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/97.patch",
"merged_at": "2020-05-13T23:23:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/96 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/96/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/96/comments | https://api.github.com/repos/huggingface/datasets/issues/96/events | https://github.com/huggingface/datasets/pull/96 | 617,739,521 | MDExOlB1bGxSZXF1ZXN0NDE3NjAwMjY4 | 96 | lm1b | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I might have a different version of `isort` than others. It seems like I'm always reordering the imports of others. But isn't really a problem..."
] | 1,589,402,324,000 | 1,589,465,610,000 | 1,589,465,609,000 | CONTRIBUTOR | null | Add lm1b dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/96/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/96/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/96",
"html_url": "https://github.com/huggingface/datasets/pull/96",
"diff_url": "https://github.com/huggingface/datasets/pull/96.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/96.patch",
"merged_at": "2020-05-14T14:13:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/95 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/95/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/95/comments | https://api.github.com/repos/huggingface/datasets/issues/95/events | https://github.com/huggingface/datasets/pull/95 | 617,703,037 | MDExOlB1bGxSZXF1ZXN0NDE3NTY5NzA4 | 95 | Replace checksums files by Dataset infos json | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great! LGTM :-) ",
"> Ok, really clean!\r\n> I like the logic (not a huge fan of using `_asdict_inner` but it makes sense).\r\n> I think it's a nice improvement!\r\n> \r\n> How should we update the files in the repo? Run a big job on a server or on somebody's computer who has most of the datasets already downloaded?\r\n\r\nMaybe we can split the updates among us...IMO most datasets run very quickly. \r\nI think I've downloaded 50 datasets and 80% are loaded in <5min, 15% in <1h and then `wmt` which is still downloading (since 12h). \r\nI deleted my cache because the `wmt` downloads require quite a lot of space, so I only have parts of the `wmt` datasets on my computer. \r\n\r\n@mariamabarham I guess you have downloaded most of the datasets no? "
] | 1,589,398,576,000 | 1,589,446,723,000 | 1,589,446,722,000 | MEMBER | null | ### Better verifications when loading a dataset
I replaced the `urls_checksums` directory that used to contain `checksums.txt` and `cached_sizes.txt`, by a single file `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`.
It simplifies and improves how verifications of checksums and splits sizes are done, as they're all stored in `DatasetInfo` (one per config). Also, having already access to `DatasetInfo` enables to check disk space before running `download_and_prepare` for a given config.
The dataset infos json file is user readable, you can take a look at the squad one that I generated in this PR.
### Renaming
According to these changes, I did some renaming:
`save_checksums` -> `save_infos`
`ignore_checksums` -> `ignore_verifications`
for example, when you are creating a dataset you have to run
```nlp-cli test path/to/my/dataset --save_infos --all_configs```
instead of
```nlp-cli test path/to/my/dataset --save_checksums --all_configs```
### And now, the fun part
We'll have to rerun the `nlp-cli test ... --save_infos --all_configs` for all the datasets
-----------------
feedback appreciated ! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/95/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/95/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/95",
"html_url": "https://github.com/huggingface/datasets/pull/95",
"diff_url": "https://github.com/huggingface/datasets/pull/95.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/95.patch",
"merged_at": "2020-05-14T08:58:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/94 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/94/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/94/comments | https://api.github.com/repos/huggingface/datasets/issues/94/events | https://github.com/huggingface/datasets/pull/94 | 617,571,340 | MDExOlB1bGxSZXF1ZXN0NDE3NDYyMTIw | 94 | Librispeech | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@jplu - I changed this weird archieve - iter method to something simpler. It's only one file to download anyways so I don't see the point of using weird iter methods...It's a huge file though :D 30 million lines of text. Took me quite some time to download :D "
] | 1,589,385,854,000 | 1,589,405,343,000 | 1,589,405,342,000 | CONTRIBUTOR | null | Add librispeech dataset and remove some useless content. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/94/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/94/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/94",
"html_url": "https://github.com/huggingface/datasets/pull/94",
"diff_url": "https://github.com/huggingface/datasets/pull/94.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/94.patch",
"merged_at": "2020-05-13T21:29:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/93 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/93/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/93/comments | https://api.github.com/repos/huggingface/datasets/issues/93/events | https://github.com/huggingface/datasets/pull/93 | 617,522,029 | MDExOlB1bGxSZXF1ZXN0NDE3NDIxODUy | 93 | Cleanup notebooks and various fixes | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,381,938,000 | 1,589,382,108,000 | 1,589,382,107,000 | MEMBER | null | Fixes on dataset (more flexible) metrics (fix) and general clean ups | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/93/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/93/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/93",
"html_url": "https://github.com/huggingface/datasets/pull/93",
"diff_url": "https://github.com/huggingface/datasets/pull/93.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/93.patch",
"merged_at": "2020-05-13T15:01:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/92 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/92/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/92/comments | https://api.github.com/repos/huggingface/datasets/issues/92/events | https://github.com/huggingface/datasets/pull/92 | 617,341,505 | MDExOlB1bGxSZXF1ZXN0NDE3Mjc1ODky | 92 | [WIP] add wmt14 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,366,523,000 | 1,589,627,858,000 | 1,589,627,857,000 | MEMBER | null | WMT14 takes forever to download :-/
- WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/92/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/92/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/92",
"html_url": "https://github.com/huggingface/datasets/pull/92",
"diff_url": "https://github.com/huggingface/datasets/pull/92.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/92.patch",
"merged_at": "2020-05-16T11:17:37"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/91 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/91/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/91/comments | https://api.github.com/repos/huggingface/datasets/issues/91/events | https://github.com/huggingface/datasets/pull/91 | 617,339,484 | MDExOlB1bGxSZXF1ZXN0NDE3Mjc0MjA0 | 91 | [Paracrawl] add paracrawl | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,366,340,000 | 1,589,366,415,000 | 1,589,366,414,000 | MEMBER | null | - Huge dataset - took ~1h to download
- Also this PR reformats all dataset scripts and adds `datasets` to `make style` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/91/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/91/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/91",
"html_url": "https://github.com/huggingface/datasets/pull/91",
"diff_url": "https://github.com/huggingface/datasets/pull/91.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/91.patch",
"merged_at": "2020-05-13T10:40:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/90 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/90/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/90/comments | https://api.github.com/repos/huggingface/datasets/issues/90/events | https://github.com/huggingface/datasets/pull/90 | 617,311,877 | MDExOlB1bGxSZXF1ZXN0NDE3MjUxODE0 | 90 | Add download gg drive | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"awesome - so no manual downloaded needed here? ",
"Yes exactly. It works like a standard download"
] | 1,589,363,762,000 | 1,589,373,988,000 | 1,589,364,331,000 | MEMBER | null | We can now add datasets that download from google drive | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/90/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/90/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/90",
"html_url": "https://github.com/huggingface/datasets/pull/90",
"diff_url": "https://github.com/huggingface/datasets/pull/90.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/90.patch",
"merged_at": "2020-05-13T10:05:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/89 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/89/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/89/comments | https://api.github.com/repos/huggingface/datasets/issues/89/events | https://github.com/huggingface/datasets/pull/89 | 617,295,069 | MDExOlB1bGxSZXF1ZXN0NDE3MjM4MjU4 | 89 | Add list and inspect methods - cleanup hf_api | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,362,215,000 | 1,589,378,700,000 | 1,589,362,390,000 | MEMBER | null | Add a bunch of methods to easily list and inspect the processing scripts up-loaded on S3:
```python
nlp.list_datasets()
nlp.list_metrics()
# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_dataset(path, local_path)
# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_metric(path, local_path)
```
Also clean up the `HfAPI` to use `dataclasses` for better user-experience | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/89/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/89/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/89",
"html_url": "https://github.com/huggingface/datasets/pull/89",
"diff_url": "https://github.com/huggingface/datasets/pull/89.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/89.patch",
"merged_at": "2020-05-13T09:33:10"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/88 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/88/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/88/comments | https://api.github.com/repos/huggingface/datasets/issues/88/events | https://github.com/huggingface/datasets/pull/88 | 617,284,664 | MDExOlB1bGxSZXF1ZXN0NDE3MjI5ODQw | 88 | Add wiki40b | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks good to me. I have not really looked too much into the Beam Datasets yet though - so I think you can merge whenever you think is good for Beam datasets :-) "
] | 1,589,361,361,000 | 1,589,373,115,000 | 1,589,373,114,000 | MEMBER | null | This one is a beam dataset that downloads files using tensorflow.
I tested it on a small config and it works fine | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/88/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/88/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/88",
"html_url": "https://github.com/huggingface/datasets/pull/88",
"diff_url": "https://github.com/huggingface/datasets/pull/88.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/88.patch",
"merged_at": "2020-05-13T12:31:54"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/87 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/87/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/87/comments | https://api.github.com/repos/huggingface/datasets/issues/87/events | https://github.com/huggingface/datasets/pull/87 | 617,267,118 | MDExOlB1bGxSZXF1ZXN0NDE3MjE1NzA0 | 87 | Add Flores | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,359,889,000 | 1,589,361,814,000 | 1,589,361,813,000 | MEMBER | null | Beautiful language for sure! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/87/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/87/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/87",
"html_url": "https://github.com/huggingface/datasets/pull/87",
"diff_url": "https://github.com/huggingface/datasets/pull/87.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/87.patch",
"merged_at": "2020-05-13T09:23:33"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/86 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/86/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/86/comments | https://api.github.com/repos/huggingface/datasets/issues/86/events | https://github.com/huggingface/datasets/pull/86 | 617,260,972 | MDExOlB1bGxSZXF1ZXN0NDE3MjEwNzY2 | 86 | [Load => load_dataset] change naming | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,359,380,000 | 1,589,359,858,000 | 1,589,359,857,000 | MEMBER | null | Rename leftovers @thomwolf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/86/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/86/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/86",
"html_url": "https://github.com/huggingface/datasets/pull/86",
"diff_url": "https://github.com/huggingface/datasets/pull/86.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/86.patch",
"merged_at": "2020-05-13T08:50:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/85 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/85/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/85/comments | https://api.github.com/repos/huggingface/datasets/issues/85/events | https://github.com/huggingface/datasets/pull/85 | 617,253,428 | MDExOlB1bGxSZXF1ZXN0NDE3MjA0ODA4 | 85 | Add boolq | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Awesome :-) Thanks for adding the function to the Mock DL Manager"
] | 1,589,358,747,000 | 1,589,360,979,000 | 1,589,360,978,000 | MEMBER | null | I just added the dummy data for this dataset.
This one was uses `tf.io.gfile.copy` to download the data but I added the support for custom download in the mock_download_manager. I also had to add a `tensorflow` dependency for tests. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/85/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/85/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/85",
"html_url": "https://github.com/huggingface/datasets/pull/85",
"diff_url": "https://github.com/huggingface/datasets/pull/85.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/85.patch",
"merged_at": "2020-05-13T09:09:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/84 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/84/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/84/comments | https://api.github.com/repos/huggingface/datasets/issues/84/events | https://github.com/huggingface/datasets/pull/84 | 617,249,815 | MDExOlB1bGxSZXF1ZXN0NDE3MjAxODcz | 84 | [TedHrLr] add left dummy data | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,358,440,000 | 1,589,358,562,000 | 1,589,358,561,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/84/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/84/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/84",
"html_url": "https://github.com/huggingface/datasets/pull/84",
"diff_url": "https://github.com/huggingface/datasets/pull/84.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/84.patch",
"merged_at": "2020-05-13T08:29:21"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/83 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/83/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/83/comments | https://api.github.com/repos/huggingface/datasets/issues/83/events | https://github.com/huggingface/datasets/pull/83 | 616,863,601 | MDExOlB1bGxSZXF1ZXN0NDE2ODkyOTUz | 83 | New datasets | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,307,747,000 | 1,589,307,767,000 | 1,589,307,765,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/83/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/83/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/83",
"html_url": "https://github.com/huggingface/datasets/pull/83",
"diff_url": "https://github.com/huggingface/datasets/pull/83.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/83.patch",
"merged_at": "2020-05-12T18:22:45"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/82 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/82/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/82/comments | https://api.github.com/repos/huggingface/datasets/issues/82/events | https://github.com/huggingface/datasets/pull/82 | 616,805,194 | MDExOlB1bGxSZXF1ZXN0NDE2ODQ1Njc5 | 82 | [Datasets] add ted_hrlr | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,302,010,000 | 1,589,356,374,000 | 1,589,356,373,000 | MEMBER | null | @thomwolf - After looking at `xnli` I think it's better to leave the translation features and add a `translation` key to make them work in our framework.
The result looks like this:
![Screenshot from 2020-05-12 18-34-43](https://user-images.githubusercontent.com/23423619/81721933-ee1faf00-9480-11ea-9e95-d6557cbd0ce0.png)
you can see that each split has a `translation` key which value is the nlp.features.Translation object.
That's a simple change. If it's ok for you, I will add dummy data for the other configs and treat the other translation scripts in the same way. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/82/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/82/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/82",
"html_url": "https://github.com/huggingface/datasets/pull/82",
"diff_url": "https://github.com/huggingface/datasets/pull/82.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/82.patch",
"merged_at": "2020-05-13T07:52:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/81 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/81/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/81/comments | https://api.github.com/repos/huggingface/datasets/issues/81/events | https://github.com/huggingface/datasets/pull/81 | 616,793,010 | MDExOlB1bGxSZXF1ZXN0NDE2ODM1NzE1 | 81 | add tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,300,899,000 | 1,589,355,837,000 | 1,589,355,836,000 | MEMBER | null | Tests for py_utils functions and for the BaseReader used to read from arrow and parquet.
I also removed unused utils functions. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/81/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/81/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/81",
"html_url": "https://github.com/huggingface/datasets/pull/81",
"diff_url": "https://github.com/huggingface/datasets/pull/81.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/81.patch",
"merged_at": "2020-05-13T07:43:56"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/80 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/80/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/80/comments | https://api.github.com/repos/huggingface/datasets/issues/80/events | https://github.com/huggingface/datasets/pull/80 | 616,786,803 | MDExOlB1bGxSZXF1ZXN0NDE2ODMwNjk3 | 80 | Add nbytes + nexamples check | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks good to me! Should we hard code those numbers in the config classes and make sure that when loading a dataset that the numbers match? "
] | 1,589,300,323,000 | 1,589,356,354,000 | 1,589,356,353,000 | MEMBER | null | ### Save size and number of examples
Now when you do `save_checksums`, it also create `cached_sizes.txt` right next to the checksum file.
This new file stores the bytes sizes and the number of examples of each split that has been prepared and stored in the cache. Example:
```
# Cached sizes: <full_config_name> <num_bytes> <num_examples>
hansards/house/1.0.0/test 22906629 122290
hansards/house/1.0.0/train 191459584 947969
hansards/senate/1.0.0/test 5711686 25553
hansards/senate/1.0.0/train 40324278 182135
```
### Check processing output
If there is a `caches_sizes.txt`, then each time we run `download_and_prepare` it will make sure that the sizes match. You can set `ignore_checksums=True` if you don't want that to happen.
### Fill Dataset Info
All the split infos and the checksums are now stored correctly in DatasetInfo after `download_and_prepare`
### Check space on disk before running `download_and_prepare`
Check if the space is lower than the sum of the sizes of the files in `checksums.txt` and `cached_files.txt`. This is not ideal though as it considers the files for all configs.
TODO:
A better way to do it would be to have save the `DatasetInfo` instead of the `checksums.txt` and `cached_sizes.txt`, in order to have one file per dataset config (and therefore consider only the sizes of the files for one config and not all of them). It can also be the occasion to factorize all the `download_and_prepare` verifications. Maybe next PR ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/80/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/80/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/80",
"html_url": "https://github.com/huggingface/datasets/pull/80",
"diff_url": "https://github.com/huggingface/datasets/pull/80.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/80.patch",
"merged_at": "2020-05-13T07:52:33"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/79 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/79/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/79/comments | https://api.github.com/repos/huggingface/datasets/issues/79/events | https://github.com/huggingface/datasets/pull/79 | 616,785,613 | MDExOlB1bGxSZXF1ZXN0NDE2ODI5NzMy | 79 | [Convert] add new pattern | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,300,211,000 | 1,589,300,230,000 | 1,589,300,229,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/79/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/79/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/79",
"html_url": "https://github.com/huggingface/datasets/pull/79",
"diff_url": "https://github.com/huggingface/datasets/pull/79.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/79.patch",
"merged_at": "2020-05-12T16:17:09"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/78 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/78/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/78/comments | https://api.github.com/repos/huggingface/datasets/issues/78/events | https://github.com/huggingface/datasets/pull/78 | 616,774,275 | MDExOlB1bGxSZXF1ZXN0NDE2ODIwNzU5 | 78 | [Tests] skip beam dataset tests for now | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@lhoestq - I moved the wkipedia file to the \"correct\" folder. ",
"Nice thanks !"
] | 1,589,299,258,000 | 1,589,300,184,000 | 1,589,300,182,000 | MEMBER | null | For now we will skip tests for Beam Datasets | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/78/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/78/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/78",
"html_url": "https://github.com/huggingface/datasets/pull/78",
"diff_url": "https://github.com/huggingface/datasets/pull/78.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/78.patch",
"merged_at": "2020-05-12T16:16:22"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/77 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/77/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/77/comments | https://api.github.com/repos/huggingface/datasets/issues/77/events | https://github.com/huggingface/datasets/pull/77 | 616,674,601 | MDExOlB1bGxSZXF1ZXN0NDE2NzQwMjAz | 77 | New datasets | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,291,519,000 | 1,589,292,136,000 | 1,589,292,135,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/77/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/77/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/77",
"html_url": "https://github.com/huggingface/datasets/pull/77",
"diff_url": "https://github.com/huggingface/datasets/pull/77.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/77.patch",
"merged_at": "2020-05-12T14:02:15"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/76 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/76/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/76/comments | https://api.github.com/repos/huggingface/datasets/issues/76/events | https://github.com/huggingface/datasets/pull/76 | 616,579,228 | MDExOlB1bGxSZXF1ZXN0NDE2NjYyMTk2 | 76 | pin flake 8 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,282,729,000 | 1,589,282,855,000 | 1,589,282,854,000 | MEMBER | null | Flake 8's new version does not like our format. Pinning the version for now. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/76/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/76/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/76",
"html_url": "https://github.com/huggingface/datasets/pull/76",
"diff_url": "https://github.com/huggingface/datasets/pull/76.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/76.patch",
"merged_at": "2020-05-12T11:27:34"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/75 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/75/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/75/comments | https://api.github.com/repos/huggingface/datasets/issues/75/events | https://github.com/huggingface/datasets/pull/75 | 616,520,163 | MDExOlB1bGxSZXF1ZXN0NDE2NjE0MzU1 | 75 | WIP adding metrics | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It's all about my metric stuff so I'll probably merge it unless you want to have a look.\r\n\r\nTook the occasion to remove the old doc and requirements.txt"
] | 1,589,277,120,000 | 1,589,355,852,000 | 1,589,355,850,000 | MEMBER | null | Adding the following metrics as identified by @mariamabarham:
1. BLEU: BiLingual Evaluation Understudy: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/bleu.py (multilingual)
2. GLEU: Google-BLEU: https://github.com/cnap/gec-ranking/blob/master/scripts/compute_gleu
3. Sacrebleu: https://pypi.org/project/sacrebleu/1.4.8/ (pypi package), https://github.com/mjpost/sacrebleu (github implementation)
4. ROUGE: Recall-Oriented Understudy for Gisting Evaluation: https://github.com/google-research/google-research/tree/master/rouge, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/rouge.py (multilingual)
5. Seqeval: https://github.com/chakki-works/seqeval (github implementation), https://pypi.org/project/seqeval/0.0.12/ (pypi package)
6. Coval: coreference evaluation package for the CoNLL and ARRAU datasets https://github.com/ns-moosavi/coval
7. SQuAD v1 evaluation script
8. SQuAD V2 evaluation script: https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/
9. GLUE
10. XNLI
Not now:
1. Perplexity: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/perplexity.py
2. Spearman: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/spearman_correlation.py
3. F1_measure: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/f1_measure.py
4. Pearson_corelation: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/pearson_correlation.py
5. AUC: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/auc.py
6. Entropy: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/entropy.py | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/75/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/75/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/75",
"html_url": "https://github.com/huggingface/datasets/pull/75",
"diff_url": "https://github.com/huggingface/datasets/pull/75.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/75.patch",
"merged_at": "2020-05-13T07:44:10"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/74 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/74/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/74/comments | https://api.github.com/repos/huggingface/datasets/issues/74/events | https://github.com/huggingface/datasets/pull/74 | 616,511,101 | MDExOlB1bGxSZXF1ZXN0NDE2NjA3MDcy | 74 | fix overflow check | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,276,281,000 | 1,589,277,879,000 | 1,589,277,878,000 | MEMBER | null | I did some tests and unfortunately the test
```
pa_array.nbytes > MAX_BATCH_BYTES
```
doesn't work. Indeed for a StructArray, `nbytes` can be less 2GB even if there is an overflow (it loops...).
I don't think we can do a proper overflow test for the limit of 2GB...
For now I replaced it with a sanity check on the first element. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/74/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/74/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/74",
"html_url": "https://github.com/huggingface/datasets/pull/74",
"diff_url": "https://github.com/huggingface/datasets/pull/74.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/74.patch",
"merged_at": "2020-05-12T10:04:37"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/73 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/73/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/73/comments | https://api.github.com/repos/huggingface/datasets/issues/73/events | https://github.com/huggingface/datasets/pull/73 | 616,417,845 | MDExOlB1bGxSZXF1ZXN0NDE2NTMyMTg1 | 73 | JSON script | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The tests for the Wikipedia dataset do not pass anymore with the error:\r\n```\r\nTo be able to use this dataset, you need to install the following dependencies ['mwparserfromhell'] using 'pip install mwparserfromhell' for instance'\r\n```",
"This was an issue on master. You can just rebase from master.",
"Perfect! Indeed, it worked^^ Thanks @lhoestq ",
"Currently the dummy_data tests are always green because in a PR the dataset is not yet synchronized with aws. This PR fixes this: https://github.com/huggingface/nlp/pull/140 . \r\n\r\nCould you test `json` locally or wait until the PR: https://github.com/huggingface/nlp/pull/140 is merged ? :-) ",
"Ok, I will wait #140 to be merged and then rebase :) "
] | 1,589,267,482,000 | 1,589,784,637,000 | 1,589,784,636,000 | CONTRIBUTOR | null | Add a JSONS script to read JSON datasets from files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/73/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/73/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/73",
"html_url": "https://github.com/huggingface/datasets/pull/73",
"diff_url": "https://github.com/huggingface/datasets/pull/73.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/73.patch",
"merged_at": "2020-05-18T06:50:36"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/72 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/72/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/72/comments | https://api.github.com/repos/huggingface/datasets/issues/72/events | https://github.com/huggingface/datasets/pull/72 | 616,225,010 | MDExOlB1bGxSZXF1ZXN0NDE2Mzc4Mjg4 | 72 | [README dummy data tests] README to better understand how the dummy data structure works | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,589,235,543,000 | 1,589,235,963,000 | 1,589,235,961,000 | MEMBER | null | In this PR a README.md is added to tests to shine more light on how the dummy data structure works. I try to explain the different possible cases. IMO the best way to understand the logic is to checkout the dummy data structure of the different datasets I mention in the README.md since those are the "edge cases".
@mariamabarham @thomwolf @lhoestq @jplu - I'd be happy to checkout the dummy data structure and get some feedback on possible improvements. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/72/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/72/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/72",
"html_url": "https://github.com/huggingface/datasets/pull/72",
"diff_url": "https://github.com/huggingface/datasets/pull/72.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/72.patch",
"merged_at": "2020-05-11T22:26:01"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/71 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/71/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/71/comments | https://api.github.com/repos/huggingface/datasets/issues/71/events | https://github.com/huggingface/datasets/pull/71 | 615,942,180 | MDExOlB1bGxSZXF1ZXN0NDE2MTUxODM4 | 71 | Fix arrow writer for big datasets using writer_batch_size | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"After a quick chat with Yacine : the 2Go test may not be sufficient actually, as I'm looking at the size of the array and not the size of the current_rows. If the test doesn't do the job I think I'll remove it and lower the batch size a bit to be sure that it never exceeds 2Go. I'll do more tests later"
] | 1,589,208,336,000 | 1,589,227,787,000 | 1,589,227,238,000 | MEMBER | null | This PR fixes Yacine's bug.
According to [this](https://github.com/apache/arrow/blob/master/docs/source/cpp/arrays.rst#size-limitations-and-recommendations), it is not recommended to have pyarrow arrays bigger than 2Go.
Therefore I set a default batch size of 100 000 examples per batch. In general it shouldn't exceed 2Go. If it does, I reduce the batch_size on the fly, and I notify the user with a warning. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/71/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/71/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/71",
"html_url": "https://github.com/huggingface/datasets/pull/71",
"diff_url": "https://github.com/huggingface/datasets/pull/71.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/71.patch",
"merged_at": "2020-05-11T20:00:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/70 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/70/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/70/comments | https://api.github.com/repos/huggingface/datasets/issues/70/events | https://github.com/huggingface/datasets/pull/70 | 615,679,102 | MDExOlB1bGxSZXF1ZXN0NDE1OTM3NDgw | 70 | adding RACE, QASC, Super_glue and Tiny_shakespear datasets | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think rebasing to master will solve the quality test and the datasets that don't have a testing structure yet because of the manual download - maybe you can put them in `datasets under construction`? Then would also make it easier for me to see how to add tests for them :-) "
] | 1,589,184,469,000 | 1,589,289,712,000 | 1,589,289,711,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/70/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/70/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/70",
"html_url": "https://github.com/huggingface/datasets/pull/70",
"diff_url": "https://github.com/huggingface/datasets/pull/70.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/70.patch",
"merged_at": "2020-05-12T13:21:51"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/69 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/69/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/69/comments | https://api.github.com/repos/huggingface/datasets/issues/69/events | https://github.com/huggingface/datasets/pull/69 | 615,450,534 | MDExOlB1bGxSZXF1ZXN0NDE1NzYyNTQ4 | 69 | fix cache dir in builder tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Nice, is that the reason one cannot rerun the tests without deleting the cache? \r\n",
"Yes exactly. It was not using the temporary dir for tests."
] | 1,589,135,961,000 | 1,589,181,570,000 | 1,589,181,568,000 | MEMBER | null | minor fix | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/69/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/69/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/69",
"html_url": "https://github.com/huggingface/datasets/pull/69",
"diff_url": "https://github.com/huggingface/datasets/pull/69.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/69.patch",
"merged_at": "2020-05-11T07:19:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/68 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/68/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/68/comments | https://api.github.com/repos/huggingface/datasets/issues/68/events | https://github.com/huggingface/datasets/pull/68 | 614,882,655 | MDExOlB1bGxSZXF1ZXN0NDE1MzQ3NTgw | 68 | [CSV] re-add csv | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,959,509,000 | 1,588,959,648,000 | 1,588,959,646,000 | MEMBER | null | Re-adding csv under the datasets under construction to keep circle ci happy - will have to see how to include it in the tests.
@lhoestq noticed that I accidently deleted it in https://github.com/huggingface/nlp/pull/63#discussion_r422263729. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/68/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/68/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/68",
"html_url": "https://github.com/huggingface/datasets/pull/68",
"diff_url": "https://github.com/huggingface/datasets/pull/68.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/68.patch",
"merged_at": "2020-05-08T17:40:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/67 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/67/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/67/comments | https://api.github.com/repos/huggingface/datasets/issues/67/events | https://github.com/huggingface/datasets/pull/67 | 614,798,483 | MDExOlB1bGxSZXF1ZXN0NDE1Mjc5NjI0 | 67 | [Tests] Test files locally | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Super nice, good job @patrickvonplaten!"
] | 1,588,950,163,000 | 1,588,967,447,000 | 1,588,951,020,000 | MEMBER | null | This PR adds a `aws` and a `local` decorator to the tests so that tests now run on the local datasets.
By default, the `aws` is deactivated and `local` is activated and `slow` is deactivated, so that only 1 test per dataset runs on circle ci.
**When local is activated all folders in `./datasets` are tested.**
**Important** When adding a dataset, we should no longer upload it to AWS. The steps are:
1. Open a PR
2. Add a dataset as described in `datasets/README.md`
3. If all tests pass, push to master
Currently we have 49 functional datasets in our code base.
We have 6 datasets "under-construction" that don't pass the tests - so I put them in a folder "datasets_under_construction" - it would be nice to open a PR to fix them and put them in the `datasets` folder.
**Important** when running tests locally, the datasets are cached so to rerun them delete your local cache via:
`rm -r ~/.cache/huggingface/datasets/*`
@thomwolf @mariamabarham @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/67/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/67/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/67",
"html_url": "https://github.com/huggingface/datasets/pull/67",
"diff_url": "https://github.com/huggingface/datasets/pull/67.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/67.patch",
"merged_at": "2020-05-08T15:17:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/66 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/66/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/66/comments | https://api.github.com/repos/huggingface/datasets/issues/66/events | https://github.com/huggingface/datasets/pull/66 | 614,748,552 | MDExOlB1bGxSZXF1ZXN0NDE1MjM5Njgy | 66 | [Datasets] ReadME | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,945,063,000 | 1,588,945,163,000 | 1,588,945,162,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/66/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/66/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/66",
"html_url": "https://github.com/huggingface/datasets/pull/66",
"diff_url": "https://github.com/huggingface/datasets/pull/66.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/66.patch",
"merged_at": "2020-05-08T13:39:22"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/65 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/65/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/65/comments | https://api.github.com/repos/huggingface/datasets/issues/65/events | https://github.com/huggingface/datasets/pull/65 | 614,746,516 | MDExOlB1bGxSZXF1ZXN0NDE1MjM4MDEw | 65 | fix math dataset and xcopa | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,944,835,000 | 1,588,944,941,000 | 1,588,944,940,000 | MEMBER | null | - fixes math dataset and xcopa, uploaded both of the to S3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/65/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/65/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/65",
"html_url": "https://github.com/huggingface/datasets/pull/65",
"diff_url": "https://github.com/huggingface/datasets/pull/65.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/65.patch",
"merged_at": "2020-05-08T13:35:40"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/64 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/64/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/64/comments | https://api.github.com/repos/huggingface/datasets/issues/64/events | https://github.com/huggingface/datasets/pull/64 | 614,737,057 | MDExOlB1bGxSZXF1ZXN0NDE1MjMwMjYy | 64 | [Datasets] Make master ready for datasets adding | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,943,820,000 | 1,588,943,851,000 | 1,588,943,850,000 | MEMBER | null | Add all relevant files so that datasets can now be added on master | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/64/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/64/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/64",
"html_url": "https://github.com/huggingface/datasets/pull/64",
"diff_url": "https://github.com/huggingface/datasets/pull/64.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/64.patch",
"merged_at": "2020-05-08T13:17:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/63 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/63/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/63/comments | https://api.github.com/repos/huggingface/datasets/issues/63/events | https://github.com/huggingface/datasets/pull/63 | 614,666,365 | MDExOlB1bGxSZXF1ZXN0NDE1MTczODU5 | 63 | [Dataset scripts] add all datasets scripts | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,935,015,000 | 1,588,959,562,000 | 1,588,937,640,000 | MEMBER | null | As mentioned, we can have the canonical datasets in the master. For now I also want to include all the data as present on S3 to make the synchronization easier when uploading new datastes.
@mariamabarham @lhoestq @thomwolf - what do you think?
If this is ok for you, I can sync up the master with the `add_dataset` branch: https://github.com/huggingface/nlp/pull/37 so that master is up to date. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/63/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/63/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/63",
"html_url": "https://github.com/huggingface/datasets/pull/63",
"diff_url": "https://github.com/huggingface/datasets/pull/63.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/63.patch",
"merged_at": "2020-05-08T11:34:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/62 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/62/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/62/comments | https://api.github.com/repos/huggingface/datasets/issues/62/events | https://github.com/huggingface/datasets/pull/62 | 614,630,830 | MDExOlB1bGxSZXF1ZXN0NDE1MTQ1NDAx | 62 | [Cached Path] Better error message | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,930,787,000 | 1,588,931,147,000 | 1,588,931,147,000 | MEMBER | null | IMO returning `None` in this function only leads to confusion and is never helpful. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/62/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/62/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/62",
"html_url": "https://github.com/huggingface/datasets/pull/62",
"diff_url": "https://github.com/huggingface/datasets/pull/62.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/62.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/61 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/61/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/61/comments | https://api.github.com/repos/huggingface/datasets/issues/61/events | https://github.com/huggingface/datasets/pull/61 | 614,607,474 | MDExOlB1bGxSZXF1ZXN0NDE1MTI3MTU4 | 61 | [Load] rename setup_module to prepare_module | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,928,062,000 | 1,588,928,192,000 | 1,588,928,176,000 | MEMBER | null | rename setup_module to prepare_module due to issues with pytests `setup_module` function.
See: PR #59. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/61/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/61/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/61",
"html_url": "https://github.com/huggingface/datasets/pull/61",
"diff_url": "https://github.com/huggingface/datasets/pull/61.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/61.patch",
"merged_at": "2020-05-08T08:56:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/60 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/60/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/60/comments | https://api.github.com/repos/huggingface/datasets/issues/60/events | https://github.com/huggingface/datasets/pull/60 | 614,372,553 | MDExOlB1bGxSZXF1ZXN0NDE0OTQyNjEy | 60 | Update to simplify some datasets conversion | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Awesome! ",
"Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)",
"> Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)\r\n\r\nWe should probably open a new PR about this",
"I think it might be a good idea to both change the supervised keys to a named tuple and also handle the translation features specifically.",
"Just noticed that `pyarrow` apparently does not have a `is_boolean` function. Or do I have the wrong `pyarrow` version? ",
"Ah, it was a typo `pa.types.is_boolean` is the correct name. Will fix in: https://github.com/huggingface/nlp/pull/59"
] | 1,588,888,944,000 | 1,588,934,312,000 | 1,588,933,104,000 | MEMBER | null | This PR updates the encoding of `Values` like `integers`, `boolean` and `float` to use python casting and avoid having to cast in the dataset scripts, as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r420176626
We could also change (not included in this PR yet):
- `supervized_keys` to make them a NamedTuple instead of a dataclass, and
- handle specifically the `Translation` features.
as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r421740236
@patrickvonplaten @mariamabarham tell me if you want these two last changes as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/60/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/60/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/60",
"html_url": "https://github.com/huggingface/datasets/pull/60",
"diff_url": "https://github.com/huggingface/datasets/pull/60.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/60.patch",
"merged_at": "2020-05-08T10:18:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/59 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/59/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/59/comments | https://api.github.com/repos/huggingface/datasets/issues/59/events | https://github.com/huggingface/datasets/pull/59 | 614,366,045 | MDExOlB1bGxSZXF1ZXN0NDE0OTM3NTgx | 59 | Fix tests | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I can fix the tests tomorrow :-) ",
"Very weird bug indeed! I think the problem was that when importing `setup_module` we overwrote `pytest's` setup_module function. I think this is the relevant code in pytest: https://github.com/pytest-dev/pytest/blob/9d2eabb397b059b75b746259daeb20ee5588f559/src/_pytest/python.py#L460.",
"Also PR: #25 introduced some renaming: `DatasetBuilder.builder_config` -> `DatasetBuilder.config` so that we will have to change most of the dataset scripts (Just replace the \"builder_config\" with \"config\").\r\n\r\nI think the renaming is a good idea and I can do the fix with a bash regex, but will have to re-upload most of the datasets. @thomwolf @mariamabarham \r\n\r\n",
"> Also PR: #25 introduced some renaming: `DatasetBuilder.builder_config` -> `DatasetBuilder.config` so that we will have to change most of the dataset scripts (Just replace the \"builder_config\" with \"config\").\r\n> \r\n> I think the renaming is a good idea and I can do the fix with a bash regex, but will have to re-upload most of the datasets. @thomwolf @mariamabarham\r\n\r\nI think if it only needs a re-uploading, we can rename it, `DatasetBuilder.config` is easier and sounds better",
"Ok seems to be fine. Most tests work - merging."
] | 1,588,888,089,000 | 1,588,935,477,000 | 1,588,934,811,000 | MEMBER | null | @patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts.
I'm trying to fix them here but I have a weird error, do you think you can have a look?
```bash
(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python
cachedir: .pytest_cache
rootdir: /Users/thomwolf/Documents/GitHub/datasets
plugins: xdist-1.31.0, forked-1.1.3
collected 1 item
tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR
=================================================================================== ERRORS ====================================================================================
____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________
file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'>
download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True)
download_kwargs = {}
def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder:
r"""
Download/extract/cache a dataset to add to the lib from a path or url which can be:
- a path to a local directory containing the dataset processing python script
- an url to a S3 directory with a dataset processing python script
Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks)
and using cloudpickle (among other things).
Return: tuple of
the unique id associated to the dataset
the local path to the dataset
"""
if download_config is None:
download_config = DownloadConfig(**download_kwargs)
download_config.extract_compressed_file = True
download_config.force_extract = True
> name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py"
E AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
src/nlp/load.py:169: AttributeError
============================================================================== warnings summary ===============================================================================
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================================================================== short test summary info ===========================================================================
ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
========================================================================= 1 warning, 1 error in 3.63s =========================================================================
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/59/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/59/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/59",
"html_url": "https://github.com/huggingface/datasets/pull/59",
"diff_url": "https://github.com/huggingface/datasets/pull/59.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/59.patch",
"merged_at": "2020-05-08T10:46:51"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/58 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/58/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/58/comments | https://api.github.com/repos/huggingface/datasets/issues/58/events | https://github.com/huggingface/datasets/pull/58 | 614,362,308 | MDExOlB1bGxSZXF1ZXN0NDE0OTM0NTY4 | 58 | Aborted PR - Fix tests | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Wait I messed up my branch, let me clean this."
] | 1,588,887,619,000 | 1,588,888,081,000 | 1,588,887,687,000 | MEMBER | null | @patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts.
I'm trying to fix them here but I have a weird error, do you think you can have a look?
```bash
(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python
cachedir: .pytest_cache
rootdir: /Users/thomwolf/Documents/GitHub/datasets
plugins: xdist-1.31.0, forked-1.1.3
collected 1 item
tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR
=================================================================================== ERRORS ====================================================================================
____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________
file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'>
download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True)
download_kwargs = {}
def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder:
r"""
Download/extract/cache a dataset to add to the lib from a path or url which can be:
- a path to a local directory containing the dataset processing python script
- an url to a S3 directory with a dataset processing python script
Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks)
and using cloudpickle (among other things).
Return: tuple of
the unique id associated to the dataset
the local path to the dataset
"""
if download_config is None:
download_config = DownloadConfig(**download_kwargs)
download_config.extract_compressed_file = True
download_config.force_extract = True
> name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py"
E AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
src/nlp/load.py:169: AttributeError
============================================================================== warnings summary ===============================================================================
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================================================================== short test summary info ===========================================================================
ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
========================================================================= 1 warning, 1 error in 3.63s =========================================================================
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/58/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/58/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/58",
"html_url": "https://github.com/huggingface/datasets/pull/58",
"diff_url": "https://github.com/huggingface/datasets/pull/58.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/58.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/57 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/57/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/57/comments | https://api.github.com/repos/huggingface/datasets/issues/57/events | https://github.com/huggingface/datasets/pull/57 | 614,261,638 | MDExOlB1bGxSZXF1ZXN0NDE0ODUzMDM5 | 57 | Better cached path | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I should have read this PR before doing my own: https://github.com/huggingface/nlp/pull/62 :D \r\nwill close mine. Looks great :-) ",
"> Awesome, this is really nice!\r\n> \r\n> By the way, we should improve the `cached_path` method of the `transformers` repo similarly, don't you think (@patrickvonplaten in particular).\r\n\r\nYeah, we should do the same in `transformers` I think - will note it down."
] | 1,588,876,560,000 | 1,588,944,030,000 | 1,588,944,028,000 | MEMBER | null | ### Changes:
- The `cached_path` no longer returns None if the file is missing/the url doesn't work. Instead, it can raise `FileNotFoundError` (missing file), `ConnectionError` (no cache and unreachable url) or `ValueError` (parsing error)
- Fix requests to firebase API that doesn't handle HEAD requests...
- Allow custom download in datasets script: it allows to use `tf.io.gfile.copy` for example, to download from google storage. I added an example: the `boolq` script | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/57/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/57/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/57",
"html_url": "https://github.com/huggingface/datasets/pull/57",
"diff_url": "https://github.com/huggingface/datasets/pull/57.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/57.patch",
"merged_at": "2020-05-08T13:20:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/56 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/56/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/56/comments | https://api.github.com/repos/huggingface/datasets/issues/56/events | https://github.com/huggingface/datasets/pull/56 | 614,236,869 | MDExOlB1bGxSZXF1ZXN0NDE0ODMyODY4 | 56 | [Dataset] Tester add mock function | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,873,897,000 | 1,588,873,971,000 | 1,588,873,970,000 | MEMBER | null | need to add an empty `extract()` function to make `hansard` dataset test work. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/56/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/56/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/56",
"html_url": "https://github.com/huggingface/datasets/pull/56",
"diff_url": "https://github.com/huggingface/datasets/pull/56.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/56.patch",
"merged_at": "2020-05-07T17:52:50"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/55 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/55/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/55/comments | https://api.github.com/repos/huggingface/datasets/issues/55/events | https://github.com/huggingface/datasets/pull/55 | 613,968,072 | MDExOlB1bGxSZXF1ZXN0NDE0NjE0MjE1 | 55 | Beam datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Right now the changes are a bit hard to read as the one from #25 are also included. You can wait until #25 is merged before looking at the implementation details",
"Nice!! I tested it a bit and works quite well. I will do a my review once the #25 will be merged because there are several overlaps.\r\n\r\nAt least I can share my thoughts on your **Next** section:\r\n1) I don't think it is a good thing to rely on tfds preprocessed datasets uploaded in their online storage, because they might be updated or deleted at any moment by Google and then possibly break our own processing.\r\n2) Improves the pipeline is always a good direction, but in the meantime we might also share the preprocessed dataset in S3 storage. Which might be another way to see 1), instead of downloading Google preprocessed datasets, using our own ones.\r\n3) Apache Beam can be easily integrated in Spark, so I don't see the need to replace Beam by Spark.",
"Ok I've merged #25 so you can rebase or merge if you want.\r\n\r\nI fully agree with @jplu notes for the \"next section\".\r\n\r\nDon't hesitate to use some credit on Google Dataflow if you think it would be useful to give it a try.",
"Pr is ready for review !\r\n\r\nNew minor changes:\r\n- re-added the csv dataset builder (it was on my branch from #25 but disappeared from master)\r\n- move the csv script and the wikipedia script to \"under construction\" for now\r\n- some renaming in the `nlp-cli test` command"
] | 1,588,849,472,000 | 1,589,181,602,000 | 1,589,181,600,000 | MEMBER | null | # Beam datasets
## Intro
Beam Datasets are using beam pipelines for preprocessing (basically lots of `.map` over objects called PCollections).
The advantage of apache beam is that you can choose which type of runner you want to use to preprocess your data. The main runners are:
- the `DirectRunner` to run the pipeline locally (default). However I encountered memory issues for big datasets (like the french or english wikipedia). Small dataset work fine
- Google Dataflow. I didn't play with it.
- Spark or Flink, two well known data processing frameworks. I tried to use the Spark/Flink local runners provided by apache beam for python and wasn't able to make them work properly though...
## From tfds beam datasets to our own beam datasets
Tensorflow datasets used beam and a complicated pipeline to shard the TFRecords files.
To allow users to download beam datasets and not having to preprocess them, they also allow to download the already preprocessed datasets from their google storage (the beam pipeline doesn't run in that case).
On our side, we replace TFRecords by something else. Arrow or Parquet do the job but I chose Parquet as: 1) there is a builtin apache beam parquet writer that is quite convenient, and 2) reading parquet from the pyarrow library is also simple and effective (there is a mmap option !)
Moreover we don't shard datasets in many many files like tfds (they were doing probably doing that mainly because of the limit of 2Gb per TFRecord file). Therefore we have a simpler pipeline that saves each split into one parquet file. We also removed the utilities to use their google storage (for now maybe ? we'll have to discuss it).
## Main changes
- Added a BeamWriter to save the output of beam pipelines into parquet files and fill dataset infos
- Create a ParquetReader and refactor a bit the arrow_reader.py
\> **With this, we can now try to add beam datasets from tfds**
I already added the wikipedia one, and I will also try to add the Wiki40b dataset
## Test the wikipedia script
You can download and run the beam pipeline for wikipedia (using the `DirectRunner` by default) like this:
```
>>> import nlp
>>> nlp.load("datasets/nlp/wikipedia", dataset_config="20200501.frr")
```
This wikipedia dataset (lang: frr, North Frisian) is a small one (~10Mb), but feel free to try bigger ones (and fill 20Gb of swap memory if you try the english one lol)
## Next
Should we allow to download preprocessed datasets from the tfds google storage ?
Should we try to optimize the beam pipelines to run locally without memory issues ?
Should we try other data processing frameworks for big datasets, like spark ?
## About this PR
It should be merged after #25
-----------------
I'd be happy to have your feedback and your ideas to improve the processing of big datasets like wikipedia :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/55/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/55/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/55",
"html_url": "https://github.com/huggingface/datasets/pull/55",
"diff_url": "https://github.com/huggingface/datasets/pull/55.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/55.patch",
"merged_at": "2020-05-11T07:20:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/54 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/54/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/54/comments | https://api.github.com/repos/huggingface/datasets/issues/54/events | https://github.com/huggingface/datasets/pull/54 | 613,513,348 | MDExOlB1bGxSZXF1ZXN0NDE0MjUyODkw | 54 | [Tests] Improved Error message for dummy folder structure | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,788,708,000 | 1,588,788,780,000 | 1,588,788,779,000 | MEMBER | null | Improved Error message | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/54/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/54/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/54",
"html_url": "https://github.com/huggingface/datasets/pull/54",
"diff_url": "https://github.com/huggingface/datasets/pull/54.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/54.patch",
"merged_at": "2020-05-06T18:12:59"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/53 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/53/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/53/comments | https://api.github.com/repos/huggingface/datasets/issues/53/events | https://github.com/huggingface/datasets/pull/53 | 613,436,158 | MDExOlB1bGxSZXF1ZXN0NDE0MTkwMzkz | 53 | [Features] Typo in generate_from_dict | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,781,123,000 | 1,588,865,326,000 | 1,588,865,325,000 | MEMBER | null | Change `isinstance` test in features when generating features from dict. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/53/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/53/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/53",
"html_url": "https://github.com/huggingface/datasets/pull/53",
"diff_url": "https://github.com/huggingface/datasets/pull/53.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/53.patch",
"merged_at": "2020-05-07T15:28:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/52 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/52/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/52/comments | https://api.github.com/repos/huggingface/datasets/issues/52/events | https://github.com/huggingface/datasets/pull/52 | 613,339,071 | MDExOlB1bGxSZXF1ZXN0NDE0MTEyMDAy | 52 | allow dummy folder structure to handle dict of lists | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,773,275,000 | 1,588,773,319,000 | 1,588,773,318,000 | MEMBER | null | `esnli.py` needs that extension of the dummy data testing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/52/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/52/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/52",
"html_url": "https://github.com/huggingface/datasets/pull/52",
"diff_url": "https://github.com/huggingface/datasets/pull/52.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/52.patch",
"merged_at": "2020-05-06T13:55:18"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/51 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/51/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/51/comments | https://api.github.com/repos/huggingface/datasets/issues/51/events | https://github.com/huggingface/datasets/pull/51 | 613,266,668 | MDExOlB1bGxSZXF1ZXN0NDE0MDUyOTYw | 51 | [Testing] Improved testing structure | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Awesome!\r\nLet's have this in the doc at the end :-)"
] | 1,588,766,587,000 | 1,588,889,239,000 | 1,588,771,218,000 | MEMBER | null | This PR refactors the test design a bit and puts the mock download manager in the `utils` files as it is just a test helper class.
as @mariamabarham pointed out, creating a dummy folder structure can be quite hard to grasp.
This PR tries to change that to some extent.
It follows the following logic for the `dummy` folder structure now:
1.) The data bulider has no config -> the `dummy` folder structure is:
`dummy/<version>/dummy_data.zip`
2) The data builder has >= 1 configs -> the `dummy` folder structure is:
`dummy/<config_name_1>/<version>/dummy_data.zip`
`dummy/<config_name_2>/<version>/dummy_data.zip`
Now, the difficult part is how to create the `dummy_data.zip` file. There are two cases:
A) The `data_urs` parameter inserted into the `download_and_extract` fn is a **string**:
-> the `dummy_data.zip` file zips the folder:
`dummy_data/<relative_path_of_folder_structure_of_url>`
B) The `data_urs` parameter inserted into the `download_and_extract` fn is a **dict**:
-> the `dummy_data.zip` file zips the folder:
`dummy_data/<relative_path_of_folder_structure_of_url_behind _key_1>`
`dummy_data/<relative_path_of_folder_structure_of_url_behind _key_2>`
By relative folder structure I mean `url_path.split('./')[-1]`. As an example the dataset **xquad** by deepmind has the following url path behind the key `de`: `https://github.com/deepmind/xquad/blob/master/xquad.de.json`
-> This means that the relative url path should be `xquad.de.json`.
@mariamabarham B) is a change from how is was before and I think is makes more sense.
While before the `dummy_data.zip` file for xquad with config `de` looked like:
`dummy_data/de` it would now look like `dummy_data/xquad.de.json`. I think this is better and easier to understand.
Therefore there are currently 6 tests that would have to have changed their dummy folder structure, but which can easily be done (30min).
I also added a function: `print_dummy_data_folder_structure` that prints out the expected structures when testing which should be quite helpful. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/51/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/51/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/51",
"html_url": "https://github.com/huggingface/datasets/pull/51",
"diff_url": "https://github.com/huggingface/datasets/pull/51.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/51.patch",
"merged_at": "2020-05-06T13:20:17"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/50 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/50/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/50/comments | https://api.github.com/repos/huggingface/datasets/issues/50/events | https://github.com/huggingface/datasets/pull/50 | 612,583,126 | MDExOlB1bGxSZXF1ZXN0NDEzNTAwMjE0 | 50 | [Tests] test only for fast test as a default | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Test failure is not related to change in test file.\r\n"
] | 1,588,683,562,000 | 1,588,683,738,000 | 1,588,683,736,000 | MEMBER | null | Test only for one config on circle ci to speed up testing. Add all config test as a slow test.
@mariamabarham @thomwolf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/50/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/50/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/50",
"html_url": "https://github.com/huggingface/datasets/pull/50",
"diff_url": "https://github.com/huggingface/datasets/pull/50.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/50.patch",
"merged_at": "2020-05-05T13:02:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/49 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/49/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/49/comments | https://api.github.com/repos/huggingface/datasets/issues/49/events | https://github.com/huggingface/datasets/pull/49 | 612,545,483 | MDExOlB1bGxSZXF1ZXN0NDEzNDY5ODg0 | 49 | fix flatten nested | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,679,713,000 | 1,588,687,166,000 | 1,588,687,165,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/49/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/49/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/49",
"html_url": "https://github.com/huggingface/datasets/pull/49",
"diff_url": "https://github.com/huggingface/datasets/pull/49.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/49.patch",
"merged_at": "2020-05-05T13:59:25"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/48 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/48/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/48/comments | https://api.github.com/repos/huggingface/datasets/issues/48/events | https://github.com/huggingface/datasets/pull/48 | 612,504,687 | MDExOlB1bGxSZXF1ZXN0NDEzNDM2MTgz | 48 | [Command Convert] remove tensorflow import | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,675,260,000 | 1,588,677,238,000 | 1,588,677,236,000 | MEMBER | null | Remove all tensorflow import statements. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/48/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/48/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/48",
"html_url": "https://github.com/huggingface/datasets/pull/48",
"diff_url": "https://github.com/huggingface/datasets/pull/48.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/48.patch",
"merged_at": "2020-05-05T11:13:56"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/47 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/47/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/47/comments | https://api.github.com/repos/huggingface/datasets/issues/47/events | https://github.com/huggingface/datasets/pull/47 | 612,446,493 | MDExOlB1bGxSZXF1ZXN0NDEzMzg5MDc1 | 47 | [PyArrow Feature] fix py arrow bool | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,668,988,000 | 1,588,675,228,000 | 1,588,675,227,000 | MEMBER | null | To me it seems that `bool` can only be accessed with `bool_` when looking at the pyarrow types: https://arrow.apache.org/docs/python/api/datatypes.html. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/47/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/47/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/47",
"html_url": "https://github.com/huggingface/datasets/pull/47",
"diff_url": "https://github.com/huggingface/datasets/pull/47.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/47.patch",
"merged_at": "2020-05-05T10:40:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/46 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/46/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/46/comments | https://api.github.com/repos/huggingface/datasets/issues/46/events | https://github.com/huggingface/datasets/pull/46 | 612,398,190 | MDExOlB1bGxSZXF1ZXN0NDEzMzUxNTY0 | 46 | [Features] Strip str key before dict look-up | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,663,905,000 | 1,588,667,865,000 | 1,588,667,864,000 | MEMBER | null | The dataset `anli.py` currently fails because it tries to look up a key `1\n` in a dict that only has the key `1`. Added an if statement to strip key if it cannot be found in dict. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/46/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/46/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/46",
"html_url": "https://github.com/huggingface/datasets/pull/46",
"diff_url": "https://github.com/huggingface/datasets/pull/46.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/46.patch",
"merged_at": "2020-05-05T08:37:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/45 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/45/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/45/comments | https://api.github.com/repos/huggingface/datasets/issues/45/events | https://github.com/huggingface/datasets/pull/45 | 612,386,583 | MDExOlB1bGxSZXF1ZXN0NDEzMzQzMjAy | 45 | [Load] Separate Module kwargs and builder kwargs. | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,662,594,000 | 1,664,875,931,000 | 1,588,931,482,000 | MEMBER | null | Kwargs for the `load_module` fn should be passed with `module_xxxx` to `builder_kwargs` of `load` fn.
This is a follow-up PR of: https://github.com/huggingface/nlp/pull/41 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/45/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/45/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/45",
"html_url": "https://github.com/huggingface/datasets/pull/45",
"diff_url": "https://github.com/huggingface/datasets/pull/45.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/45.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/44 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/44/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/44/comments | https://api.github.com/repos/huggingface/datasets/issues/44/events | https://github.com/huggingface/datasets/pull/44 | 611,873,486 | MDExOlB1bGxSZXF1ZXN0NDEyOTUwMzU1 | 44 | [Tests] Fix tests for datasets with no config | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,598,738,000 | 1,588,598,884,000 | 1,588,598,883,000 | MEMBER | null | Forgot to fix `None` problem for datasets that have no config this in PR: https://github.com/huggingface/nlp/pull/42 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/44/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/44/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/44",
"html_url": "https://github.com/huggingface/datasets/pull/44",
"diff_url": "https://github.com/huggingface/datasets/pull/44.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/44.patch",
"merged_at": "2020-05-04T13:28:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/43 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/43/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/43/comments | https://api.github.com/repos/huggingface/datasets/issues/43/events | https://github.com/huggingface/datasets/pull/43 | 611,773,279 | MDExOlB1bGxSZXF1ZXN0NDEyODcxNTE5 | 43 | [Checksums] If no configs exist prevent to run over empty list | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Whoops I fixed it directly on master before checking that you have done it in this PR. We may close it",
"Yeah, I saw :-) But I think we should add this as well since some datasets have an empty list of configs and then as the code is now it would fail. \r\n\r\nIn this PR, I just make sure that the code jumps in the correct else if \"there are no configs\" as is the case for some datasets @mariamabarham ",
"Sorry, I thought you meant a different commit . Just saw this one: https://github.com/huggingface/nlp/commit/7c644f284e2303b57612a6e7c904fe13906d893f\r\n.\r\n\r\nAll good then :-) "
] | 1,588,588,782,000 | 1,664,875,922,000 | 1,588,598,283,000 | MEMBER | null | `movie_rationales` e.g. has no configs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/43/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/43/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/43",
"html_url": "https://github.com/huggingface/datasets/pull/43",
"diff_url": "https://github.com/huggingface/datasets/pull/43.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/43.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/42 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/42/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/42/comments | https://api.github.com/repos/huggingface/datasets/issues/42/events | https://github.com/huggingface/datasets/pull/42 | 611,754,343 | MDExOlB1bGxSZXF1ZXN0NDEyODU1OTE2 | 42 | [Tests] allow tests for builders without config | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,586,782,000 | 1,588,597,850,000 | 1,588,597,848,000 | MEMBER | null | Some dataset scripts have no configs - the tests have to be adapted for this case.
In this case the dummy data will be saved as:
- natural_questions
-> dummy
-> -> 1.0.0 (version num)
-> -> -> dummy_data.zip
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/42/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/42/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/42",
"html_url": "https://github.com/huggingface/datasets/pull/42",
"diff_url": "https://github.com/huggingface/datasets/pull/42.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/42.patch",
"merged_at": "2020-05-04T13:10:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/41 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/41/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/41/comments | https://api.github.com/repos/huggingface/datasets/issues/41/events | https://github.com/huggingface/datasets/pull/41 | 611,739,219 | MDExOlB1bGxSZXF1ZXN0NDEyODQzNDQy | 41 | [Load module] allow kwargs into load module | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,585,331,000 | 1,588,621,147,000 | 1,588,621,146,000 | MEMBER | null | Currenly it is not possible to force a re-download of the dataset script.
This simple change allows to pass ``force_reload=True`` as ``builder_kwargs`` in the ``load.py`` function. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/41/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/41/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/41",
"html_url": "https://github.com/huggingface/datasets/pull/41",
"diff_url": "https://github.com/huggingface/datasets/pull/41.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/41.patch",
"merged_at": "2020-05-04T19:39:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/40 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/40/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/40/comments | https://api.github.com/repos/huggingface/datasets/issues/40/events | https://github.com/huggingface/datasets/pull/40 | 611,721,308 | MDExOlB1bGxSZXF1ZXN0NDEyODI4NzU2 | 40 | Update remote checksums instead of overwrite | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,583,594,000 | 1,588,593,111,000 | 1,588,593,109,000 | MEMBER | null | When the user uploads a dataset on S3, checksums are also uploaded with the `--upload_checksums` parameter.
If the user uploads the dataset in several steps, then the remote checksums file was previously overwritten. Now it's going to be updated with the new checksums. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/40/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/40/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/40",
"html_url": "https://github.com/huggingface/datasets/pull/40",
"diff_url": "https://github.com/huggingface/datasets/pull/40.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/40.patch",
"merged_at": "2020-05-04T11:51:49"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/39 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/39/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/39/comments | https://api.github.com/repos/huggingface/datasets/issues/39/events | https://github.com/huggingface/datasets/pull/39 | 611,712,135 | MDExOlB1bGxSZXF1ZXN0NDEyODIxNTA4 | 39 | [Test] improve slow testing | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,582,713,000 | 1,588,582,790,000 | 1,588,582,789,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/39/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/39/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/39",
"html_url": "https://github.com/huggingface/datasets/pull/39",
"diff_url": "https://github.com/huggingface/datasets/pull/39.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/39.patch",
"merged_at": "2020-05-04T08:59:49"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/38 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/38/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/38/comments | https://api.github.com/repos/huggingface/datasets/issues/38/events | https://github.com/huggingface/datasets/issues/38 | 611,677,656 | MDU6SXNzdWU2MTE2Nzc2NTY= | 38 | [Checksums] Error for some datasets | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | [
"@lhoestq - could you take a look? It's not very urgent though!",
"Fixed with 06882b4\r\n\r\nNow your command works :)\r\nNote that you can also do\r\n```\r\nnlp-cli test datasets/nlp/xnli --save_checksums\r\n```\r\nSo that it will save the checksums directly in the right directory.",
"Awesome!"
] | 1,588,579,216,000 | 1,588,585,700,000 | 1,588,585,700,000 | MEMBER | null | The checksums command works very nicely for `squad`. But for `crime_and_punish` and `xnli`,
the same bug happens:
When running:
```
python nlp-cli nlp-cli test xnli --save_checksums
```
leads to:
```
File "nlp-cli", line 33, in <module>
service.run()
File "/home/patrick/python_bin/nlp/commands/test.py", line 61, in run
ignore_checksums=self._ignore_checksums,
File "/home/patrick/python_bin/nlp/builder.py", line 383, in download_and_prepare
self._download_and_prepare(dl_manager=dl_manager, download_config=download_config)
File "/home/patrick/python_bin/nlp/builder.py", line 627, in _download_and_prepare
dl_manager=dl_manager, max_examples_per_split=download_config.max_examples_per_split,
File "/home/patrick/python_bin/nlp/builder.py", line 431, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/patrick/python_bin/nlp/datasets/xnli/8bf4185a2da1ef2a523186dd660d9adcf0946189e7fa5942ea31c63c07b68a7f/xnli.py", line 95, in _split_generators
dl_dir = dl_manager.download_and_extract(_DATA_URL)
File "/home/patrick/python_bin/nlp/utils/download_manager.py", line 246, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/patrick/python_bin/nlp/utils/download_manager.py", line 186, in download
self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
File "/home/patrick/python_bin/nlp/utils/download_manager.py", line 166, in _record_sizes_checksums
self._recorded_sizes_checksums[url] = get_size_checksum(path)
File "/home/patrick/python_bin/nlp/utils/checksums_utils.py", line 81, in get_size_checksum
with open(path, "rb") as f:
TypeError: expected str, bytes or os.PathLike object, not tuple
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/38/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/38/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/37 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/37/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/37/comments | https://api.github.com/repos/huggingface/datasets/issues/37/events | https://github.com/huggingface/datasets/pull/37 | 611,670,295 | MDExOlB1bGxSZXF1ZXN0NDEyNzg5MjQ4 | 37 | [Datasets ToDo-List] add datasets | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
}
] | [
"Note:\r\n```\r\nnlp-cli test datasets/nlp/<your-dataset-folder> --save_checksums --all_configs\r\n```\r\ndirectly saves the checksums in the right place, and runs for all the dataset configurations.",
"@patrickvonplaten can you provide the add the link to the PR for the dummy data? ",
"https://github.com/huggingface/nlp/pull/15 - But it's probably best to checkout into this branch and look how the dummy data strtucture is for `squad` for example.",
"are lock files supposed to stay ?",
"> are lock files supposed to stay ?\r\n\r\nNot sure! I think the checksum command creates them, so I just uploaded them as well.",
"We can trash the `lock` file, they are dummy file that are only used to avoid concurrent access when the library is run.\r\nYou can read the filelock readme and code, it's a very simple single-file library: https://github.com/benediktschmitt/py-filelock",
"The testing design was slightly changed as explained in https://github.com/huggingface/nlp/pull/51 . \r\nIf creating the dummy folder is too confusing it helps to upload everything else to AWS, then run the test and check the INFO when testing on how to create the dummy folder structure.",
"Closing because we can now work on master"
] | 1,588,578,459,000 | 1,664,875,937,000 | 1,588,945,703,000 | MEMBER | null | ## Description
This PR acts as a dashboard to see which datasets are added to the library and work.
Cicle-ci should always be green so that we can be sure that newly added datasets are functional.
This PR should not be merged.
## Progress
**For the following datasets the test commands**:
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-name>
```
and
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_<your-dataset-name>
```
**passes**.
- [x] Squad
- [x] Sentiment140
- [x] XNLI
- [x] Crime_and_Punish
- [x] movie_rationales
- [x] ai2_arc
- [x] anli
- [x] event2Mind
- [x] Fquad
- [x] blimp
- [x] empathetic_dialogues
- [x] cosmos_qa
- [x] xquad
- [x] blog_authorship_corpus
- [x] SNLI
- [x] break_data
- [x] SQuAD v2
- [x] cfq
- [x] eraser_multi_rc
- [x] Glue
- [x] Tydiqa
- [x] wiki_qa
- [x] wikitext
- [x] winogrande
- [x] wiqa
- [x] esnli
- [x] civil_comments
- [x] commonsense_qa
- [x] com_qa
- [x] coqa
- [x] wiki_split
- [x] cos_e
- [x] xcopa
- [x] quarel
- [x] quartz
- [x] squad_it
- [x] quoref
- [x] squad_pt
- [x] cornell_movie_dialog
- [x] SciQ
- [x] Scifact
- [x] hellaswag
- [x] ted_multi (in translate)
- [x] Aeslc (summarization)
- [x] drop
- [x] gap
- [x] hansard
- [x] opinosis
- [x] MLQA
- [x] math_dataset
## How-To-Add a dataset
**Before adding a dataset make sure that your branch is up to date**:
1. `git checkout add_datasets`
2. `git pull`
**Add a dataset via the `convert_dataset.sh` bash script:**
Running `bash convert_dataset.sh <file/to/tfds/datascript.py>` (*e.g.* `bash convert_dataset.sh ../tensorflow-datasets/tensorflow_datasets/text/movie_rationales.py`) will automatically run all the steps mentioned in **Add a dataset manually** below.
Make sure that you run `convert_dataset.sh` from the root folder of `nlp`.
The conversion script should work almost always for step 1): "convert dataset script from tfds to nlp format" and 2) "create checksum file" and step 3) "make style".
It can also sometimes automatically run step 4) "create the correct dummy data from tfds", but this will only work if a) there is either no config name or only one config name and b) the `tfds testing/test_data/fake_example` is in the correct form.
Nevertheless, the script should always be run in the beginning until an error occurs to be more efficient.
If the conversion script does not work or fails at some step, then you can run the steps manually as follows:
**Add a dataset manually**
Make sure you run all of the following commands from the root of your `nlp` git clone.
Also make sure that you changed to this branch:
```
git checkout add_datasets
```
1) the tfds datascript file should be converted to `nlp` style:
```
python nlp-cli convert --tfds_path <path/to/tensorflow_datasets/text/your_dataset_name>.py --nlp_directory datasets/nlp
```
This will convert the tdfs script and create a folder with the correct name.
2) the checksum file should be added. Use the command:
```
python nlp-cli test datasets/nlp/<your-dataset-folder> --save_checksums --all_configs
```
A checksums.txt file should be created in your folder and the structure should look as follows:
squad/
βββ squad.py/
βββ urls_checksums/
...........βββ checksums.txt
Delete the created `*.lock` file afterward - it should not be uploaded to AWS.
3) run black and isort on your newly added datascript files so that they look nice:
```
make style
```
4) the dummy data should be added. For this it might be useful to take a look into the structure of other examples as shown in the PR here and at `<path/to/tensorflow_datasets/testing/test_data/test_data/fake_examples>` whether the same data can be used.
5) the data can be uploaded to AWS using the command
```
aws s3 cp datasets/nlp/<your-dataset-folder> s3://datasets.huggingface.co/nlp/<your-dataset-folder> --recursive
```
6) check whether all works as expected using:
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-name>
```
and
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_<your-dataset-name>
```
7) push to this PR and rerun the circle ci workflow to check whether circle ci stays green.
8) Edit this commend and tick off your newly added dataset :-)
## TODO-list
Maybe we can add a TODO-list here for everybody that feels like adding new datasets so that we will not add the same datasets.
Here a link to available datasets: https://docs.google.com/spreadsheets/d/1zOtEqOrnVQwdgkC4nJrTY6d-Av02u0XFzeKAtBM2fUI/edit#gid=0
Patrick:
- [ ] boolq - *weird download link*
- [ ] c4 - *beam dataset* | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/37/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/37/timeline | null | null | 1 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/37",
"html_url": "https://github.com/huggingface/datasets/pull/37",
"diff_url": "https://github.com/huggingface/datasets/pull/37.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/37.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/36 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/36/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/36/comments | https://api.github.com/repos/huggingface/datasets/issues/36/events | https://github.com/huggingface/datasets/pull/36 | 611,528,349 | MDExOlB1bGxSZXF1ZXN0NDEyNjgwOTk1 | 36 | Metrics - refactoring, adding support for download and distributed metrics | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, this one seems to be ready to merge.",
"> Really cool, I love it! I would just raise a tiny point, the distributive version of the metrics might not work properly with TF because it is a different way to do, why not to add a \"framework\" detection and raise warning when TF is used, saying something like \"not available yet in TF switch to non distributive metric computation\".\r\n> \r\n> What do you think?\r\n\r\nGood point @jplu I'm not sure how you should do distributed metrics evaluation for TF.\r\nThere is only one python script, right?\r\nMaybe it's just the same as in the not-distributed case?",
"I think non-distributed case should work in TF for both cases indeed, but this needs to be tested."
] | 1,588,546,817,000 | 1,589,184,962,000 | 1,589,184,960,000 | MEMBER | null | Refactoring metrics to have a similar loading API than the datasets and improving the import system.
# Import system
The import system has ben upgraded. There are now three types of imports allowed:
1. `library` imports (identified as "absolute imports")
```python
import seqeval
```
=> we'll test all the imports before running the scripts and if one cannot be imported we'll display an error message like this one:
`ImportError: To be able to use this metric/dataset, you need to install the following dependencies ['seqeval'] using 'pip install seqeval' for instance'`
2. `internal` imports (identified as "relative imports")
```python
import .c4_utils
```
=> we'll assume this point to a file in the same directory/S3-directory as the main script and download this file.
2. `external` imports (identified as "relative imports" with a comment starting with `# From:`)
```python
from .nmt_bleu import compute_bleu # From: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py
```
=> we'll assume this point to the URL of a python script (if it's a link to a github file, we'll take the raw file automatically).
=> the script is downloaded and renamed to the import name (here above renamed from `bleu.py` to `nmt_bleu.py`). Renaming the file can be necessary if the distant file has the same name as the dataset/metric processing script. If you forgot to rename the distant script and it has the same name as the dataset/metric, you'll have an explicit error message asking to rename the import anyway.
# Hosting metrics
Metrics are hosted on a S3 bucket like the dataset processing scripts.
# Metrics scripts
Metrics scripts have a lot in common with datasets processing scripts. They also have a `metric.info` including citations, descriptions and links to relevant pages.
Metrics have more documentation to supply to ensure they are used well.
Four examples are already included for reference in [./metrics](./metrics): BLEU, ROUGE, SacreBLEU and SeqEVAL.
# Automatic support for distributed/multi-processing metric computation
We've also added support for automatic distributed/multi-processing metric computation (e.g. when using DistributedDataParallel). We leverage our own dataset format for smart caching in this case.
Here is a quick gist of a standard use of metrics (the simplest usage):
```python
import nlp
bleu_metric = nlp.load_metric('bleu')
# If you only have a single iteration, you can easily compute the score like this
predictions = model(inputs)
score = bleu_metric.compute(predictions, references)
# If you have a loop, you can "add" your predictions and references at each iteration instead of having to save them yourself (the metric object store them efficiently for you)
for batch in dataloader:
model_input, targets = batch
predictions = model(model_inputs)
bleu.add(predictions, targets)
score = bleu_metric.compute() # Compute the score from all the stored predictions/references
```
Here is a quick gist of a use in a distributed torch setup (should work for any python multi-process setup actually). It's pretty much identical to the second example above:
```python
import nlp
# You need to give the total number of parallel python processes (num_process) and the id of each process (process_id)
bleu = nlp.load_metric('bleu', process_id=torch.distributed.get_rank(),b num_process=torch.distributed.get_world_size())
for batch in dataloader:
model_input, targets = batch
predictions = model(model_inputs)
bleu.add(predictions, targets)
score = bleu_metric.compute() # Compute the score on the first node by default (can be set to compute on each node as well)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/36/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/36/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/36",
"html_url": "https://github.com/huggingface/datasets/pull/36",
"diff_url": "https://github.com/huggingface/datasets/pull/36.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/36.patch",
"merged_at": "2020-05-11T08:16:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/35 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/35/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/35/comments | https://api.github.com/repos/huggingface/datasets/issues/35/events | https://github.com/huggingface/datasets/pull/35 | 611,413,731 | MDExOlB1bGxSZXF1ZXN0NDEyNjAyMTc0 | 35 | [Tests] fix typo | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,512,229,000 | 1,588,512,261,000 | 1,588,512,260,000 | MEMBER | null | @lhoestq - currently the slow test fail with:
```
_____________________________________________________________________________________ DatasetTest.test_load_real_dataset_xnli _____________________________________________________________________________________
self = <tests.test_dataset_common.DatasetTest testMethod=test_load_real_dataset_xnli>, dataset_name = 'xnli'
@slow
def test_load_real_dataset(self, dataset_name):
with tempfile.TemporaryDirectory() as temp_data_dir:
> dataset = load(dataset_name, data_dir=temp_data_dir)
tests/test_dataset_common.py:153:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../python_bin/nlp/load.py:497: in load
dbuilder.download_and_prepare(**download_and_prepare_kwargs)
../../python_bin/nlp/builder.py:383: in download_and_prepare
self._download_and_prepare(dl_manager=dl_manager, download_config=download_config)
../../python_bin/nlp/builder.py:627: in _download_and_prepare
dl_manager=dl_manager, max_examples_per_split=download_config.max_examples_per_split,
../../python_bin/nlp/builder.py:431: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
../../python_bin/nlp/datasets/xnli/8bf4185a2da1ef2a523186dd660d9adcf0946189e7fa5942ea31c63c07b68a7f/xnli.py:95: in _split_generators
dl_dir = dl_manager.download_and_extract(_DATA_URL)
../../python_bin/nlp/utils/download_manager.py:246: in download_and_extract
return self.extract(self.download(url_or_urls))
../../python_bin/nlp/utils/download_manager.py:186: in download
self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
../../python_bin/nlp/utils/download_manager.py:166: in _record_sizes_checksums
self._recorded_sizes_checksums[url] = get_size_checksum(path)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = ('', '/tmp/tmpkajlg9yc/downloads/c0f7773c480a3f2d85639d777e0e17e65527460310d80760fd3fc2b2f2960556.c952a63cb17d3d46e412ceb7dbcd656ce2b15cc9ef17f50c28f81c48a7c853b5')
def get_size_checksum(path: str) -> Tuple[int, str]:
"""Compute the file size and the sha256 checksum of a file"""
m = sha256()
> with open(path, "rb") as f:
E TypeError: expected str, bytes or os.PathLike object, not tuple
../../python_bin/nlp/utils/checksums_utils.py:81: TypeError
```
- the checksums probably need to be updated no? And we should also think about how to write a test for the checksums. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/35/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/35/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/35",
"html_url": "https://github.com/huggingface/datasets/pull/35",
"diff_url": "https://github.com/huggingface/datasets/pull/35.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/35.patch",
"merged_at": "2020-05-03T13:24:20"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/34 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/34/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/34/comments | https://api.github.com/repos/huggingface/datasets/issues/34/events | https://github.com/huggingface/datasets/pull/34 | 611,385,516 | MDExOlB1bGxSZXF1ZXN0NDEyNTg0OTM0 | 34 | [Tests] add slow tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,503,682,000 | 1,588,508,310,000 | 1,588,508,309,000 | MEMBER | null | This PR adds a slow test that downloads the "real" dataset. The test is decorated as "slow" so that it will not automatically run on circle ci.
Before uploading a dataset, one should test that this test passes, manually by running
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_<your-dataset-script-name>
```
This PR should be merged after PR: #33 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/34/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/34/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/34",
"html_url": "https://github.com/huggingface/datasets/pull/34",
"diff_url": "https://github.com/huggingface/datasets/pull/34.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/34.patch",
"merged_at": "2020-05-03T12:18:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/33 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/33/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/33/comments | https://api.github.com/repos/huggingface/datasets/issues/33/events | https://github.com/huggingface/datasets/pull/33 | 611,052,081 | MDExOlB1bGxSZXF1ZXN0NDEyMzU1ODE0 | 33 | Big cleanup/refactoring for clean serialization | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Great! I think when this merged, we can merge sure that Circle Ci stays happy when uploading new datasets. "
] | 1,588,376,757,000 | 1,588,508,254,000 | 1,588,508,253,000 | MEMBER | null | This PR cleans many base classes to re-build them as `dataclasses`. We can thus use a simple serialization workflow for `DatasetInfo`, including it's `Features` and `SplitDict` based on `dataclasses` `asdict()`.
The resulting code is a lot shorter, can be easily serialized/deserialized, dataset info are human-readable and we can get rid of the `dataclass_json` dependency.
The scripts have breaking changes and the conversion tool is updated.
Example of dataset info in SQuAD script now:
```python
def _info(self):
return nlp.DatasetInfo(
description=_DESCRIPTION,
features=nlp.Features({
"id":
nlp.Value('string'),
"title":
nlp.Value('string'),
"context":
nlp.Value('string'),
"question":
nlp.Value('string'),
"answers":
nlp.Sequence({
"text": nlp.Value('string'),
"answer_start": nlp.Value('int32'),
}),
}),
# No default supervised_keys (as we have to pass both question
# and context as input).
supervised_keys=None,
homepage="https://rajpurkar.github.io/SQuAD-explorer/",
citation=_CITATION,
)
```
Example of serialized dataset info:
```bash
{
"description": "Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\n",
"citation": "@article{2016arXiv160605250R,\n author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},\n Konstantin and {Liang}, Percy},\n title = \"{SQuAD: 100,000+ Questions for Machine Comprehension of Text}\",\n journal = {arXiv e-prints},\n year = 2016,\n eid = {arXiv:1606.05250},\n pages = {arXiv:1606.05250},\narchivePrefix = {arXiv},\n eprint = {1606.05250},\n}\n",
"homepage": "https://rajpurkar.github.io/SQuAD-explorer/",
"license": "",
"features": {
"id": {
"dtype": "string",
"_type": "Value"
},
"title": {
"dtype": "string",
"_type": "Value"
},
"context": {
"dtype": "string",
"_type": "Value"
},
"question": {
"dtype": "string",
"_type": "Value"
},
"answers": {
"feature": {
"text": {
"dtype": "string",
"_type": "Value"
},
"answer_start": {
"dtype": "int32",
"_type": "Value"
}
},
"length": -1,
"_type": "Sequence"
}
},
"supervised_keys": null,
"name": "squad",
"version": {
"version_str": "1.0.0",
"description": "New split API (https://tensorflow.org/datasets/splits)",
"nlp_version_to_prepare": null,
"major": 1,
"minor": 0,
"patch": 0
},
"splits": {
"train": {
"name": "train",
"num_bytes": 79426386,
"num_examples": 87599,
"dataset_name": "squad"
},
"validation": {
"name": "validation",
"num_bytes": 10491883,
"num_examples": 10570,
"dataset_name": "squad"
}
},
"size_in_bytes": 0,
"download_size": 35142551,
"download_checksums": []
}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/33/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/33/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/33",
"html_url": "https://github.com/huggingface/datasets/pull/33",
"diff_url": "https://github.com/huggingface/datasets/pull/33.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/33.patch",
"merged_at": "2020-05-03T12:17:33"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/32 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/32/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/32/comments | https://api.github.com/repos/huggingface/datasets/issues/32/events | https://github.com/huggingface/datasets/pull/32 | 610,715,580 | MDExOlB1bGxSZXF1ZXN0NDEyMTAzMzIx | 32 | Fix map caching notebooks | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,334,126,000 | 1,588,508,158,000 | 1,588,508,157,000 | MEMBER | null | Previously, caching results with `.map()` didn't work in notebooks.
To reuse a result, `.map()` serializes the functions with `dill.dumps` and then it hashes it.
The problem is that when using `dill.dumps` to serialize a function, it also saves its origin (filename + line no.) and the origin of all the `globals` this function needs. However for notebooks and shells, the filename looks like \<ipython-input-13-9ed2afe61d25\> and the line no. changes often.
To fix the problem, I added a new dispatch function for code objects that ignore the origin of the code if it comes from a notebook or a python shell.
I tested these cases in a notebook:
- lambda functions
- named functions
- methods
- classmethods
- staticmethods
- classes that implement `__call__`
The caching now works as expected for all of them :)
I also tested the caching in the demo notebook and it works fine ! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/32/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/32/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/32",
"html_url": "https://github.com/huggingface/datasets/pull/32",
"diff_url": "https://github.com/huggingface/datasets/pull/32.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/32.patch",
"merged_at": "2020-05-03T12:15:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/31 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/31/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/31/comments | https://api.github.com/repos/huggingface/datasets/issues/31/events | https://github.com/huggingface/datasets/pull/31 | 610,677,641 | MDExOlB1bGxSZXF1ZXN0NDEyMDczNDE4 | 31 | [Circle ci] Install a virtual env before running tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,327,877,000 | 1,588,370,776,000 | 1,588,370,775,000 | MEMBER | null | Install a virtual env before running tests to not running into sudo issues when dynamically downloading files.
Same number of tests now pass / fail as on my local computer:
![Screenshot from 2020-05-01 12-14-44](https://user-images.githubusercontent.com/23423619/80798814-8a0a0a80-8ba5-11ea-8db8-599d33bbfccd.png)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/31/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/31/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/31",
"html_url": "https://github.com/huggingface/datasets/pull/31",
"diff_url": "https://github.com/huggingface/datasets/pull/31.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/31.patch",
"merged_at": "2020-05-01T22:06:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/30 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/30/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/30/comments | https://api.github.com/repos/huggingface/datasets/issues/30/events | https://github.com/huggingface/datasets/pull/30 | 610,549,072 | MDExOlB1bGxSZXF1ZXN0NDExOTY4Mzk3 | 30 | add metrics which require download files from github | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,306,402,000 | 1,664,875,918,000 | 1,589,185,194,000 | CONTRIBUTOR | null | To download files from github, I copied the `load_dataset_module` and its dependencies (without the builder) in `load.py` to `metrics/metric_utils.py`. I made the following changes:
- copy the needed files in a folder`metric_name`
- delete all other files that are not needed
For metrics that require an external import, I first create a `<metric_name>_imports.py` file which contains all external urls. Then I create a `<metric_name>.py` in which I will load the external files using `<metric_name>_imports.py` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/30/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/30/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/30",
"html_url": "https://github.com/huggingface/datasets/pull/30",
"diff_url": "https://github.com/huggingface/datasets/pull/30.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/30.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/29 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/29/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/29/comments | https://api.github.com/repos/huggingface/datasets/issues/29/events | https://github.com/huggingface/datasets/pull/29 | 610,243,997 | MDExOlB1bGxSZXF1ZXN0NDExNzIwODMx | 29 | Hf_api small changes | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok merging! I think it's good now"
] | 1,588,266,403,000 | 1,588,276,305,000 | 1,588,276,304,000 | MEMBER | null | From Patrick:
```python
from nlp import hf_api
api = hf_api.HfApi()
api.dataset_list()
```
works :-) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/29/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/29/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/29",
"html_url": "https://github.com/huggingface/datasets/pull/29",
"diff_url": "https://github.com/huggingface/datasets/pull/29.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/29.patch",
"merged_at": "2020-04-30T19:51:44"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/28 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/28/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/28/comments | https://api.github.com/repos/huggingface/datasets/issues/28/events | https://github.com/huggingface/datasets/pull/28 | 610,241,907 | MDExOlB1bGxSZXF1ZXN0NDExNzE5MTQy | 28 | [Circle ci] Adds circle ci config | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,266,215,000 | 1,588,276,269,000 | 1,588,276,268,000 | MEMBER | null | @thomwolf can you take a look and set up circle ci on:
https://app.circleci.com/projects/project-dashboard/github/huggingface
I think for `nlp` only admins can set it up, which I guess is you :-) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/28/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/28/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/28",
"html_url": "https://github.com/huggingface/datasets/pull/28",
"diff_url": "https://github.com/huggingface/datasets/pull/28.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/28.patch",
"merged_at": "2020-04-30T19:51:08"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/27 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/27/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/27/comments | https://api.github.com/repos/huggingface/datasets/issues/27/events | https://github.com/huggingface/datasets/pull/27 | 610,230,476 | MDExOlB1bGxSZXF1ZXN0NDExNzA5OTc0 | 27 | [Cleanup] Removes all files in testing except test_dataset_common | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,265,121,000 | 1,588,268,365,000 | 1,588,268,363,000 | MEMBER | null | As far as I know, all files in `tests` were old `tfds test files` so I removed them. We can still look them up on the other library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/27/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/27/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/27",
"html_url": "https://github.com/huggingface/datasets/pull/27",
"diff_url": "https://github.com/huggingface/datasets/pull/27.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/27.patch",
"merged_at": "2020-04-30T17:39:23"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/26 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/26/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/26/comments | https://api.github.com/repos/huggingface/datasets/issues/26/events | https://github.com/huggingface/datasets/pull/26 | 610,226,047 | MDExOlB1bGxSZXF1ZXN0NDExNzA2NjA2 | 26 | [Tests] Clean tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,264,709,000 | 1,588,277,524,000 | 1,588,277,523,000 | MEMBER | null | the abseil testing library (https://abseil.io/docs/python/quickstart.html) is better than the one I had before, so I decided to switch to that and changed the `setup.py` config file.
Abseil has more support and a cleaner API for parametrized testing I think.
I added a list of all dataset scripts that are currently on AWS, but will replace that once the
API is integrated into this lib.
One can now easily test for just a single function for a single dataset with:
`tests/test_dataset_common.py::DatasetTest::test_load_dataset_wikipedia`
NOTE: This PR is rebased on PR #29 so should be merged after. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/26/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/26/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/26",
"html_url": "https://github.com/huggingface/datasets/pull/26",
"diff_url": "https://github.com/huggingface/datasets/pull/26.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/26.patch",
"merged_at": "2020-04-30T20:12:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/25 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/25/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/25/comments | https://api.github.com/repos/huggingface/datasets/issues/25/events | https://github.com/huggingface/datasets/pull/25 | 609,708,863 | MDExOlB1bGxSZXF1ZXN0NDExMjQ4Nzg2 | 25 | Add script csv datasets | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Very interesting thoughts, we should think deeper about all what you raised indeed.",
"Ok here is a proposal for a more general API and workflow.\r\n\r\n# New `ArrowBasedBuilder`\r\n\r\nFor all the formats that can be directly and efficiently loaded by Arrow (CSV, JSON, Parquet, Arrow), we don't really want to have to go through a conversion to python and back to Arrow. This new builder has a `_generate_tables` method to yield `Arrow.Tables` instead of single examples.\r\nThe tables can be directly casted in Arrow so it's not necessary to supply `Features`, they can be deduced from the `Table` column.\r\n\r\n# Central role of the `BuilderConfig` to store all the arguments necessary for the Dataset creation.\r\n \r\n`BuilderConfig` provide a few defaults fields `name`, `version`, `description`, `data_files` and `data_dir` which can be used to store values necessary for the creation of the dataset. It can be freely extended to store additional information (see the example for `CsvConfig`).\r\n\r\nOn the contrary, `DatasetInfo` is designed as an organized and delimited information storage class with predefined fields.\r\n\r\n`DatasetInfo` now store two names:\r\n- `builder_name`: Name of the builder script used to create the dataset\r\n- `config_name`: Name of the configuration used to create the dataset.\r\n\r\n# Refactoring `load()` arguments and all the chain of processing including the `DownloadManager`\r\n\r\n`load()` now accept a selection of arguments which are used to update the `BuilderConfig` and some kwargs which are used to handle the download process.\r\n\r\nSupplying a `BuilderConfig` as `config` will override the config provided in the dataset. Supplying a `str` will get the associated config from the dataset. Default is to fetch the first config of the dataset.\r\n\r\nGiving additional arguments to `load()` will override the arguments in the `BuilderConfig`.\r\n\r\n# CSV script\r\n\r\nThe `csv.py` script is provided as an example, usage is:\r\n```python\r\nbbc = nlp.load('/Users/thomwolf/Documents/GitHub/datasets/datasets/nlp/csv',\r\n name='bbc',\r\n version=\"1.0.1\",\r\n split='train',\r\n data_files={'train': ['/Users/thomwolf/Documents/GitHub/datasets/datasets/dummy_data/csv/test.csv']},\r\n skip_rows=10,\r\n download_mode='force_redownload')\r\n```\r\n\r\n# Checksums\r\n\r\nWe now don't raise an error if the checksum file is not found.\r\n\r\n# `DownloadConfig`\r\n\r\nWe now have a download configuration class to handle all the specific arguments for file caching like proxies, using only local files or user-agents.",
"Ok merging this for now.\r\n\r\nOne general note is that it's a bit hard to handle the `ClassLabel` generally in both `nlp` and `Arrow` since a class label typically need some metadata for the class names. For now, I raise a `NotImplementedError` when an `ArrowBuilder` output a table with a `DictionaryType` is encountered (which could be a simple equivalent for a `ClassLabel` Feature in Arrow tables).\r\n\r\nIn general and if we need this in the future for some Beam Datasets for instance, I think we should use one of the `metadata` fields in the `Arrow` type or table's schema to store the relation with indices and class names.\r\n\r\nSo ping me if you meet Beam datasets which uses `ClassLabels` (cc @lhoestq @patrickvonplaten @mariamabarham)."
] | 1,588,235,288,000 | 1,664,875,933,000 | 1,588,886,089,000 | CONTRIBUTOR | null | This is a PR allowing to create datasets from local CSV files. A usage might be:
```python
import nlp
ds = nlp.load(
path="csv",
name="bbc",
dataset_files={
nlp.Split.TRAIN: ["datasets/dummy_data/csv/train.csv"],
nlp.Split.TEST: [""datasets/dummy_data/csv/test.csv""]
},
csv_kwargs={
"skip_rows": 0,
"delimiter": ",",
"quote_char": "\"",
"header_as_column_names": True
}
)
```
```
Downloading and preparing dataset bbc/1.0.0 (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/jplu/.cache/huggingface/datasets/bbc/1.0.0...
Dataset bbc downloaded and prepared to /home/jplu/.cache/huggingface/datasets/bbc/1.0.0. Subsequent calls will reuse this data.
{'test': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 49), 'train': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 99), 'validation': Dataset(schema: {'category': 'string', 'text': 'string'}, num_rows: 0)}
```
How it is read:
- `path`: the `csv` word means "I want to create a CSV dataset"
- `name`: the name of this dataset is `bbc`
- `dataset_files`: this is a dictionary where each key is the list of files corresponding to the key split.
- `csv_kwargs`: this is the keywords arguments to "explain" how to read the CSV files
* `skip_rows`: number of rows have to be skipped, starting from the beginning of the file
* `delimiter`: which delimiter is used to separate the columns
* `quote_char`: which quote char is used to represent a column where the delimiter appears in one of them
* `header_as_column_names`: will use the first row (header) of the file as name for the features. Otherwise the names will be automatically generated as `f1`, `f2`, etc... Will be applied after the `skip_rows` parameter.
**TODO**: for now the `csv.py` is copied each time we create a new dataset as `ds_name.py`, this behavior will be modified to have only the `csv.py` script copied only once and not for all the CSV datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/25/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/25/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/25",
"html_url": "https://github.com/huggingface/datasets/pull/25",
"diff_url": "https://github.com/huggingface/datasets/pull/25.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/25.patch",
"merged_at": "2020-05-07T21:14:49"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/24 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/24/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/24/comments | https://api.github.com/repos/huggingface/datasets/issues/24/events | https://github.com/huggingface/datasets/pull/24 | 609,064,987 | MDExOlB1bGxSZXF1ZXN0NDEwNzE5MTU0 | 24 | Add checksums | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks good to me :-) \r\n\r\nJust would prefer to get rid of the `_DYNAMICALLY_IMPORTED_MODULE` attribute and replace it by a `get_imported_module()` function. Maybe there is something I'm not seeing here though - what do you think? ",
"> * I'm not sure I understand the general organization of checksums. I see we have a checksum folder with potentially several checksum files but I also see that checksum files can potentially contain several checksums. Could you explain a bit more how this is organized?\r\n\r\nIt should look like this:\r\nsquad/\r\nβββ squad.py/\r\nβββ urls_checksums/\r\n...........βββ checksums.txt\r\n\r\nIn checksums.txt, the format is one line per (url, size, checksum)\r\n\r\nI don't have a strong opinion between `urls_checksums/checksums.txt` or directly `checksums.txt` (not inside the `urls_checksums` folder), let me know what you think.\r\n\r\n\r\n> * Also regarding your comment on checksum files for \"canonical\" datasets. I understand we can just create these with `nlp-cli test` and then upload them manually to our S3, right?\r\n\r\nYes you're right",
"Update of the commands:\r\n\r\n- nlp-cli test \\<dataset\\> : Run download_and_prepare and verify checksums\r\n * --name \\<name\\> : run only for the name\r\n * --all_configs : run for all configs\r\n * --save_checksums : instead of verifying checksums, compute and save them\r\n * --ignore_checksums : don't do checksums verification\r\n\r\n- nlp-cli upload \\<dataset_folder\\> : Upload a dataset\r\n * --upload_checksums : compute and upload checksums for uploaded files\r\n\r\nTODO:\r\n- don't overwrite checksums files on S3, to let the user upload a dataset in several steps if needed\r\n\r\nQuestion:\r\n- One idea from @patrickvonplaten : shall we upload checksums everytime we upload files ? (and therefore remove the upload_checksums parameter)",
"Ok, ready to merge, then @lhoestq ?",
"Yep :)"
] | 1,588,167,449,000 | 1,588,276,370,000 | 1,588,276,369,000 | MEMBER | null | ### Checksums files
They are stored next to the dataset script in urls_checksums/checksums.txt.
They are used to check the integrity of the datasets downloaded files.
I kept the same format as tensorflow-datasets.
There is one checksums file for all configs.
### Load a dataset
When you do `load("squad")`, it will also download the checksums file and put it next to the script in nlp/datasets/hash/urls_checksums/checksums.txt.
It also verifies that the downloaded files checksums match the expected ones.
You can ignore checksum tests with `load("squad", ignore_checksums=True)` (under the hood it just adds `ignore_checksums=True` in the `DownloadConfig`)
### Test a dataset
There is a new command `nlp-cli test squad` that runs `download_and_prepare` to see if it runs ok, and that verifies that all the checksums match. Allowed arguments are `--name`, `--all_configs`, `--ignore_checksums` and `--register_checksums`.
### Register checksums
1. If the dataset has external dataset files
The command `nlp-cli test squad --register_checksums --all_configs` runs `download_and_prepare` on all configs to see if it runs ok, and it creates the checksums file.
You can also register one config at a time using `--name` instead ; the checksums file will be completed and not overwritten.
If the script is a local script, the checksum file is moved to urls_checksums/checksums.txt next to the local script, to enable the user to upload both the script and the checksums file afterwards with `nlp-cli upload squad`.
2. If the dataset files are all inside the directory of the dataset script
The user can directly do `nlp-cli upload squad --register_checksums`, as there is no need to download anything.
In this case however, all the dataset must be uploaded at once.
--
PS : it doesn't allow to register checksums for canonical datasets, the file has to be added manually on S3 for now (I guess ?)
Also I feel like we must be sure that this processes would not constrain too much any user from uploading its dataset.
Let me know what you think :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/24/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/24/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/24",
"html_url": "https://github.com/huggingface/datasets/pull/24",
"diff_url": "https://github.com/huggingface/datasets/pull/24.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/24.patch",
"merged_at": "2020-04-30T19:52:49"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/23 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/23/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/23/comments | https://api.github.com/repos/huggingface/datasets/issues/23/events | https://github.com/huggingface/datasets/pull/23 | 608,508,706 | MDExOlB1bGxSZXF1ZXN0NDEwMjczOTU2 | 23 | Add metrics | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,096,925,000 | 1,664,875,916,000 | 1,589,185,178,000 | CONTRIBUTOR | null | This PR is a draft for adding metrics (sacrebleu and seqeval are added)
use case examples:
`import nlp`
**sacrebleu:**
```
refs = [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']]
sys = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.']
sacrebleu = nlp.load_metrics('sacrebleu')
print(sacrebleu.score)
```
**seqeval:**
```
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
seqeval = nlp.load_metrics('seqeval')
print(seqeval.accuracy_score(y_true, y_pred)
print(seqeval.f1_score(y_true, y_pred)
```
_examples are taken from the corresponding web page_
your comments and suggestions are more than welcomed
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/23/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/23/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/23",
"html_url": "https://github.com/huggingface/datasets/pull/23",
"diff_url": "https://github.com/huggingface/datasets/pull/23.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/23.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/22 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/22/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/22/comments | https://api.github.com/repos/huggingface/datasets/issues/22/events | https://github.com/huggingface/datasets/pull/22 | 608,298,586 | MDExOlB1bGxSZXF1ZXN0NDEwMTAyMjU3 | 22 | adding bleu score code | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,588,078,850,000 | 1,588,096,100,000 | 1,588,096,088,000 | CONTRIBUTOR | null | this PR add the BLEU score metric to the lib. It can be tested by running the following code.
` from nlp.metrics import bleu
hyp1 = "It is a guide to action which ensures that the military always obeys the commands of the party"
ref1a = "It is a guide to action that ensures that the military forces always being under the commands of the party "
ref1b = "It is the guiding principle which guarantees the military force always being under the command of the Party"
ref1c = "It is the practical guide for the army always to heed the directions of the party"
list_of_references = [[ref1a, ref1b, ref1c]]
hypotheses = [hyp1]
bleu = bleu.bleu_score(list_of_references, hypotheses,4, smooth=True)
print(bleu) ` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/22/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/22/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/22",
"html_url": "https://github.com/huggingface/datasets/pull/22",
"diff_url": "https://github.com/huggingface/datasets/pull/22.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/22.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/21 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/21/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/21/comments | https://api.github.com/repos/huggingface/datasets/issues/21/events | https://github.com/huggingface/datasets/pull/21 | 607,914,185 | MDExOlB1bGxSZXF1ZXN0NDA5Nzk2MTM4 | 21 | Cleanup Features - Updating convert command - Fix Download manager | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"For conflicts, I think the mention hint \"This should be modified because it mentions ...\" is missing.",
"Looks great!"
] | 1,588,029,415,000 | 1,588,325,387,000 | 1,588,325,386,000 | MEMBER | null | This PR makes a number of changes:
# Updating `Features`
Features are a complex mechanism provided in `tfds` to be able to modify a dataset on-the-fly when serializing to disk and when loading from disk.
We don't really need this because (1) it hides too much from the user and (2) our datatype can be directly mapped to Arrow tables on drive so we usually don't need to change the format before/after serialization.
This PR extracts and refactors these features in a single `features.py` files. It still keep a number of features classes for easy compatibility with tfds, namely the `Sequence`, `Tensor`, `ClassLabel` and `Translation` features.
Some more complex features involving a pre-processing on-the-fly during serialization are kept:
- `ClassLabel` which are able to convert from label strings to integers,
- `Translation`which does some check on the languages.
# Updating the `convert` command
We do a few updates here
- following the simplification of the `features` (cf above), conversion are updated
- we also makes it simpler to convert a single file
- some code need to be fixed manually after conversion (e.g. to remove some encoding processing in former tfds `Text` features. We highlight this code with a "git merge conflict" style syntax for easy manual fixing.
# Fix download manager iterator
You kept me up quite late on Tuesday night with this `os.scandir` change @lhoestq ;-)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/21/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/21/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/21",
"html_url": "https://github.com/huggingface/datasets/pull/21",
"diff_url": "https://github.com/huggingface/datasets/pull/21.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/21.patch",
"merged_at": "2020-05-01T09:29:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/20 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/20/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/20/comments | https://api.github.com/repos/huggingface/datasets/issues/20/events | https://github.com/huggingface/datasets/pull/20 | 607,313,557 | MDExOlB1bGxSZXF1ZXN0NDA5MzEyMDI1 | 20 | remove boto3 and promise dependencies | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587,973,185,000 | 1,588,003,457,000 | 1,587,996,945,000 | MEMBER | null | With the new download manager, we don't need `promise` anymore.
I also removed `boto3` as in [this pr](https://github.com/huggingface/transformers/pull/3968) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/20/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/20/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/20",
"html_url": "https://github.com/huggingface/datasets/pull/20",
"diff_url": "https://github.com/huggingface/datasets/pull/20.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/20.patch",
"merged_at": "2020-04-27T14:15:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/19 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/19/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/19/comments | https://api.github.com/repos/huggingface/datasets/issues/19/events | https://github.com/huggingface/datasets/pull/19 | 606,400,645 | MDExOlB1bGxSZXF1ZXN0NDA4NjIwMjUw | 19 | Replace tf.constant for TF | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Awesome!"
] | 1,587,742,326,000 | 1,588,152,428,000 | 1,587,849,525,000 | CONTRIBUTOR | null | Replace simple tf.constant type of Tensor to tf.ragged.constant which allows to have examples of different size in a tf.data.Dataset.
Now the training works with TF. Here the same example than for the PT in collab:
```python
import tensorflow as tf
import nlp
from transformers import BertTokenizerFast, TFBertForQuestionAnswering
# Load our training dataset and tokenizer
train_dataset = nlp.load('squad', split="train[:1%]")
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
def get_correct_alignement(context, answer):
start_idx = answer['answer_start'][0]
text = answer['text'][0]
end_idx = start_idx + len(text)
if context[start_idx:end_idx] == text:
return start_idx, end_idx # When the gold label position is good
elif context[start_idx-1:end_idx-1] == text:
return start_idx-1, end_idx-1 # When the gold label is off by one character
elif context[start_idx-2:end_idx-2] == text:
return start_idx-2, end_idx-2 # When the gold label is off by two character
else:
raise ValueError()
# Tokenize our training dataset
def convert_to_features(example_batch):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = list(zip(example_batch['context'], example_batch['question']))
encodings = tokenizer.batch_encode_plus(input_pairs, pad_to_max_length=True)
# Compute start and end tokens for labels using Transformers's fast tokenizers alignement methods.
start_positions, end_positions = [], []
for i, (context, answer) in enumerate(zip(example_batch['context'], example_batch['answers'])):
start_idx, end_idx = get_correct_alignement(context, answer)
start_positions.append([encodings.char_to_token(i, start_idx)])
end_positions.append([encodings.char_to_token(i, end_idx-1)])
if start_positions and end_positions:
encodings.update({'start_positions': start_positions,
'end_positions': end_positions})
return encodings
train_dataset = train_dataset.map(convert_to_features, batched=True)
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_dataset[x] for x in columns[:3]}
labels = {"output_1": train_dataset["start_positions"]}
labels["output_2"] = train_dataset["end_positions"]
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
model = TFBertForQuestionAnswering.from_pretrained("bert-base-cased")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)
opt = tf.keras.optimizers.Adam(learning_rate=3e-5)
model.compile(optimizer=opt,
loss={'output_1': loss_fn, 'output_2': loss_fn},
loss_weights={'output_1': 1., 'output_2': 1.},
metrics=['accuracy'])
model.fit(tfdataset, epochs=1, steps_per_epoch=3)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/19/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/19/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/19",
"html_url": "https://github.com/huggingface/datasets/pull/19",
"diff_url": "https://github.com/huggingface/datasets/pull/19.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/19.patch",
"merged_at": "2020-04-25T21:18:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/18 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/18/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/18/comments | https://api.github.com/repos/huggingface/datasets/issues/18/events | https://github.com/huggingface/datasets/pull/18 | 606,109,196 | MDExOlB1bGxSZXF1ZXN0NDA4Mzg0MTc3 | 18 | Updating caching mechanism - Allow dependency in dataset processing scripts - Fix style and quality in the repo | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LGTM"
] | 1,587,713,988,000 | 1,588,174,048,000 | 1,588,089,988,000 | MEMBER | null | This PR has a lot of content (might be hard to review, sorry, in particular because I fixed the style in the repo at the same time).
# Style & quality:
You can now install the style and quality tools with `pip install -e .[quality]`. This will install black, the compatible version of sort and flake8.
You can then clean the style and check the quality before merging your PR with:
```bash
make style
make quality
```
# Allow dependencies in dataset processing scripts
We can now allow (some level) of imports in dataset processing scripts (in addition to PyPi imports).
Namely, you can do the two following things:
Import from a relative path to a file in the same folder as the dataset processing script:
```python
import .c4_utils
```
Or import from a relative path to a file in a folder/archive/github repo to which you provide an URL after the import state with `# From: [URL]`:
```python
import .clicr.dataset_code.build_json_dataset # From: https://github.com/clips/clicr
```
In both these cases, after downloading the main dataset processing script, we will identify the location of these dependencies, download them and copy them in the dataset processing script folder.
Note that only direct import in the dataset processing script will be handled.
We don't recursively explore the additional import to download further files.
Also, when we download from an additional directory (in the second case above), we recursively add `__init__.py` to all the sub-folder so you can import from them.
This part is still tested for now. If you've seen datasets which required external utilities, tell me and I can test it.
# Update the cache to have a better local structure
The local structure in the `src/datasets` folder is now: `src/datasets/DATASET_NAME/DATASET_HASH/*`
The hash is computed from the full code of the dataset processing script as well as all the local and downloaded dependencies as mentioned above. This way if you change some code in a utility related to your dataset, a new hash should be computed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/18/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/18/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/18",
"html_url": "https://github.com/huggingface/datasets/pull/18",
"diff_url": "https://github.com/huggingface/datasets/pull/18.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/18.patch",
"merged_at": "2020-04-28T16:06:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/17 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/17/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/17/comments | https://api.github.com/repos/huggingface/datasets/issues/17/events | https://github.com/huggingface/datasets/pull/17 | 605,753,027 | MDExOlB1bGxSZXF1ZXN0NDA4MDk3NjM0 | 17 | Add Pandas as format type | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587,666,014,000 | 1,588,010,870,000 | 1,588,010,868,000 | CONTRIBUTOR | null | As detailed in the title ^^ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/17/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/17/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/17",
"html_url": "https://github.com/huggingface/datasets/pull/17",
"diff_url": "https://github.com/huggingface/datasets/pull/17.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/17.patch",
"merged_at": "2020-04-27T18:07:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/16 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/16/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/16/comments | https://api.github.com/repos/huggingface/datasets/issues/16/events | https://github.com/huggingface/datasets/pull/16 | 605,661,462 | MDExOlB1bGxSZXF1ZXN0NDA4MDIyMTUz | 16 | create our own DownloadManager | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks great to me! ",
"The new download manager is ready. I removed the old folder and I fixed a few remaining dependencies.\r\nI tested it on squad and a few others from the dataset folder and it works fine.\r\n\r\nThe only impact of these changes is that it breaks the `download_and_prepare` script that was used to register the checksums when we create a dataset, as the checksum logic is not implemented.\r\n\r\nLet me know if you have remarks",
"Ok merged it (a bit fast for you to update the copyright, now I see that. but it's ok, we'll do a pass on these doc/copyright before releasing anyway)",
"Actually two additional things here @lhoestq (I merged too fast sorry, let's make a new PR for additional developments):\r\n- I think we can remove some dependencies now (e.g. `promises`) in setup.py, can you have a look?\r\n- also, I think we can remove the boto3 dependency like here: https://github.com/huggingface/transformers/pull/3968"
] | 1,587,658,087,000 | 1,620,239,124,000 | 1,587,849,910,000 | MEMBER | null | I tried to create our own - and way simpler - download manager, by replacing all the complicated stuff with our own `cached_path` solution.
With this implementation, I tried `dataset = nlp.load('squad')` and it seems to work fine.
For the implementation, what I did exactly:
- I copied the old download manager
- I removed all the dependences to the old `download` files
- I replaced all the download + extract calls by calls to `cached_path`
- I removed unused parameters (extract_dir, compute_stats) (maybe compute_stats could be re-added later if we want to compute stats...)
- I left some functions unimplemented for now. We will probably have to implement them because they are used by some datasets scripts (download_kaggle_data, iter_archive) or because we may need them at some point (download_checksums, _record_sizes_checksums)
Let me know if you think that this is going the right direction or if you have remarks.
Note: I didn't write any test yet as I wanted to read your remarks first | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/16/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/16/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/16",
"html_url": "https://github.com/huggingface/datasets/pull/16",
"diff_url": "https://github.com/huggingface/datasets/pull/16.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/16.patch",
"merged_at": "2020-04-25T21:25:10"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/15 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/15/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/15/comments | https://api.github.com/repos/huggingface/datasets/issues/15/events | https://github.com/huggingface/datasets/pull/15 | 604,906,708 | MDExOlB1bGxSZXF1ZXN0NDA3NDEwOTk3 | 15 | [Tests] General Test Design for all dataset scripts | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> I think I'm fine with this.\r\n> \r\n> The alternative would be to host a small subset of the dataset on the S3 together with the testing script. But I think having all (test file creation + actual tests) in one file is actually quite convenient.\r\n> \r\n> Good for me!\r\n> \r\n> One question though, will we have to create one test file for each of the 100+ datasets or could we make some automatic conversion from tfds dataset test files?\r\n\r\nI think if we go the way shown in the PR we would have to create one test file for each of the 100+ datasets. \r\n\r\nAs far as I know the tfds test files all rely on the user having created a special download folder structure in `tensorflow-datasets/tensorflow_datasets/testing/test_data/fake_examples`. \r\n\r\nMy hypothesis was: \r\nBecasue, we don't want to work with PRs, no `dataset_script` is going to be in the official repo, so no `dataset_script_test` can be in the repo either. Therefore we can also not have any \"fake\" test folder structure in the repo. \r\n\r\n**BUT:** As you mentioned @thom, we could have a fake data structure on AWS. To add a test the user has to upload multiple small test files when uploading his data set script. \r\n\r\nSo for a cli this could look like:\r\n`python nlp-cli upload <data_set_script> --testfiles <relative path to test file 1> <relative path to test file 2> ...` \r\n\r\nor even easier if the user just creates the dataset folder with the script inside and the testing folder structure, then the API could look like:\r\n\r\n`python nlp-cli upload <path/to/dataset/folder>`\r\n\r\nand the dataset folder would look like\r\n```\r\nsquad\r\n- squad.py\r\n- fake_data # this dir would have to have the exact same structure we get when downloading from the official squad data url\r\n```\r\n\r\nThis way I think we wouldn't even need any test files at all for each dataset script. For special datasets like `c4` or `wikipedia` we could then allow to optionally upload another test script. \r\nWe just assume that this is our downloaded `url` and check all functionality from there. \r\n\r\nThinking a bit more about this solution sounds a) much less work and b) even easier for the user.\r\n\r\nA small problem I see here though:\r\n1) What do we do when the depending on the config name the downloaded folder structure is very different? I think for each dataset config name we should have one test, which could correspond to one \"fake\" folder structure on AWS\r\n\r\n@thomwolf What do you think? I would actually go for this solution instead now.\r\n@mariamabarham You have written many more tfds dataset scripts and tests than I have - what do you think? \r\n\r\n",
"Regarding the tfds tests, I don't really see a point in keeping them because:\r\n\r\n1) If you provide a fake data structure, IMO there is no need for each dataset to have an individual test file because (I think) most datasets have the same functions `_split_generators` and `_generate_examples` for which you can just test the functionality in a common test file. For special functions like these beam / pipeline functionality you probably need an extra test file. But @mariamabarham I think you have seen more than I have here as well \r\n\r\n2) The dataset test design is very much intertwined with the download manager design and contains a lot of code. I would like to seperate the tests into a) tests for downloading in general b) tests for post download data set pre-processing. Since we are going to change the download code anyways quite a lot, my plan was to focus on b) first. ",
"I like the idea of having a fake data folder on S3. I have seen datasets with nested compressed files structures that would be tedious to generate with code. And for users it is probably easier to create a fake data folder by taking a subset of the actual data, and then upload it as you said.",
"> > I think I'm fine with this.\r\n> > The alternative would be to host a small subset of the dataset on the S3 together with the testing script. But I think having all (test file creation + actual tests) in one file is actually quite convenient.\r\n> > Good for me!\r\n> > One question though, will we have to create one test file for each of the 100+ datasets or could we make some automatic conversion from tfds dataset test files?\r\n> \r\n> I think if we go the way shown in the PR we would have to create one test file for each of the 100+ datasets.\r\n> \r\n> As far as I know the tfds test files all rely on the user having created a special download folder structure in `tensorflow-datasets/tensorflow_datasets/testing/test_data/fake_examples`.\r\n> \r\n> My hypothesis was:\r\n> Becasue, we don't want to work with PRs, no `dataset_script` is going to be in the official repo, so no `dataset_script_test` can be in the repo either. Therefore we can also not have any \"fake\" test folder structure in the repo.\r\n> \r\n> **BUT:** As you mentioned @thom, we could have a fake data structure on AWS. To add a test the user has to upload multiple small test files when uploading his data set script.\r\n> \r\n> So for a cli this could look like:\r\n> `python nlp-cli upload <data_set_script> --testfiles <relative path to test file 1> <relative path to test file 2> ...`\r\n> \r\n> or even easier if the user just creates the dataset folder with the script inside and the testing folder structure, then the API could look like:\r\n> \r\n> `python nlp-cli upload <path/to/dataset/folder>`\r\n> \r\n> and the dataset folder would look like\r\n> \r\n> ```\r\n> squad\r\n> - squad.py\r\n> - fake_data # this dir would have to have the exact same structure we get when downloading from the official squad data url\r\n> ```\r\n> \r\n> This way I think we wouldn't even need any test files at all for each dataset script. For special datasets like `c4` or `wikipedia` we could then allow to optionally upload another test script.\r\n> We just assume that this is our downloaded `url` and check all functionality from there.\r\n> \r\n> Thinking a bit more about this solution sounds a) much less work and b) even easier for the user.\r\n> \r\n> A small problem I see here though:\r\n> \r\n> 1. What do we do when the depending on the config name the downloaded folder structure is very different? I think for each dataset config name we should have one test, which could correspond to one \"fake\" folder structure on AWS\r\n> \r\n> @thomwolf What do you think? I would actually go for this solution instead now.\r\n> @mariamabarham You have written many more tfds dataset scripts and tests than I have - what do you think?\r\n\r\nI'm agreed with you just one thing, for some dataset like glue or xtreme you may have multiple datasets in it. so I think a good way is to have one main fake folder and a subdirectory for each dataset inside",
"> Regarding the tfds tests, I don't really see a point in keeping them because:\r\n> \r\n> 1. If you provide a fake data structure, IMO there is no need for each dataset to have an individual test file because (I think) most datasets have the same functions `_split_generators` and `_generate_examples` for which you can just test the functionality in a common test file. For special functions like these beam / pipeline functionality you probably need an extra test file. But @mariamabarham I think you have seen more than I have here as well\r\n> 2. The dataset test design is very much intertwined with the download manager design and contains a lot of code. I would like to seperate the tests into a) tests for downloading in general b) tests for post download data set pre-processing. Since we are going to change the download code anyways quite a lot, my plan was to focus on b) first.\r\n\r\nFor _split_generator, yes. But I'm not sure for _generate_examples because there is lots of things that should be taken into account such as feature names and types, data format (json, jsonl, csv, tsv,..)",
"Sounds good to me!\r\n\r\nWhen testing, we could thus just override the prefix in the URL inside the download manager to have them point to the test directory on our S3.\r\n\r\nCc @lhoestq ",
"Ok, here is a second draft for the testing structure. \r\n\r\nI think the big difficulty here is \"How can you generate tests on the fly from a given dataset name, *e.g.* `squad`\"?\r\n\r\nSo, this morning I did some research on \"parameterized testing\" and pure `unittest` or `pytest` didn't work very well. \r\nI found the lib https://github.com/wolever/parameterized, which works very nicely for our use case I think. \r\n@thomwolf - would it be ok to have a dependence on this lib for `nlp`? It seems like a light-weight lib to me. \r\n\r\nThis lib allows to add a `parameterization` decorator to a `unittest.TestCase` class so that the class can be instantiated for multiple different arguments (which are the dataset names `squad` etc. in our case).\r\n\r\nWhat I like about this lib is that one only has to add the decorator and the each of the parameterized tests are shown, like this: \r\n\r\n![Screenshot from 2020-04-24 15-13-14](https://user-images.githubusercontent.com/23423619/80216326-2bd9a680-863e-11ea-8a0f-460976f5309c.png)\r\n\r\nWith this structure we would only have to upload the dummy data for each dataset and would not require a specific testing file. \r\n\r\nWhat do you think @thomwolf @mariamabarham @lhoestq ?",
"I think this is a nice solution.\r\n\r\nDo you think we could have the `parametrized` dependency in a `[test]` optional installation of `setup.py`? I would really like to keep the dependencies of the standard installation as small as possible. ",
"> I think this is a nice solution.\r\n> \r\n> Do you think we could have the `parametrized` dependency in a `[test]` optional installation of `setup.py`? I would really like to keep the dependencies of the standard installation as small as possible.\r\n\r\nYes definitely!",
"UPDATE: \r\n\r\nThis test design is ready now. I added dummy data to S3 for the dataests: `squad, crime_and_punish, sentiment140` . The structure can be seen on `https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/squad/dummy/?region=us-east-1&tab=overview` for `squad`. \r\n\r\nAll dummy data files have to be in .zip format and called `dummy_data.zip`. The zip file should thereby have the exact same folder structure one gets from downloading the \"real\" data url(s). \r\n\r\nTo show how the .zip file looks like for the added datasets, I added the folder `nlp/datasets/dummy_data` in this PR. I think we can leave for the moment so that people can see better how to add dummy data tests and later delete it like `nlp/datasets/nlp`."
] | 1,587,573,961,000 | 1,664,875,914,000 | 1,587,998,882,000 | MEMBER | null | The general idea is similar to how testing is done in `transformers`. There is one general `test_dataset_common.py` file which has a `DatasetTesterMixin` class. This class implements all of the logic that can be used in a generic way for all dataset classes. The idea is to keep each individual dataset test file as minimal as possible.
In order to test whether the specific data set class can download the data and generate the examples **without** downloading the actual data all the time, a MockDataLoaderManager class is used which receives a `mock_folder_structure_fn` function from each individual dataset test file that create "fake" data and which returns the same folder structure that would have been created when using the real data downloader. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/15/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/15/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/15",
"html_url": "https://github.com/huggingface/datasets/pull/15",
"diff_url": "https://github.com/huggingface/datasets/pull/15.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/15.patch",
"merged_at": "2020-04-27T14:48:02"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/14 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/14/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/14/comments | https://api.github.com/repos/huggingface/datasets/issues/14/events | https://github.com/huggingface/datasets/pull/14 | 604,761,315 | MDExOlB1bGxSZXF1ZXN0NDA3MjkzNjU5 | 14 | [Download] Only create dir if not already exist | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587,562,371,000 | 1,664,875,910,000 | 1,587,630,453,000 | MEMBER | null | This was quite annoying to find out :D.
Some datasets have save in the same directory. So we should only create a new directory if it doesn't already exist. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/14/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/14/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/14",
"html_url": "https://github.com/huggingface/datasets/pull/14",
"diff_url": "https://github.com/huggingface/datasets/pull/14.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/14.patch",
"merged_at": "2020-04-23T08:27:33"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/13 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/13/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/13/comments | https://api.github.com/repos/huggingface/datasets/issues/13/events | https://github.com/huggingface/datasets/pull/13 | 604,547,951 | MDExOlB1bGxSZXF1ZXN0NDA3MTIxMjkw | 13 | [Make style] | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this can be quickly reproduced. \r\nI use `black, version 19.10b0`. \r\n\r\nWhen running: \r\n`black nlp/src/arrow_reader.py` \r\nit gives me: \r\n\r\n```\r\nerror: cannot format /home/patrick/hugging_face/nlp/src/nlp/arrow_reader.py: cannot use --safe with this file; failed to parse source file. AST error message: invalid syntax (<unknown>, line 78)\r\nOh no! π₯ π π₯\r\n1 file failed to reformat.\r\n```\r\n\r\nThe line in question is: \r\nhttps://github.com/huggingface/nlp/blob/6922a16705e61f9e31a365f2606090b84d49241f/src/nlp/arrow_reader.py#L78\r\n\r\nWhat is weird is that the trainer file in `transformers` has more or less the same syntax and black does not fail there: \r\nhttps://github.com/huggingface/transformers/blob/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d/src/transformers/trainer.py#L95\r\n\r\nI googled quite a bit about black & typing hints yesterday and didn't find anything useful. \r\nAny ideas @thomwolf @julien-c @LysandreJik ?",
"> I think this can be quickly reproduced.\r\n> I use `black, version 19.10b0`.\r\n> \r\n> When running:\r\n> `black nlp/src/arrow_reader.py`\r\n> it gives me:\r\n> \r\n> ```\r\n> error: cannot format /home/patrick/hugging_face/nlp/src/nlp/arrow_reader.py: cannot use --safe with this file; failed to parse source file. AST error message: invalid syntax (<unknown>, line 78)\r\n> Oh no! π₯ π π₯\r\n> 1 file failed to reformat.\r\n> ```\r\n> \r\n> The line in question is:\r\n> https://github.com/huggingface/nlp/blob/6922a16705e61f9e31a365f2606090b84d49241f/src/nlp/arrow_reader.py#L78\r\n> \r\n> What is weird is that the trainer file in `transformers` has more or less the same syntax and black does not fail there:\r\n> https://github.com/huggingface/transformers/blob/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d/src/transformers/trainer.py#L95\r\n> \r\n> I googled quite a bit about black & typing hints yesterday and didn't find anything useful.\r\n> Any ideas @thomwolf @julien-c @LysandreJik ?\r\n\r\nOk I found the problem. It was the one Julien mentioned and has nothing to do with this line. Black's error message is a bit misleading here, I guess",
"Ok, just had to remove the python 2 syntax comments `# type`. \r\n\r\nGood to merge for me now @thomwolf "
] | 1,587,543,006,000 | 1,664,875,911,000 | 1,587,646,942,000 | MEMBER | null | Added Makefile and applied make style to all.
make style runs the following code:
```
style:
black --line-length 119 --target-version py35 src
isort --recursive src
```
It's the same code that is run in `transformers`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/13/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/13/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/13",
"html_url": "https://github.com/huggingface/datasets/pull/13",
"diff_url": "https://github.com/huggingface/datasets/pull/13.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/13.patch",
"merged_at": "2020-04-23T13:02:22"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/12 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/12/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/12/comments | https://api.github.com/repos/huggingface/datasets/issues/12/events | https://github.com/huggingface/datasets/pull/12 | 604,518,583 | MDExOlB1bGxSZXF1ZXN0NDA3MDk3MzA4 | 12 | [Map Function] add assert statement if map function does not return dict or None | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also added to an assert statement that if a dict is returned by function, all values of `dicts` are `lists`",
"Wait to merge until `make style` is set in place.",
"Updated the assert statements. Played around with multiple cases and it should be good now IMO. "
] | 1,587,540,084,000 | 1,664,875,913,000 | 1,587,709,743,000 | MEMBER | null | IMO, if a function is provided that is not a print statement (-> returns variable of type `None`) or a function that updates the datasets (-> returns variable of type `dict`), then a `TypeError` should be raised.
Not sure whether you had cases in mind where the user should do something else @thomwolf , but I think a lot of silent errors can be avoided with this assert statement. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/12/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/12/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/12",
"html_url": "https://github.com/huggingface/datasets/pull/12",
"diff_url": "https://github.com/huggingface/datasets/pull/12.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/12.patch",
"merged_at": "2020-04-24T06:29:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/11 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/11/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/11/comments | https://api.github.com/repos/huggingface/datasets/issues/11/events | https://github.com/huggingface/datasets/pull/11 | 603,921,624 | MDExOlB1bGxSZXF1ZXN0NDA2NjExODk2 | 11 | [Convert TFDS to HFDS] Extend script to also allow just converting a single file | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587,468,333,000 | 1,664,875,906,000 | 1,587,502,020,000 | MEMBER | null | Adds another argument to be able to convert only a single file | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/11/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/11/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/11",
"html_url": "https://github.com/huggingface/datasets/pull/11",
"diff_url": "https://github.com/huggingface/datasets/pull/11.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/11.patch",
"merged_at": "2020-04-21T20:47:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/10 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/10/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/10/comments | https://api.github.com/repos/huggingface/datasets/issues/10/events | https://github.com/huggingface/datasets/pull/10 | 603,909,327 | MDExOlB1bGxSZXF1ZXN0NDA2NjAxNzQ2 | 10 | Name json file "squad.json" instead of "squad.py.json" | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587,467,068,000 | 1,664,875,904,000 | 1,587,502,086,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/10/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/10/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/10",
"html_url": "https://github.com/huggingface/datasets/pull/10",
"diff_url": "https://github.com/huggingface/datasets/pull/10.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/10.patch",
"merged_at": "2020-04-21T20:48:06"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/9 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/9/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/9/comments | https://api.github.com/repos/huggingface/datasets/issues/9/events | https://github.com/huggingface/datasets/pull/9 | 603,894,874 | MDExOlB1bGxSZXF1ZXN0NDA2NTkwMDQw | 9 | [Clean up] Datasets | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes!"
] | 1,587,465,596,000 | 1,664,875,902,000 | 1,587,502,198,000 | MEMBER | null | Clean up `nlp/datasets` folder.
As I understood, eventually the `nlp/datasets` shall not exist anymore at all.
The folder `nlp/datasets/nlp` is kept for the moment, but won't be needed in the future, since it will live on S3 (actually it already does) at: `https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/?region=us-east-1` and the different `dataset downloader scripts will be added to `nlp/src/nlp` when downloaded by the user.
The folder `nlp/datasets/checksums` is kept for now, but won't be needed anymore in the future.
The remaining folders/ files are leftovers from tensorflow-datasets and are not needed. The can be looked up in the private tensorflow-dataset repo. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/9/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/9/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/9",
"html_url": "https://github.com/huggingface/datasets/pull/9",
"diff_url": "https://github.com/huggingface/datasets/pull/9.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/9.patch",
"merged_at": "2020-04-21T20:49:58"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/8 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8/comments | https://api.github.com/repos/huggingface/datasets/issues/8/events | https://github.com/huggingface/datasets/pull/8 | 601,783,243 | MDExOlB1bGxSZXF1ZXN0NDA0OTg0NDUz | 8 | Fix issue 6: error when the citation is missing in the DatasetInfo | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587,110,666,000 | 1,588,152,431,000 | 1,587,389,052,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8",
"html_url": "https://github.com/huggingface/datasets/pull/8",
"diff_url": "https://github.com/huggingface/datasets/pull/8.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8.patch",
"merged_at": "2020-04-20T13:24:12"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/7 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7/comments | https://api.github.com/repos/huggingface/datasets/issues/7/events | https://github.com/huggingface/datasets/pull/7 | 601,780,534 | MDExOlB1bGxSZXF1ZXN0NDA0OTgyMzA2 | 7 | Fix issue 5: allow empty datasets | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,587,110,396,000 | 1,588,152,433,000 | 1,587,389,028,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7",
"html_url": "https://github.com/huggingface/datasets/pull/7",
"diff_url": "https://github.com/huggingface/datasets/pull/7.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7.patch",
"merged_at": "2020-04-20T13:23:47"
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/6 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6/comments | https://api.github.com/repos/huggingface/datasets/issues/6/events | https://github.com/huggingface/datasets/issues/6 | 600,330,836 | MDU6SXNzdWU2MDAzMzA4MzY= | 6 | Error when citation is not given in the DatasetInfo | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes looks good to me.\r\nNote that we may refactor quite strongly the `info.py` to make it a lot simpler (it's very complicated for basically a dictionary of info I think)",
"No, problem ^^ It might just be a temporary fix :)",
"Fixed."
] | 1,586,960,094,000 | 1,588,152,202,000 | 1,588,152,202,000 | CONTRIBUTOR | null | The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__
citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
AttributeError: 'NoneType' object has no attribute 'strip'
```
I propose to do the following change in the `info.py` file. The method:
```python
def __repr__(self):
splits_pprint = _indent("\n".join(["{"] + [
" '{}': {},".format(k, split.num_examples)
for k, split in sorted(self.splits.items())
] + ["}"]))
features_pprint = _indent(repr(self.features))
citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
return INFO_STR.format(
name=self.name,
version=self.version,
description=self.description,
total_num_examples=self.splits.total_num_examples,
features=features_pprint,
splits=splits_pprint,
citation=citation_pprint,
homepage=self.homepage,
supervised_keys=self.supervised_keys,
# Proto add a \n that we strip.
license=str(self.license).strip())
```
Becomes:
```python
def __repr__(self):
splits_pprint = _indent("\n".join(["{"] + [
" '{}': {},".format(k, split.num_examples)
for k, split in sorted(self.splits.items())
] + ["}"]))
features_pprint = _indent(repr(self.features))
## the strip is done only is the citation is given
citation_pprint = self.citation
if self.citation:
citation_pprint = _indent('"""{}"""'.format(self.citation.strip()))
return INFO_STR.format(
name=self.name,
version=self.version,
description=self.description,
total_num_examples=self.splits.total_num_examples,
features=features_pprint,
splits=splits_pprint,
citation=citation_pprint,
homepage=self.homepage,
supervised_keys=self.supervised_keys,
# Proto add a \n that we strip.
license=str(self.license).strip())
```
And now it is ok. @thomwolf are you ok with this fix? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5/comments | https://api.github.com/repos/huggingface/datasets/issues/5/events | https://github.com/huggingface/datasets/issues/5 | 600,295,889 | MDU6SXNzdWU2MDAyOTU4ODk= | 5 | ValueError when a split is empty | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"To fix this I propose to modify only the file `arrow_reader.py` with few updates. First update, the following method:\r\n```python\r\ndef _make_file_instructions_from_absolutes(\r\n name,\r\n name2len,\r\n absolute_instructions,\r\n):\r\n \"\"\"Returns the files instructions from the absolute instructions list.\"\"\"\r\n # For each split, return the files instruction (skip/take)\r\n file_instructions = []\r\n num_examples = 0\r\n for abs_instr in absolute_instructions:\r\n length = name2len[abs_instr.splitname]\r\n if not length:\r\n raise ValueError(\r\n 'Split empty. This might means that dataset hasn\\'t been generated '\r\n 'yet and info not restored from GCS, or that legacy dataset is used.')\r\n filename = filename_for_dataset_split(\r\n dataset_name=name,\r\n split=abs_instr.splitname,\r\n filetype_suffix='arrow')\r\n from_ = 0 if abs_instr.from_ is None else abs_instr.from_\r\n to = length if abs_instr.to is None else abs_instr.to\r\n num_examples += to - from_\r\n single_file_instructions = [{\"filename\": filename, \"skip\": from_, \"take\": to - from_}]\r\n file_instructions.extend(single_file_instructions)\r\n return FileInstructions(\r\n num_examples=num_examples,\r\n file_instructions=file_instructions,\r\n )\r\n```\r\nBecomes:\r\n```python\r\ndef _make_file_instructions_from_absolutes(\r\n name,\r\n name2len,\r\n absolute_instructions,\r\n):\r\n \"\"\"Returns the files instructions from the absolute instructions list.\"\"\"\r\n # For each split, return the files instruction (skip/take)\r\n file_instructions = []\r\n num_examples = 0\r\n for abs_instr in absolute_instructions:\r\n length = name2len[abs_instr.splitname]\r\n ## Delete the if not length and the raise\r\n filename = filename_for_dataset_split(\r\n dataset_name=name,\r\n split=abs_instr.splitname,\r\n filetype_suffix='arrow')\r\n from_ = 0 if abs_instr.from_ is None else abs_instr.from_\r\n to = length if abs_instr.to is None else abs_instr.to\r\n num_examples += to - from_\r\n single_file_instructions = [{\"filename\": filename, \"skip\": from_, \"take\": to - from_}]\r\n file_instructions.extend(single_file_instructions)\r\n return FileInstructions(\r\n num_examples=num_examples,\r\n file_instructions=file_instructions,\r\n )\r\n```\r\n\r\nSecond update the following method:\r\n```python\r\ndef _read_files(files, info):\r\n \"\"\"Returns Dataset for given file instructions.\r\n\r\n Args:\r\n files: List[dict(filename, skip, take)], the files information.\r\n The filenames contain the absolute path, not relative.\r\n skip/take indicates which example read in the file: `ds.slice(skip, take)`\r\n \"\"\"\r\n pa_batches = []\r\n for f_dict in files:\r\n pa_table: pa.Table = _get_dataset_from_filename(f_dict)\r\n pa_batches.extend(pa_table.to_batches())\r\n pa_table = pa.Table.from_batches(pa_batches)\r\n ds = Dataset(arrow_table=pa_table, data_files=files, info=info)\r\n return ds\r\n```\r\nBecomes:\r\n```python\r\ndef _read_files(files, info):\r\n \"\"\"Returns Dataset for given file instructions.\r\n\r\n Args:\r\n files: List[dict(filename, skip, take)], the files information.\r\n The filenames contain the absolute path, not relative.\r\n skip/take indicates which example read in the file: `ds.slice(skip, take)`\r\n \"\"\"\r\n pa_batches = []\r\n for f_dict in files:\r\n pa_table: pa.Table = _get_dataset_from_filename(f_dict)\r\n pa_batches.extend(pa_table.to_batches())\r\n ## we modify the table only if there are some batches\r\n if pa_batches:\r\n pa_table = pa.Table.from_batches(pa_batches)\r\n ds = Dataset(arrow_table=pa_table, data_files=files, info=info)\r\n return ds\r\n```\r\n\r\nWith these two updates it works now. @thomwolf are you ok with this changes?",
"Yes sounds good to me!\r\nDo you want to make a PR? or I can do it as well",
"Fixed."
] | 1,586,957,113,000 | 1,588,152,185,000 | 1,588,152,185,000 | CONTRIBUTOR | null | When a split is empty either TEST, VALIDATION or TRAIN I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs)
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 587, in as_dataset
datasets = utils.map_nested(build_single_dataset, split, map_tuple=True)
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in map_nested
for k, v in data_struct.items()
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 158, in <dictcomp>
for k, v in data_struct.items()
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 601, in _build_single_dataset
split=split,
File "/home/jplu/dev/jplu/datasets/src/nlp/builder.py", line 625, in _as_dataset
split_infos=self.info.splits.values(),
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 200, in read
return py_utils.map_nested(_read_instruction_to_ds, instructions)
File "/home/jplu/dev/jplu/datasets/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 191, in _read_instruction_to_ds
file_instructions = make_file_instructions(name, split_infos, instruction)
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 104, in make_file_instructions
absolute_instructions=absolute_instructions,
File "/home/jplu/dev/jplu/datasets/src/nlp/arrow_reader.py", line 122, in _make_file_instructions_from_absolutes
'Split empty. This might means that dataset hasn\'t been generated '
ValueError: Split empty. This might means that dataset hasn't been generated yet and info not restored from GCS, or that legacy dataset is used.
```
How to reproduce:
```python
import csv
import nlp
class Bbc(nlp.GeneratorBasedBuilder):
VERSION = nlp.Version("1.0.0")
def __init__(self, **config):
self.train = config.pop("train", None)
self.validation = config.pop("validation", None)
super(Bbc, self).__init__(**config)
def _info(self):
return nlp.DatasetInfo(builder=self, description="bla", features=nlp.features.FeaturesDict({"id": nlp.int32, "text": nlp.string, "label": nlp.string}))
def _split_generators(self, dl_manager):
return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": self.train}),
nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": self.validation}),
nlp.SplitGenerator(name=nlp.Split.TEST, gen_kwargs={"filepath": None})]
def _generate_examples(self, filepath):
if not filepath:
return None, {}
with open(filepath) as f:
reader = csv.reader(f, delimiter=',', quotechar="\"")
lines = list(reader)[1:]
for idx, line in enumerate(lines):
yield idx, {"id": idx, "text": line[1], "label": line[0]}
```
```python
import nlp
dataset = nlp.load("bbc", builder_kwargs={"train": "bbc/data/train.csv", "validation": "bbc/data/test.csv"})
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4/comments | https://api.github.com/repos/huggingface/datasets/issues/4/events | https://github.com/huggingface/datasets/issues/4 | 600,185,417 | MDU6SXNzdWU2MDAxODU0MTc= | 4 | [Feature] Keep the list of labels of a dataset as metadata | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes! I see mostly two options for this:\r\n- a `Feature` approach like currently (but we might deprecate features)\r\n- wrapping in a smart way the Dictionary arrays of Arrow: https://arrow.apache.org/docs/python/data.html?highlight=dictionary%20encode#dictionary-arrays",
"I would have a preference for the second bullet point.",
"This should be accessible now as a feature in dataset.info.features (and even have the mapping methods).",
"Perfect! Well done!!",
"Hi,\r\nI hope we could get a better documentation.\r\nIt took me more than 1 hour to found this way to get the label information.",
"Yes we are working on the doc right now, should be in the next release quite soon."
] | 1,586,945,830,000 | 1,594,227,586,000 | 1,588,572,717,000 | CONTRIBUTOR | null | It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3/comments | https://api.github.com/repos/huggingface/datasets/issues/3/events | https://github.com/huggingface/datasets/issues/3 | 600,180,050 | MDU6SXNzdWU2MDAxODAwNTA= | 3 | [Feature] More dataset outputs | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yes!\r\n- pandas will be a one-liner in `arrow_dataset`: https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.to_pandas\r\n- for Spark I have no idea. let's investigate that at some point",
"For Spark it looks to be pretty straightforward as well https://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html but looks to be having a dependency to Spark is necessary, then nevermind we can skip it",
"Now Pandas is available."
] | 1,586,945,294,000 | 1,588,572,747,000 | 1,588,572,747,000 | CONTRIBUTOR | null | Add the following dataset outputs:
- Spark
- Pandas | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2/comments | https://api.github.com/repos/huggingface/datasets/issues/2/events | https://github.com/huggingface/datasets/issues/2 | 599,767,671 | MDU6SXNzdWU1OTk3Njc2NzE= | 2 | Issue to read a local dataset | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"My first bug report β€οΈ\r\nLooking into this right now!",
"Ok, there are some news, most good than bad :laughing: \r\n\r\nThe dataset script now became:\r\n```python\r\nimport csv\r\n\r\nimport nlp\r\n\r\n\r\nclass Bbc(nlp.GeneratorBasedBuilder):\r\n VERSION = nlp.Version(\"1.0.0\")\r\n\r\n def __init__(self, **config):\r\n self.train = config.pop(\"train\", None)\r\n self.validation = config.pop(\"validation\", None)\r\n super(Bbc, self).__init__(**config)\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(builder=self, description=\"bla\", features=nlp.features.FeaturesDict({\"id\": nlp.int32, \"text\": nlp.string, \"label\": nlp.string}))\r\n\r\n def _split_generators(self, dl_manager):\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"filepath\": self.train}),\r\n nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={\"filepath\": self.validation})]\r\n\r\n def _generate_examples(self, filepath):\r\n with open(filepath) as f:\r\n reader = csv.reader(f, delimiter=',', quotechar=\"\\\"\")\r\n lines = list(reader)[1:]\r\n\r\n for idx, line in enumerate(lines):\r\n yield idx, {\"id\": idx, \"text\": line[1], \"label\": line[0]}\r\n\r\n```\r\n\r\nAnd the dataset folder becomes:\r\n```\r\n.\r\nβββ bbc\r\nβ βββ bbc.py\r\nβ βββ data\r\nβ βββ test.csv\r\nβ βββ train.csv\r\n```\r\nI can load the dataset by using the keywords arguments like this:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc/data/train.csv\", \"validation\": \"bbc/data/test.csv\"})\r\n```\r\n\r\nThat was the good part ^^ Because it took me some time to understand that the script itself is put in cache in `datasets/src/nlp/datasets/some-hash/bbc.py` which is very difficult to discover without checking the source code. It means that doesn't matter the changes you do to your original script it is taken into account. I think instead of doing a hash on the name (I suppose it is the name), a hash on the content of the script itself should be a better solution.\r\n\r\nThen by diving a bit in the code I found the `force_reload` parameter [here](https://github.com/huggingface/datasets/blob/master/src/nlp/load.py#L50) but the call of this `load_dataset` method is done with the `builder_kwargs` as seen [here](https://github.com/huggingface/datasets/blob/master/src/nlp/load.py#L166) which is ok until the call to the builder is done as the builder do not have this `force_reload` parameter. To show as example, the previous load becomes:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc/data/train.csv\", \"validation\": \"bbc/data/test.csv\", \"force_reload\": True})\r\n```\r\nRaises\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/jplu/dev/jplu/datasets/src/nlp/load.py\", line 283, in load\r\n dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)\r\n File \"/home/jplu/dev/jplu/datasets/src/nlp/load.py\", line 170, in builder\r\n builder_instance = builder_cls(**builder_kwargs)\r\n File \"/home/jplu/dev/jplu/datasets/src/nlp/datasets/84d638d2a8ca919d1021a554e741766f50679dc6553d5a0612b6094311babd39/bbc.py\", line 12, in __init__\r\n super(Bbc, self).__init__(**config)\r\nTypeError: __init__() got an unexpected keyword argument 'force_reload'\r\n```\r\nSo yes the cache is refreshed with the new script but then raises this error.",
"Ok great, so as discussed today, let's:\r\n- have a main dataset directory inside the lib with sub-directories hashed by the content of the file\r\n- keep a cache for downloading the scripts from S3 for now\r\n- later: add methods to list and clean the local versions of the datasets (and the distant versions on S3 as well)\r\n\r\nSide question: do you often use `builder_kwargs` for other things than supplying file paths? I was thinking about having a more easy to read and remember `data_files` argument maybe.",
"Good plan!\r\n\r\nYes I do use `builder_kwargs` for other things such as:\r\n- dataset name\r\n- properties to know how to properly read a CSV file: do I have to skip the first line in a CSV, which delimiter is used, and the columns ids to use.\r\n- properties to know how to properly read a JSON file: which properties in a JSON object to read",
"Done!"
] | 1,586,888,331,000 | 1,589,223,323,000 | 1,589,223,322,000 | CONTRIBUTOR | null | Hello,
As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset, the script I have done is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):
def __init__(self, **kwargs):
super(BbcConfig, self).__init__(**kwargs)
class Bbc(nlp.GeneratorBasedBuilder):
_DIR = "./data"
_DEV_FILE = "test.csv"
_TRAINING_FILE = "train.csv"
BUILDER_CONFIGS = [BbcConfig(name="bbc", version=nlp.Version("1.0.0"))]
def _info(self):
return nlp.DatasetInfo(builder=self, features=nlp.features.FeaturesDict({"id": nlp.string, "text": nlp.string, "label": nlp.string}))
def _split_generators(self, dl_manager):
files = {"train": os.path.join(self._DIR, self._TRAINING_FILE), "dev": os.path.join(self._DIR, self._DEV_FILE)}
return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": files["dev"]})]
def _generate_examples(self, filepath):
with open(filepath) as f:
reader = csv.reader(f, delimiter=',', quotechar="\"")
lines = list(reader)[1:]
for idx, line in enumerate(lines):
yield idx, {"idx": idx, "text": line[1], "label": line[0]}
```
The dataset is attached to this issue as well:
[data.zip](https://github.com/huggingface/datasets/files/4476928/data.zip)
Now the steps to reproduce what I would like to do:
1. unzip data locally (I know the nlp lib can detect and extract archives but I want to reduce and facilitate the reproduction as much as possible)
2. create the `bbc.py` script as above at the same location than the unziped `data` folder.
Now I try to load the dataset in three different ways and none works, the first one with the name of the dataset like I would do with TFDS:
```python
import nlp
from bbc import Bbc
dataset = nlp.load("bbc")
```
I get:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
builder_cls = load_dataset(path, name=name, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 88, in load_dataset
local_files_only=local_files_only,
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/utils/file_utils.py", line 214, in cached_path
if not is_zipfile(output_path) and not tarfile.is_tarfile(output_path):
File "/opt/anaconda3/envs/transformers/lib/python3.7/zipfile.py", line 203, in is_zipfile
with open(filename, "rb") as fp:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
But @thomwolf told me that no need to import the script, just put the path of it, then I tried three different way to do:
```python
import nlp
dataset = nlp.load("bbc.py")
```
And
```python
import nlp
dataset = nlp.load("./bbc.py")
```
And
```python
import nlp
dataset = nlp.load("/absolute/path/to/bbc.py")
```
These three ways gives me:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
builder_cls = load_dataset(path, name=name, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 124, in load_dataset
dataset_module = importlib.import_module(module_path)
File "/opt/anaconda3/envs/transformers/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'nlp.datasets.2fd72627d92c328b3e9c4a3bf7ec932c48083caca09230cebe4c618da6e93688.bbc'
```
Any idea of what I'm missing? or I might have spot a bug :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1/comments | https://api.github.com/repos/huggingface/datasets/issues/1/events | https://github.com/huggingface/datasets/pull/1 | 599,457,467 | MDExOlB1bGxSZXF1ZXN0NDAzMDk1NDYw | 1 | changing nlp.bool to nlp.bool_ | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,586,859,482,000 | 1,664,875,900,000 | 1,586,865,700,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1/timeline | null | null | 0 | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1",
"html_url": "https://github.com/huggingface/datasets/pull/1",
"diff_url": "https://github.com/huggingface/datasets/pull/1.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1.patch",
"merged_at": "2020-04-14T12:01:40"
} | true |